Tuesday, 28 May 2024

Creating a Continuous Integration (CI) Pipeline in Azure DevOps (LAB)

One of the major points of doing microservices and containers is avoiding the whole “it works on my machine” problem. While containers are a vehicle for achieving this, we also need a vehicle for carrying the container from the developer workstation to other machines. In this guide that vehicle is Azure DevOps, the artist formerly known as Visual Studio Team Services (VSTS).

DevOps as a term encompasses more than just the technical pieces, but this guide will focus on a few isolated components. The first part is Continuous Integration (CI), which handles building the code; the second part is Continuous Deployment (CD), which is about deploying the code built by CI. CD will be covered in the next section.

Let’s start by creating a new Dockerfile specifically for CI. Add a new file called Dockerfile.CI with the following contents:

FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "AKS-Web-App.dll"]

You will notice that Visual Studio arranges it as a child node under the existing Dockerfile.

We want to use Helm for packaging and deploying, and for this we need a Helm chart. Create a default chart by executing the command below.
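As a sketch (the chart name aksdotnetcoder is an assumption, chosen to match the image repository and ingress host used in the values.yaml shown further down):

helm create aksdotnetcoder

This scaffolds a folder containing Chart.yaml, a default values.yaml, and a templates directory with the deployment and service templates.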

There is an extension for Visual Studio, currently in preview, called Visual Studio Tools for Kubernetes that automates this step for you. It is worth checking out, but for a deeper understanding this guide does not use it.

Why Helm is needed might not be clear at this point, but it adds a few extra configuration abilities. The following is a recap of the configuration files for a service:

  • Dockerfiles. These describe a single container with low-level details like the base image, exposed ports, etc.
  • Docker Compose files. These describe a collection of containers that logically belong together, for example a database container and an API container.
  • Helm charts. These typically describe additional metadata for the services, like the external URL, the number of replicas, etc.

While it is not a requirement to use all three levels of configuration, it does make some things easier.

The default Helm chart will not actually deploy the code you have in your VS solution, but an nginx container instead, so a few adjustments are needed. Helm charts have a templating system, but the important parts are in the values.yaml file. A simple file for this service would look like this:

# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: aksdotnetacr.azurecr.io/aksdotnetcoder
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
  path: /
  hosts:
    - aksdotnetcoder
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
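To sanity-check the edited chart locally before checking in, something like the following can be used (the chart path is an assumption based on where helm create was run):

helm lint ./aksdotnetcoder
helm template ./aksdotnetcoder

helm lint flags chart errors, and helm template renders the manifests locally so you can verify that the repository and tag from values.yaml actually end up in the deployment.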

Check in your code and return to the Azure DevOps Portal.

If you go to Pipelines => Build you will have a rather blank space, so click New pipeline to get started.
Since the code is already checked into Azure Repos, that is the logical choice. In the next step choose Empty job as the template. There are default templates for Kubernetes, but there is little visibility into what goes into them before firing them up, so this guide builds the pipeline manually from scratch.
Give the pipeline a name and select Hosted Linux as the agent pool.
Start by adding two Docker tasks.
The first Docker task builds the image. Select the Azure subscription and the Azure Container Registry created previously. Check the boxes as seen in the image, and make sure to add $(Build.BuildId) as an additional image tag. If this task looks different for you, the Docker task version may have defaulted to 1.* whereas this sample uses 0.*.
The second Docker task is about pushing the image to ACR.
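Behind the scenes the two tasks boil down to something like the following commands (a sketch; the exact arguments the tasks generate differ, $(Build.BuildId) is the Azure DevOps variable referenced above, and the tasks handle the registry login for you):

docker build -f Dockerfile.CI -t aksdotnetacr.azurecr.io/aksdotnetcoder:$(Build.BuildId) .
docker push aksdotnetacr.azurecr.io/aksdotnetcoder:$(Build.BuildId)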
Next up is adding the necessary Helm tasks. Add one Helm tool installer task and two Package and deploy Helm charts tasks.
Version 2.12.3 is the latest at the time of writing, but the installer will automatically check for new versions each time it runs.
The first Helm task is a helm init.
The second Helm task is helm package.
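Roughly speaking, the two tasks perform the equivalent of the following (a sketch; the --client-only flag, chart path, and destination are assumptions, and $(Build.ArtifactStagingDirectory) is the standard Azure DevOps variable):

helm init --client-only
helm package ./aksdotnetcoder --destination $(Build.ArtifactStagingDirectory)

helm init prepares the Helm home directory on the build agent, and helm package produces the .tgz that the publish step below picks up.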
To wrap up the build process it is necessary to add a Publish Build Artifacts task.
To simplify things during deploy add $(Build.BuildId) to the artifact name.
This should make the final CI Pipeline similar to the following screenshot:
To finish the CI process click Save & Queue to trigger a build.
Add some comments if you like, or just click Save & Queue.
If everything passes with flying colors it should be all green checkmarks.
If the results are satisfactory move on to building the CD pipeline by clicking Release in the header above the output.

Creating a Continuous Deployment (CD) Pipeline in Azure DevOps

A dev might be more than happy to see the CI pipeline finish in the green, but code that compiles isn’t worth much if you’re not able to deploy it, so the next part is about building a second pipeline to take care of that.

Go to Pipelines => Releases and create a new pipeline, this time also choosing the Empty job template.

Leave Stage 1 with the defaults.
Since you started from a build pipeline, you will already have an artifact step in addition to a stage.
Make sure that the artifacts look right and are linked to the build pipeline.
Go to Stage 1 and add one Helm tool installer task and two Package and deploy Helm charts tasks.
Install the latest helm version.
The first Helm task is a helm init. This has a twist though: instead of using the Azure Resource Manager connection type (used in the CI pipeline), it is required to use a Kubernetes Service Connection. (This is due to RBAC being enabled.) The screenshot shows the result, but initially you need to click +New to create the connection.
Choose Service account as authentication method.
The server URL can be found by browsing to the AKS deployment in the Azure Portal and copying the API server address. Make sure to prefix with https:// when pasting it in.
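If you prefer the command line to clicking through the portal, the same address can be retrieved with the Azure CLI (resource group and cluster names are placeholders):

az aks show --resource-group <resource-group> --name <cluster-name> --query fqdn -o tsv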

To acquire the token and certificate you need to run the two kubectl commands listed below. The first prints the tiller service account, including the name of its secret; substitute that name for the tiller-token-xyz placeholder in the second:
kubectl get -n kube-system serviceaccount tiller -o yaml
kubectl get -n kube-system secret tiller-token-xyz -o yaml

The second kubectl command will give you an output with two Base64-encoded strings containing the token and the certificate. Copy and paste these into the form and hit OK.
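If you want to inspect the values in decoded form while troubleshooting, the individual fields can be extracted like this (a sketch assuming the default secret layout):

kubectl get -n kube-system secret tiller-token-xyz -o jsonpath='{.data.token}' | base64 --decode
kubectl get -n kube-system secret tiller-token-xyz -o jsonpath='{.data.ca\.crt}' | base64 --decode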


Note: the UI shows the kubectl command without the namespace (kube-system), which means that running it as-is will give you an error saying the service account cannot be found.

You can then define the second Helm task, which performs the helm upgrade. Reuse the Kubernetes Service Connection from the previous task. Make sure you choose File path as the Chart Type and that the path ends with /**/*.tgz.
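Conceptually this task ends up running something like the following (the release name and resolved chart filename are assumptions; the /**/*.tgz pattern matches the package produced by the CI pipeline):

helm upgrade --install aksdotnetcoder ./drop/aksdotnetcoder-0.1.0.tgz

The --install flag makes the same command work for both the first deployment and subsequent upgrades.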


Hit Save followed by Release.

Make sure the Version aligns with your last build.

The UI should indicate the progress of the deployment.

Let it work its magic and watch things proceed to Succeeded.

You can click the task log and peek inside the helm upgrade log to see kubectl output for verification.

Jumping back to the command line, you can run kubectl get all to verify things outside the Azure DevOps UI. The output confirms a working CD pipeline.


We’ve come a long way, but there are a few loose threads. These will be tied up in the next, and concluding, part.
