Tuesday, 28 May 2024

Creating a Continuous Integration (CI) Pipeline in Azure DevOps (LAB)

 

Creating a Continuous Integration (CI) Pipeline in Azure DevOps

One of the major points of doing microservices and containers is avoiding the whole “it works on my machine” concept. While containers are a vehicle for achieving this, we also need a vehicle for carrying the container from the developer workstation to other machines. In this guide that vehicle is Azure DevOps, the artist formerly known as Visual Studio Team Services (VSTS).

DevOps as a term encompasses more than just the technical pieces, but this guide will focus only on a few isolated components. The first part is Continuous Integration (CI) which handles building of the code, and the second part is Continuous Deployment (CD) which is about deploying the code built by CI. CD will be covered in the next section.

Let’s start by creating a new Dockerfile specifically for CI. Add a new file called Dockerfile.CI with the following contents:

FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:2.2-sdk
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "AKS-Web-App.dll"]
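
If you want to sanity-check the new Dockerfile locally before wiring it into a pipeline, a plain docker build along these lines should do (the image tag is just an example):

docker build -f Dockerfile.CI -t aksdotnetcoder:local-test .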

You will notice that Visual Studio arranges it as a child node under the existing Dockerfile.

We want to use Helm for packaging and deploying, and for this we need a Helm chart. Create a default chart by executing the command below:
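
The chart name in the command is an assumption, based on the naming used elsewhere in this guide:

helm create aksdotnetcoder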

It is worth mentioning that there is a Visual Studio extension currently in preview, called Visual Studio Tools for Kubernetes, that will automate this step for you. It is worth checking out, but for a deeper understanding this guide does not use the extension.

Why Helm is needed might not be clear at this point, but it adds a few extra configuration abilities. The following is a recap of the configuration files for a service:

  • Dockerfiles. These describe a single container with low-level details like base image, port openings, etc.
  • Docker-Compose. These describe a collection of containers that logically belong together. For example, having both a database container and an API container.
  • Helm charts. These typically describe additional metadata for the services, like the external url, the number of replicas, etc.

While it is not a requirement to use all three levels of configuration, it does make some things easier.

The default helm chart will not actually deploy the code in your VS solution, but an nginx container instead, so a few adjustments are needed. Helm charts have a templating system, but the important parts are in the values.yaml file. A simple file for this service would look like this:

# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: aksdotnetacr.azurecr.io/aksdotnetcoder
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
  path: /
  hosts:
    - aksdotnetcoder
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}
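
Before checking in, it can be worth verifying that the chart still renders with these values; assuming the chart directory is named aksdotnetcoder, the following will catch indentation mistakes and template errors:

helm lint ./aksdotnetcoder
helm template ./aksdotnetcoder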

Check in your code and return to the Azure DevOps Portal.

If you go to Pipelines => Build you will have a rather blank space, so click New pipeline to get started.
Since the code is already checked into Azure Repos that will be the logical choice. In the next step choose Empty job as the template. There are default templates for Kubernetes, but it is hard to see what goes into them before running them, so this guide builds the pipeline manually from scratch.
Give the pipeline a name and select Hosted Linux as the agent pool.
Start by adding two Docker tasks
The first Docker task is building the image. Select the Azure subscription and the Azure Container Registry created previously. Check the boxes as seen in the image, and make sure to add $(Build.BuildId) as an additional image tag. If this task looks different for you, it might be that the Docker task version has defaulted to 1.* whereas this sample uses 0.*.
The second Docker task is about pushing the image to ACR.
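For reference, the two Docker tasks do roughly what the following commands would do on the build agent; the registry and repository names match values.yaml above, and the exact tags depend on how you configured the tasks:

docker build -f Dockerfile.CI -t aksdotnetacr.azurecr.io/aksdotnetcoder:$(Build.BuildId) .
docker push aksdotnetacr.azurecr.io/aksdotnetcoder:$(Build.BuildId)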
Next up is adding the necessary Helm tasks. Add one installer, and two package and deploy.
Version 2.12.3 is the latest at the time of writing, but the task will automatically check for new versions each time it runs.
The first Helm task is a helm init.
The second Helm task is helm package.
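Together, the Helm init and package tasks correspond roughly to running the following on the build agent; on a hosted agent the init is typically client-only, and the chart path and output directory below are assumptions that depend on where helm create placed the chart in your repository:

helm init --client-only
helm package ./aksdotnetcoder --destination $(Build.ArtifactStagingDirectory)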
To wrap up the build process it is necessary to add a Publish Build Artifacts task.
To simplify things during deploy add $(Build.BuildId) to the artifact name.
This should make the final CI Pipeline similar to the following screenshot:
To finish the CI process click Save & Queue to trigger a build.
Add some comments if you like, or just click Save & Queue.
If everything passes with flying colors it should be all green checkmarks.
If the results are satisfactory move on to building the CD pipeline by clicking Release in the header above the output.

Creating a Continuous Deployment (CD) Pipeline in Azure DevOps

A dev might be more than happy to see the CI pipeline finish in the green, but code that compiles isn’t worth much if you’re not able to deploy it, so the next part is about building a second pipeline to take care of that.

Go to Builds => Releases and create a new pipeline, this time also choosing the Empty job template.

Leave Stage 1 with the defaults.
Since you started based on a build pipeline you will already have an artifact step in addition to a stage.
Make sure that the artifacts look right and are linked to the build pipeline.
Go to Stage 1 and add one Helm installer and two package and deploy tasks.
Install the latest helm version.
The first Helm task is a helm init. This has a twist though: instead of the Azure Resource Manager connection type used in the CI pipeline, a Kubernetes Service Connection is required. (This is due to RBAC being enabled.) This screenshot shows the result, but initially you need to click +New to create the connection.
Choose Service account as authentication method.
The server URL can be found by browsing to the AKS deployment in the Azure Portal and copying the API server address. Make sure to prefix with https:// when pasting it in.
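
If you prefer the command line, the same address can be retrieved with az (the resource group and cluster names below match the ones used when creating the cluster); remember the https:// prefix here as well:

az aks show --resource-group aksdotnetcoder --name aksdotnet --query fqdn --output tsv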

To acquire token and certificate you need to run the two kubectl commands listed:
kubectl get -n kube-system serviceaccount tiller -o yaml
kubectl get -n kube-system secret tiller-token-xyz -o yaml
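
The secret name has a generated suffix, so tiller-token-xyz above is a placeholder; one way to look up the actual name, assuming the tiller service account from the RBAC setup, is:

kubectl get -n kube-system serviceaccount tiller -o jsonpath='{.secrets[0].name}'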

The second kubectl will give you an output with two Base64-encoded strings containing the token and certificate. Copy and paste these into the form and hit OK.


Note: the UI includes the kubectl command without the namespace (kube-system), which means you will get an error that the service account cannot be found if you run it as-is.

You can then define the second helm task. Reuse the Kubernetes Service Connection from the previous task. Make sure you choose File path as Chart Type and that the path contains /**/*.tgz at the end.
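
Under the hood this task performs a helm upgrade against the packaged chart from the build artifact; a rough command-line equivalent, with an assumed release name and a placeholder path, would be:

helm upgrade --install aksdotnetcoder <path-to-artifact>/aksdotnetcoder-*.tgz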


Hit Save followed by Release.


Make sure the Version aligns with your last build.


The UI should indicate the progress of the deployment.


Let it work its magic and watch things proceed to Succeeded.


You can click the task log and peek inside the helm upgrade log to see a kubectl output for verification.


Jumping back to the command line you can run kubectl get all to verify things outside the Azure DevOps UI.


The output of a working CD pipeline.


We’ve come a long way, but there are a few loose threads. These will be tied up in the next, and concluding, part.

Creating the AKS cluster (LAB)

 

Creating the AKS cluster

The first step for using the Azure CLI is logging in:

az login

If you have multiple subscriptions in Azure you might need to use

az account list
az account set --subscription xyz

to make sure you’re working on the right one. (Verify the value of the isDefault attribute to see which one is the default.)

A resource group to contain the AKS instance is also needed. Technically it doesn’t matter which location you deploy the resource group to, but the suggestion is to go with a location that is supported for AKS clusters and stick with it throughout the setup.

az group create --name aksdotnetcoder --location northeurope

Next comes creation of the AKS cluster:

az aks create --name aksdotnet --resource-group aksdotnetcoder --node-count 1 --generate-ssh-keys --enable-addons http_application_routing --aad-server-app-id x --aad-server-app-secret x --aad-client-app-id x --aad-tenant-id x --kubernetes-version 1.x.y --node-vm-size Standard_DS1_v2

Note: Some of the documentation articles online will include --enable-rbac as an argument. This has been deprecated, and by default all AKS clusters created with the Azure CLI will have RBAC enabled unless you specifically disable it by using the inverse parameter --disable-rbac.

A specific version of Kubernetes is provided in the parameter list here even though that seems to be a low-level detail when dealing with a managed service. Kubernetes is a fast-moving target and sometimes it will be necessary to be on specific versions for specific features and/or compatibility reasons. 1.12.6 is the newest version as of writing this.

To get a list of versions run the following command:

az aks get-versions --location "location"
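
The default JSON output is fairly verbose; asking for a table makes it easier to scan (the location value is just an example):

az aks get-versions --location northeurope --output table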

If this parameter is omitted a default version will be chosen, but it is not guaranteed to be the latest version. Since AKS is a managed service there may also be delays between new releases by the Kubernetes team and the time it becomes available in Azure.

Existing clusters can be upgraded to a newer version with the click of a button in the Azure Portal or through the Azure CLI:

az aks upgrade --kubernetes-version 1.x.y --name foo --resource-group bar

To keep costs down for a test environment only one node is used, but in production one should ramp up to at least 3 for high availability and scale. The VM size is specified to be of the DS1_v2 type. (This is also the default if you omit the parameter.) It is possible to choose lower-performing SKUs, but performance takes a severe hit, which will be noticed when pulling and deploying images later in this guide, so it is not advised. For production use you should have a closer look at the available VM SKUs to decide which one you want to use. (Note that availability might vary between regions.)

This brings out another value-adding feature of AKS. In a Kubernetes cluster you have management nodes and worker nodes. Just like one would need more than one worker to distribute the load, one needs multiple managers to have high availability. AKS takes care of the management, but not only does the service abstract away the management nodes — you don’t pay for them either. You pay for the nodes, and that’s it. (Disclaimer: the author cannot guarantee the current model will be in effect for perpetuity.)

The creation of the cluster should take 10–15 minutes.

To make sure that things are in a good state verify the cluster is working before proceeding. Run the following command to retrieve the correct files for the Kubernetes tools to work:

az aks get-credentials --resource-group aksdotnetcoder --name aksdotnet --admin

Run kubectl get nodes, which should look similar to this:


YAML — The Configuration Language of Kubernetes

The subsequent sections will make heavy use of YAML (“YAML Ain’t Markup Language”) for configuring k8s. XML used to be a commonly used format for configuration files, and JSON has become more frequent in the past couple of years. Kubernetes uses YAML instead for most common configurations. (Both options are demonstrated in this guide.)

YAML is not a format specific for Kubernetes, but if you aren’t familiar, it is important to understand that indentation matters.

Setting: foo
  Value: bar

Is entirely different from

Setting: foo
Value: bar

You should use spaces for this indentation, not tabs. You might get informative error messages, and in Visual Studio Code there are plugins that can provide hints, but if you get errors even though you copied and pasted the snippets here, check that auto-formatting hasn’t reorganized things.

Enabling RBAC for users

The --admin parameter gave us admin access, but it has not been applied permanently for RBAC to work. To do so create a file called user-rbac.yaml and paste in the following content:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aksdotnet-cluster-admins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user@contoso.com

Replace user@contoso.com with your user name.

This file needs to be applied by running kubectl create -f user-rbac.yaml:
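
If you want to double-check that the binding was created, you can for instance run:

kubectl get clusterrolebinding aksdotnet-cluster-admins -o yaml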


In practice, you want to apply this on a group level. To do so replace the last two lines with the following (with the objectId of the group):

  kind: Group
  name: "guid"
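
The guid is the objectId of the Azure AD group; one way to look it up, assuming a group named "AKS Admins", is:

az ad group show --group "AKS Admins" --query objectId --output tsv

(Newer versions of the Azure CLI return this property as id rather than objectId.)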

Enabling RBAC for the Kubernetes Dashboard

These steps make sure you can use the kubectl command line options, but there is also a web-based dashboard one can use for managing an AKS cluster. However, this involves a service account which we have not assigned permissions to yet.

To assign these permissions create a file called dashboard-admin.yaml with the following contents:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Apply with

kubectl create -f dashboard-admin.yaml

You can now verify that it is possible to open the Kubernetes dashboard by running

az aks browse --resource-group aksdotnetcoder --name aksdotnet

This will launch a browser with the graphical representation of your cluster:


Kubectl also allows for connecting to the dashboard (kubectl proxy) but by using the Azure CLI everything is automatically piggybacked onto the current authenticated Azure session. Notice that the address is 127.0.0.1 even though it isn’t local. That’s just some proxy tricks where the traffic is tunneled through to Azure.

It could be argued that enabling the dashboard opens another attack vector and that it would be preferable to stick to kubectl. There are valid arguments for this school of thought, and it’s also true that certain actions can only be carried out through kubectl. It is however also often faster to get an overview from a web browser, and when learning Kubernetes, it is very helpful to have a graphical representation.

RBAC for Helm and installation of the binaries into the cluster

It was mentioned previously that Helm is a sort of package manager for Kubernetes. This applies both to the software packages you build yourself and to those supplied by other parties. The RBAC model we have in place means we specifically need to enable support for Helm for it to work properly. This can be done with the following yaml, saved as helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

Apply with kubectl create -f helm-rbac.yaml.

It might not be apparent from the definition, but Tiller is the server component, and Helm is the client-side component. (Thus, we use tiller in the naming even though this is not a hardcoded requirement.)

Helm needs to be primed as well to be ready for later. Based on having a working cluster, as verified in the previous step, helm will automagically work out where to apply its logic. You can have multiple clusters, so part of the point of verifying that the cluster is ok is to make sure you’re connected to the right one.

Apply the following:

helm init --service-account tiller
helm repo update
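
To confirm that the client can reach Tiller afterwards, run helm version; it should report both a client and a server version once the tiller pod is up:

helm version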

The cluster should now be ready to have images deployed, which leads to another topic: management of images.

Much like one refers to images when building virtual machines, Docker uses the same concept, although slightly different at the implementation level. To get running containers inside a Kubernetes cluster you need a repository for these images. The default public repo is Docker Hub, and images stored there are entirely suited for your AKS cluster. (In fact, third-party images will be acquired from Docker Hub to complete the guide’s setup.) However, you do not need to make your own images available on the internet; keeping them private requires a private registry instead of Docker Hub. In the Azure ecosystem this is delivered by Azure Container Registry (ACR).

This setup can be done in the Azure Portal, but for coherency the CLI is used here. The registry can be placed in the same resource group as the AKS cluster, but since a registry is logically speaking a separate entity, a new group will be created for it. This also makes it more visible that the registry lends itself to being re-used across clusters.

Setting up Azure Container Registry (ACR)

Create a resource group:

az group create --name aks-acr-rg --location northeurope

Create an Azure Container Registry with the Basic SKU:

az acr create --resource-group aks-acr-rg --name aksdotnetacr --sku Basic

Since the registry name needs to be globally unique you need to come up with your own moniker for the name parameter. The SKU choice is largely driven by the need for scalability. More details can be found here: https://azure.microsoft.com/en-us/pricing/details/container-registry/
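
To confirm the registry is in place, and to grab the login server name used as the image prefix in values.yaml, you can run the following (substitute your own registry name):

az acr show --name aksdotnetacr --query loginServer --output tsv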

While you can now browse the contents of the registry in the portal that does not mean that your cluster can do so. As is indicated by the message upon successful creation of the ACR component we need to create a service principal that will be used by Kubernetes, and we need to give this service principal access to ACR. If you’re new to the concept of authentication in Azure AD this statement doesn’t explain what a service principal is, but you can think of it as a user account for applications in this context.

This is one of those things that are easier to do in the PowerShell ISE:

# Get the id of the service principal configured for AKS
$AKS_RESOURCE_GROUP = "aksdotnetcoder"
$AKS_CLUSTER_NAME = "aksdotnet"
$CLIENT_ID = $(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
$ACR_NAME = "aksdotnetacr"
$ACR_RESOURCE_GROUP = "aks-acr-rg"
$ACR_ID = $(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID
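
To verify that the assignment took effect you can list the role assignments for the service principal on the registry scope:

az role assignment list --assignee $CLIENT_ID --scope $ACR_ID --output table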

The AKS cluster should now be in a state where it is ready for deployment of services, and the next steps are to create the following recipes for Azure DevOps:

  • How Azure DevOps should build the code.
  • How Azure DevOps should push the resulting artifacts to Azure Container Registry.
  • How Azure DevOps should deploy containers.

We’re on a roll here, but it feels like a natural point to take a little break before going back to Azure DevOps in the next part.