Sunday, 26 May 2024

Introducing AKS Automatic, Managed Kubernetes for Developers



Kubernetes, despite its many virtues for distributed systems running at scale, is notoriously complex for developers to wrangle. Devs just want to build applications and get the infrastructure they need provisioned quickly, so they can deliver and deploy faster; they don’t want to become Kubernetes experts.

This makes sense. As a technology matures, the need for simplicity comes to the forefront.

The folks at Microsoft Azure understand this trend. Their response is a new managed Kubernetes capability, AKS Automatic. AKS Automatic is intended to make working with K8s less burdensome while not sacrificing its power and flexibility.

AKS Automatic makes it easier for developers and IT staff to get up and running quickly on Azure, with sensible default configurations based on best practices. Developers can interact with the AKS Automatic feature using the Azure portal (GUI) or the Azure command line interface (CLI).

Also, AKS Automatic offers application templates for common scenarios, such as web apps, APIs or microservices. In addition, users can create their own custom templates. Based on these templates, AKS Automatic will automatically allocate the resources needed for the app under development — for example CPU, memory and storage capacity. AKS Automatic can then scale resources up or down based on demand.

Users don’t need to worry about choosing the right virtual machine size or node pool. AKS Automatic does it all on their behalf.

What Are AKS Automatic’s Benefits?

The essential benefit of AKS Automatic is that it fills in a gap in IT operations that was created by the evolution of Kubernetes in the enterprise.

“In the early days of Kubernetes and certainly when AKS initially came out, we were dealing very much with an early adopter type customer and they wanted to get in there and fiddle with it,” Sean McKenna, director of product for Azure cloud native, told The New Stack.

“Many years later, we’re dealing much more with the late majority who still want to have the power of Kubernetes and the ecosystem access and the portability and all the goodness that comes along with it. But they’re looking to us [Azure] and saying, ‘You’ve been running this at scale both for external customers plus … internal customers,’ including the likes of Office and Bing and Xbox, who are running on top of AKS. And they’re saying, ‘You probably know how to make these choices better than we do.’”

He added, “Customers are really worried that they’re gonna get it wrong … and oftentimes they do. If they end up being naive about [K8s] and set things up incorrectly, both from an infrastructure perspective or an application deployment perspective, they can easily get themselves into a bit of a pickle.”

We’ll open up the hood and see how Azure’s new managed Kubernetes tooling works. But first, let’s dig into what provisioning K8s on Azure was like previously, how those user challenges prompted the creation of AKS Automatic, and how the new solution just might make it a bit easier for developers who deploy to Azure to do their jobs.

What Is AKS Automatic?

As the name implies, AKS Automatic automates a good deal of the work required to create and maintain K8s clusters. It’s intended to simplify working with K8s, but it still allows engineers to use the Kubernetes APIs when needed.

“Everything from infrastructure provisioning to scaling to upgrading, everything is completely taken care of by the service,” Jorge Palma, principal product management lead for Azure Kubernetes Service, told The New Stack. “And even the deployment of applications and their more operational settings, disruption budgets and other configurations that the applications themselves need to have, we take care and we help with that as well.”

In short, AKS Automatic makes it so those with limited knowledge of Kubernetes can create and manage sophisticated K8s clusters. Yet those who are well-versed in K8s can still work with the Kubernetes API via command line instructions that are executed within a terminal window using the kubectl CLI tool.

In many ways, AKS Automatic is intended to be a one-size-fits-all solution.

How Does AKS Automatic Work?

As mentioned previously, the way users interact with AKS Automatic is via the Azure portal website or the Azure CLI tool. AKS Automatic enables developers and engineers to create, augment, maintain and monitor Kubernetes clusters that will run on Azure’s cloud infrastructure. The following sections dig into the details.

Creating a Cluster

AKS Automatic is both CLI and GUI-driven. In terms of working with the Azure portal, developers and engineers work with a web page’s graphical user interface to create, configure and maintain both the Kubernetes clusters and applications that run on those clusters.

The only thing users need to do is to provide a limited number of inputs, such as the cluster name, region and resource group. AKS Automatic takes care of the rest and creates a cluster with the best configuration for the given app.

Figure 1: Developers and engineers use the Azure portal to create Kubernetes clusters on Azure.
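For those who prefer the CLI, cluster creation is meant to be a short sequence. The sketch below assumes the `--sku automatic` flag from the AKS Automatic preview; the resource group, cluster name and region are hypothetical:

```shell
# Create a resource group (name and region are hypothetical)
az group create --name my-rg --location eastus

# Create an AKS Automatic cluster; node pools, networking and scaling
# are configured by the service rather than specified by the user
az aks create --resource-group my-rg --name my-automatic-cluster --sku automatic
```

Compare this with a conventional `az aks create` invocation, which typically also specifies node counts, VM sizes and networking options.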

In addition to customizing the configurations of clusters and applications, users can work with application templates to support common scenarios, such as web apps, APIs or microservices, or create their own custom templates.

Adding an Application from Source Code

Once a cluster has been created under AKS Automatic, developers and engineers can add applications to the cluster directly from source code. This means that users don’t have to understand the details of configuration and deployment in terms of creating Linux containers and K8s pods.

Instead, users just identify the source code for the applications they intend to deploy and use the AKS Automatic GUI, CLI or Azure Resource Manager (ARM) to bind to that code in a git source code repository. (See Figure 2, callout 1.)

Figure 2: AKS Automatic can deploy applications directly from a GitHub repository using an existing Dockerfile or one created using intelligence provided by AKS Automatic.

Then, AKS Automatic takes care of creating the application’s containers and pods. Team members can direct AKS Automatic to use an existing Dockerfile to create an application’s containers. (See Figure 2, callout 2.) As Ahmed Sabbour, a senior product manager at Microsoft, told The New Stack about AKS Automatic’s capabilities: “If the repository already has a Dockerfile, it can use that to build a container using Azure Container Registry.”

In the case where a Dockerfile does not exist, Sabbour added, AKS Automatic “can also look at the source code and ask you a couple of questions: Is it a Java app? Is this the C# app? Then, accordingly, it will generate that Dockerfile and it will also generate the Kubernetes resource files.”
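As an illustration, here is the kind of Dockerfile such a generation step might produce for a Node.js app (everything below is a hypothetical sketch, not AKS Automatic's actual output):

```shell
# Write a Dockerfile of the sort the tooling might generate for a Node.js app
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
```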

AKS Automatic’s intelligence allocates the resources needed to support a company’s application. This includes allocating enough CPU, memory and storage. Once an application is up and running, AKS Automatic will scale resources up or down based on demand.

Those creating and maintaining the application don’t need to worry about choosing the optimal virtual machine size or node pool. Intelligence in AKS Automatic takes care of these details, not only when the application is deployed but also continuously, once that application is up and running.
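The allocation work described above corresponds to the resource requests and limits a team would otherwise hand-write in a manifest. A sketch of what gets set on a container's behalf; the app name, image and CPU/memory figures are hypothetical values of the kind the service would choose and later adjust with demand:

```shell
# Illustrative manifest only; all names and numbers are hypothetical
cat > web-app-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myregistry.azurecr.io/web-app:latest   # hypothetical image
        resources:
          requests:
            cpu: 250m        # starting allocation
            memory: 256Mi
          limits:
            cpu: "1"         # ceiling that scaling works within
            memory: 512Mi
EOF
```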

Also, AKS Automatic keeps clusters and applications up to date with the latest Kubernetes version and security patches. In addition, developers and engineers can configure cluster upgrades, and roll back if needed, either automatically or manually at the command line.
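Manual control over upgrades remains available through the standard AKS CLI commands; a sketch with hypothetical resource names and version:

```shell
# Check which Kubernetes versions the cluster can upgrade to
az aks get-upgrades --resource-group my-rg --name my-automatic-cluster --output table

# Trigger an upgrade manually if desired (version number is hypothetical)
az aks upgrade --resource-group my-rg --name my-automatic-cluster --kubernetes-version 1.29.2
```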

Configuring Access

Once an application is configured and deployed using AKS Automatic, the application can be exposed to the internet or to other apps within the cluster using routes and ingresses. AKS will manage ingress access as well as an application’s DNS records and SSL certificates. (Figure 3, callout 1.)

Figure 3: AKS Automatic intelligence will expose an application on the internet securely according to a DNS name.

Also, AKS Automatic will create a DNS name for the application automatically, or developers and engineers can declare custom DNS names. (Figure 3, callout 2.) AKS Automatic supports both methods.
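The routes and ingresses described above boil down to standard Kubernetes Ingress objects. A sketch, assuming the managed NGINX ingress class provided by AKS's application routing add-on; the host name and backend service are hypothetical:

```shell
cat > web-app-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
spec:
  ingressClassName: webapprouting.kubernetes.azure.com  # AKS managed NGINX
  rules:
  - host: web-app.example.com      # custom DNS name (hypothetical)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app          # hypothetical backend service
            port:
              number: 80
EOF
```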

Deployment Safeguards

A key feature of AKS Automatic is deployment safeguard intelligence. Deployment safeguards are rules and best practice guidelines that AKS Automatic applies to the configuration and creation of a Kubernetes cluster. (See Figure 4, callout 1.)

AKS Automatic can not only flag code and configuration that don’t comply with Azure’s best practices guidelines for Kubernetes but also ensure that risky code and configuration settings are never escalated into production.

In order for an automated solution to be viable for enterprise-level use cases, enforcing best practices around Kubernetes is essential, particularly when the product is to be used by those who don’t have deep knowledge and extended experience working with applications that run under K8s.

At the enterprise-production level, the risks are too great. AKS Automatic deployment safeguards are intended to help those with a limited understanding of Kubernetes release code that is safe and secure, starting at the source code level all the way through the entire deployment pipeline.

Monitoring

AKS Automatic integrates with the Azure portal, Azure Monitor and other monitoring tools such as Prometheus.

Figure 5: AKS Automatic integrates with Azure’s and Prometheus’s reporting and alerts capabilities.

In addition, system admins can troubleshoot AKS Automatic applications at the command line using the Azure CLI or the Kubernetes kubectl command line tool.
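A typical command-line troubleshooting session might look like this; the namespace, deployment and pod names are hypothetical:

```shell
# Find the misbehaving workload
kubectl get pods -n my-app

# Inspect events and container state for a specific pod (name hypothetical)
kubectl describe pod my-app-5d9c7b-x2k4f -n my-app

# Tail recent application logs
kubectl logs deployment/my-app -n my-app --tail=50
```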

Delegating the Basics of Working with Kubernetes

The provisioning patterns and best practices for getting a Kubernetes cluster up and running safely, efficiently and reliably are well-known to the Azure team. AKS Automatic makes it so everyday users with limited knowledge of K8s can take full advantage of all the technology has to offer. Using AKS Automatic to delegate the basics of working with Kubernetes will reduce operational burdens and increase operational reliability for engineering teams with limited expertise in the technology.

And, most probably, as AI continues to weave its way further into the technology landscape, the evolution of AKS Automatic might make it so that all that’s needed to create and deploy an application to a properly provisioned Kubernetes cluster is a text prompt: “Hey AKS Automatic, deploy the code in my XYZ repository in GitHub into a Kubernetes cluster running on Azure.” The possibility is quite real.

Yet, while it’s true that AKS Automatic abstracts away from entry-level users a good deal, if not most of the complexity of working with Kubernetes, incidents could still arise due to inconsistencies in an application’s logic or anomalous behavior produced by services outside the control of the cluster. Thus, engineering organizations should still turn to experienced team members to address such issues when they happen. AKS Automatic will take care of most things, but not everything. 

Deploying Azure Kubernetes Service Demo Store App with Azure Open AI

 


I’d like to share a blog series about my experience deploying, running and learning the technical details of the AKS Store Demo app and how it works with Azure OpenAI. This experience is based on the MS Learn article https://learn.microsoft.com/en-us/azure/aks/open-ai-quickstart?tabs=aoai

This AKS Demo Store App can be found at https://github.com/Azure-Samples/aks-store-demo/tree/main

The app uses the Azure OpenAI service to generate product descriptions based on the product name and tags/keywords. This AI functionality lives in the Demo Store’s product edit page, which can generate a description from the product name and keywords.


The Demo Store application architecture is composed of multiple services, each deployed in its own container and running as a Kubernetes deployment.

My existing AKS cluster is running version 1.23.15, with a node pool of 2 nodes using the Standard_B4ms VM size, which is a small compute size but good enough for demo purposes. I already have other applications running in this cluster.

I log in to my AKS cluster.

Then I list my nodes to confirm that I am logged in and can view resources.
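The login and node check look roughly like this; my actual resource group and cluster names are replaced with placeholders here:

```shell
# Merge the cluster credentials into my local kubeconfig
az aks get-credentials --resource-group my-rg --name my-aks-cluster

# Confirm access by listing the two Standard_B4ms nodes
kubectl get nodes -o wide
```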

From the Azure Portal:

Create a new k8s namespace called pets and deploy the demo store app into it with the k8s manifest YAML file at https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-all-in-one.yaml. No need to configure any specific parameters in the file.

kubectl create ns pets
kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-all-in-one.yaml -n pets

Once deployed, I check the k8s deployments:

Check for k8s pods in this pets namespace. All pods are running.

Check for k8s services. The store-admin app is exposed to the internet with a public external IP. The store-admin app is where you can generate product descriptions with AI, as shown earlier in this post.
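The three checks above, as kubectl commands:

```shell
kubectl get deployments -n pets
kubectl get pods -n pets
# The EXTERNAL-IP column of store-admin shows the public endpoint
kubectl get services -n pets
```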

Next, deploy the AI service: open ai-service.yaml and enter the values for the environment variables defined for the container. The AI service generates the product description by sending a text completion prompt to the deployed Azure OpenAI resource.

These values can be found in the Azure OpenAI service that I have already created, by going to the Keys and Endpoint blade.

For the Azure OpenAI deployment name, go to Model deployments. I have already created a deployment that I called chatgpt1, using the gpt-35-turbo model. You can create a deployment with other models, such as gpt-4 or text-davinci, preferably one of the newer models. A deployment in Azure OpenAI is a set of configurations that specifies the base model, model version, content filter, tokens-per-minute rate limit and whether dynamic quota is enabled; it is not enough to refer to just a base model. Here are the properties of my chosen chatgpt1 deployment:

You can read more about the base models at https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability
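Alternatively, the same endpoint, key and deployment properties can be pulled from the CLI with the `az cognitiveservices` commands; the resource and group names below are hypothetical:

```shell
# Endpoint and keys, matching the Keys and Endpoint blade
az cognitiveservices account show --name my-aoai --resource-group my-rg --query properties.endpoint
az cognitiveservices account keys list --name my-aoai --resource-group my-rg

# Model deployments, including chatgpt1
az cognitiveservices account deployment list --name my-aoai --resource-group my-rg --output table
```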

As a result, here is my ai-service yaml with configuration:

Let’s apply the ai-service.yaml and show the ai-service pod running:
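For reference, here is a sketch of the environment section I filled in, plus the apply step. The variable names follow the aks-store-demo sample but should be checked against the file in the repo; the endpoint and key values are placeholders:

```shell
# Excerpt of the container env in ai-service.yaml (values are placeholders)
cat <<'EOF'
        env:
        - name: USE_AZURE_OPENAI
          value: "True"
        - name: AZURE_OPENAI_DEPLOYMENT_NAME
          value: "chatgpt1"
        - name: AZURE_OPENAI_ENDPOINT
          value: "https://my-aoai.openai.azure.com/"
        - name: OPENAI_API_KEY
          value: "<your-api-key>"
EOF

# Apply and confirm the pod comes up
kubectl apply -f ai-service.yaml -n pets
kubectl get pods -n pets | grep ai-service
```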

Keep in mind that the ai-service is connecting over the Microsoft backbone and not through any virtual network in this implementation. In production scenarios, it is better to connect to your Azure Open AI service through a private endpoint.

From Azure portal we can see the deployment in the pets namespace as follows:
Deployments

Pods

Services. The ai-service is called from store-admin, and the Python application code in ai-service makes HTTP calls to the Azure OpenAI resource, which is external to the AKS cluster.

You can see the public IP of store-admin, 4.172.1.155, in the above screenshot; open it in the browser. You will see a list of products; click on one of them to edit it.

I add keywords as follows, so that the generated description takes them into account in the prompt.

I click Ask AI Assistant, and the app calls out to the Azure OpenAI service to generate a new description.

I think this description does indeed incorporate the keywords.

By going to Monitoring Insights in AKS, you can see the logs of the ai-service container when I click the Ask AI Assistant button again. You can see it calling out to the Azure OpenAI service and displaying the generated text.
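Outside the portal, the same container logs can be tailed directly with kubectl (assuming the deployment is named ai-service, per the manifest):

```shell
# Follow the ai-service logs while clicking Ask AI Assistant
kubectl logs deployment/ai-service -n pets --tail=20 --follow
```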

Turning to the Azure Open AI resource’s metrics, I can see the times the OpenAI requests were made.

I have shown my experience of deploying and testing out the AKS Store Demo app in AKS, leveraging the Azure OpenAI service. Although this application is simple, AKS is scalable, extensible and resilient enough for intensive uses of the Azure OpenAI service.