Sunday, 26 May 2024

Introducing AKS Automatic, Managed Kubernetes for Developers

Kubernetes, despite its many virtues for distributed systems running at scale, is notoriously complex for developers to wrangle. Devs just want to build applications and get the infrastructure they need provisioned quickly, so they can deliver and deploy faster; they don’t want to become Kubernetes experts.

This makes sense. As a technology matures, the need for simplicity comes to the forefront.

The folks at Microsoft Azure understand this trend. Their response is a new managed Kubernetes capability, AKS Automatic. AKS Automatic is intended to make working with K8s less burdensome while not sacrificing its power and flexibility.

AKS Automatic makes it easier for developers and IT staff to get up and running quickly on Azure, using sensible default configurations based on best practices. Developers can interact with the AKS Automatic feature using the Azure portal (GUI) or the Azure command line interface (CLI).

Also, AKS Automatic offers application templates for common scenarios, such as web apps, APIs or microservices. In addition, users can create their own custom templates. Based on these templates, AKS Automatic will automatically allocate the resources needed for the app under development — for example CPU, memory and storage capacity. AKS Automatic can then scale resources up or down based on demand.

Users don’t need to worry about choosing the right virtual machine size or node pool. AKS Automatic does it all on their behalf.

What Are AKS Automatic’s Benefits?

The essential benefit of AKS Automatic is that it fills in a gap in IT operations that was created by the evolution of Kubernetes in the enterprise.

“In the early days of Kubernetes and certainly when AKS initially came out, we were dealing very much with an early adopter type customer and they wanted to get in there and fiddle with it,” Sean McKenna, director of product for Azure cloud native, told The New Stack.

“Many years later, we’re dealing much more with the late majority who still want to have the power of Kubernetes and the ecosystem access and the portability and all the goodness that comes along with it. But they’re looking to us [Azure] and saying, ‘You’ve been running this at scale both for external customers plus … internal customers,’ including the likes of Office and Bing and Xbox, who are running on top of AKS. And they’re saying, ‘You probably know how to make these choices better than we do.’”

He added, “Customers are really worried that they’re gonna get it wrong … and oftentimes they do. If they end up being naive about [K8s] and set things up incorrectly, both from an infrastructure perspective or an application deployment perspective, they can easily get themselves into a bit of a pickle.”

We’ll open up the hood and see how Azure’s new managed Kubernetes tooling works. But first, let’s dig into what provisioning K8s on Azure was like previously, how those user challenges prompted the creation of AKS Automatic, and how the new solution just might make it a bit easier for developers who deploy to Azure to do their jobs.

What Is AKS Automatic?

As the name implies, AKS Automatic automates a good deal of the work required to create and maintain K8s clusters. It’s intended to simplify working with K8s, but it still allows engineers to use the Kubernetes APIs when needed.

“Everything from infrastructure provisioning to scaling to upgrading everything is completely taken care of by the service,” Jorge Palma, principal product management lead for Azure Kubernetes Service, told The New Stack. “And even the deployment of applications and their more operational settings, instruction, budgets and other configurations that the applications themselves need to have, we take care and we help with that as well.”

In short, AKS Automatic makes it so those with limited knowledge of Kubernetes can create and manage sophisticated K8s clusters. Yet those who are well-versed in K8s can still work with the Kubernetes API via command line instructions that are executed within a terminal window using the kubectl CLI tool.
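
For example, once a cluster exists, a Kubernetes-savvy engineer can pull its credentials and talk to the Kubernetes API directly. A minimal sketch, assuming hypothetical resource group and cluster names (my-rg, my-aks-automatic) and an authenticated Azure CLI session:

# Fetch kubeconfig credentials for the cluster
az aks get-credentials --resource-group my-rg --name my-aks-automatic

# Work with the standard Kubernetes API as usual
kubectl get nodes
kubectl get deployments --all-namespaces

Because AKS Automatic clusters enable Microsoft Entra ID authentication by default, the first kubectl command may prompt for an interactive sign-in.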

In many ways, AKS Automatic is intended to be a one-size-fits-all solution.

How Does AKS Automatic Work?

As mentioned previously, the way users interact with AKS Automatic is via the Azure portal website or the Azure CLI tool. AKS Automatic enables developers and engineers to create, augment, maintain and monitor Kubernetes clusters that will run on Azure’s cloud infrastructure. The following sections dig into the details.

Creating a Cluster

AKS Automatic is both CLI- and GUI-driven. In the Azure portal, developers and engineers work with a graphical user interface to create, configure and maintain both the Kubernetes clusters and the applications that run on those clusters.

The only thing users need to do is to provide a limited number of inputs, such as the cluster name, region and resource group. AKS Automatic takes care of the rest and creates a cluster with the best configuration for the given app.
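
The CLI path is similarly minimal. A sketch of what creation looks like, assuming hypothetical resource group and cluster names; at launch, the automatic SKU may require a recent Azure CLI version or the aks-preview extension:

# Create a resource group in the target region (hypothetical names)
az group create --name my-rg --location eastus

# Create an AKS Automatic cluster; the automatic SKU applies Azure's managed defaults
az aks create --resource-group my-rg --name my-aks-automatic --sku automatic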

Figure 1: Developers and engineers use the Azure portal to create Kubernetes clusters on Azure.

In addition to customizing the configurations of clusters and applications, users can work with application templates that support common scenarios, such as web apps, APIs or microservices, or create their own custom templates.

Adding an Application from Source Code

Once a cluster has been created under AKS Automatic, developers and engineers can add applications to the cluster directly from source code. This means that users don’t have to understand the details of configuration and deployment in terms of creating Linux containers and K8s pods.

Instead, users just identify the source code for the applications they intend to deploy and use the AKS Automatic GUI, CLI or Azure Resource Manager (ARM) to bind to that code in a git source code repository. (See Figure 2, callout 1.)

Figure 2: AKS Automatic can deploy applications directly from a GitHub repository using an existing Dockerfile or one created using intelligence provided by AKS Automatic.

Then, AKS Automatic takes care of creating the application’s containers and pods. Team members can direct AKS Automatic to use an existing Dockerfile to create an application’s containers. (See Figure 2, callout 2.) As Ahmed Sabbour, a senior product manager at Microsoft, told The New Stack about AKS Automatic’s capabilities: “If the repository already has a Dockerfile, it can use that to build a container using Azure Container Registry.”

In the case where a Dockerfile does not exist, Sabbour added, AKS Automatic “can also look at the source code and ask you a couple of questions: Is it a Java app? Is this the C# app? Then, accordingly, it will generate that Dockerfile and it will also generate the Kubernetes resource files.”
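
The build step behind this flow is a container image build in Azure Container Registry, followed by an ordinary Kubernetes deployment. A rough sketch of the equivalent manual steps, assuming a hypothetical registry and app name, a Dockerfile at the repository root, and a cluster that has been granted pull access to the registry:

# Build the image remotely in Azure Container Registry from the local source tree
az acr build --registry myregistry --image myapp:v1 .

# Deploy the resulting image to the cluster as a standard Kubernetes Deployment and Service
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:v1
kubectl expose deployment myapp --port=80 --target-port=8080  # 8080 is a hypothetical container port

In the portal’s automated deployments flow, AKS Automatic generates a GitHub Actions workflow that performs roughly these steps on each push.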

AKS Automatic’s intelligence allocates the resources needed to support a company’s application. This includes allocating enough CPU, memory and storage. Once an application is up and running, AKS Automatic will scale resources up or down based on demand.

Those creating and maintaining the application don’t need to worry about choosing the optimal virtual machine size or node pool. Intelligence in AKS Automatic takes care of these details, not only when the application is deployed but also continuously, once that application is up and running.
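
AKS Automatic handles node capacity on its own, but teams that want explicit control over workload scaling can still layer standard Kubernetes autoscaling on top. A minimal sketch against the hypothetical myapp deployment from the earlier example:

# Scale myapp between 2 and 10 replicas based on average CPU utilization
kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=10

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa myapp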

Also, AKS Automatic keeps clusters and applications up to date with the latest Kubernetes version and security patches. In addition, developers and engineers can let cluster upgrades and rollbacks happen automatically or, if needed, perform them manually at the command line.
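
For teams that want to see or tune that behavior, the familiar AKS upgrade settings are still visible through the CLI. A sketch, assuming the same hypothetical names; the upgrade channels available on Automatic clusters may be constrained:

# Show the cluster's current Kubernetes version and automatic upgrade settings
az aks show --resource-group my-rg --name my-aks-automatic --query "{version: kubernetesVersion, autoUpgrade: autoUpgradeProfile}"

# Optionally adjust the upgrade channel, e.g., to track the latest stable release
az aks update --resource-group my-rg --name my-aks-automatic --auto-upgrade-channel stable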

Configuring Access

Once an application is configured and deployed using AKS Automatic, the application can be exposed to the internet or to other apps within the cluster using routes and ingresses. AKS will manage ingress access as well as an application’s DNS records and SSL certificates. (Figure 3, callout 1.)

Figure 3: AKS Automatic intelligence will expose an application on the internet securely according to a DNS name.

Also, AKS Automatic will create a DNS name for the application automatically, or developers and engineers can declare custom DNS names. (Figure 3, callout 2.) AKS Automatic supports both methods.
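
For those who prefer to declare this themselves, the exposure maps to an ordinary Kubernetes Ingress served by the managed NGINX ingress that ships with AKS Automatic’s application routing add-on. A minimal sketch, assuming a hypothetical myapp service and custom host name; the ingress class name shown is the add-on’s default:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: webapprouting.kubernetes.azure.com  # managed NGINX ingress class
  rules:
  - host: myapp.example.com  # hypothetical custom DNS name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
EOF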

Deployment Safeguards

A key feature of AKS Automatic is deployment safeguard intelligence. Deployment safeguards are rules and best practice guidelines that AKS Automatic applies to the configuration and creation of a Kubernetes cluster. (See Figure 4, callout 1.)

AKS Automatic can not only flag code and configuration that don’t comply with Azure’s best-practice guidelines for Kubernetes but also ensure that risky code and configuration settings are never promoted into production.
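
Safeguards can run in a warning mode that only reports violations or an enforcement mode that blocks or corrects non-compliant deployments at admission time. A rough sketch of switching between the two with the Azure CLI; the flag names are an assumption based on preview-era tooling and may require the aks-preview extension:

# Report best-practice violations without blocking deployments (assumed flags; may require the aks-preview extension)
az aks update --resource-group my-rg --name my-aks-automatic --safeguards-level Warning

# Enforce the safeguards so non-compliant manifests are rejected or corrected at admission time
az aks update --resource-group my-rg --name my-aks-automatic --safeguards-level Enforcement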

For an automated solution to be viable in enterprise-level use cases, enforcing best practices around Kubernetes is essential, particularly when the product will be used by people who don’t have deep knowledge of, or extended experience with, applications that run under K8s.

At the enterprise-production level, the risks are too great. AKS Automatic’s deployment safeguards are intended to help those with a limited understanding of Kubernetes to release code that is safe and secure, from the source code level all the way through the deployment pipeline.

Monitoring

AKS Automatic integrates with the Azure portal, Azure Monitor and other monitoring tools such as Prometheus.

Figure 5: AKS Automatic integrates with Azure’s and Prometheus’s reporting and alerts capabilities.

In addition, system admins can troubleshoot AKS Automatic applications at the command line using the Azure CLI or the Kubernetes kubectl command line tool.
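
The troubleshooting workflow itself stays standard Kubernetes. A few typical commands, assuming the hypothetical myapp deployment and cluster names used throughout this piece:

# Inspect workload health, recent events and container logs
kubectl get pods
kubectl describe deployment myapp
kubectl logs deployment/myapp --tail=100

# Pull cluster-level state through the Azure CLI
az aks show --resource-group my-rg --name my-aks-automatic --query provisioningState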

Delegating the Basics of Working with Kubernetes

The provisioning patterns and best practices for getting a Kubernetes cluster up and running safely, efficiently and reliably are well-known to the Azure team. AKS Automatic makes it so everyday users with limited knowledge of K8s can take full advantage of all the technology has to offer. For an engineering team with limited expertise in Kubernetes, delegating the basics of working with it to AKS Automatic will reduce operational burdens and increase operational reliability.

And, most probably, as AI continues to weave its way further into the technology landscape, the evolution of AKS Automatic might mean that all that’s needed to create and deploy an application to a properly provisioned Kubernetes cluster is a text prompt: “Hey AKS Automatic, deploy the code in my XYZ repository in GitHub into a Kubernetes cluster running on Azure.” The possibility is quite real.

Yet, while it’s true that AKS Automatic abstracts away a good deal, if not most, of the complexity of working with Kubernetes for entry-level users, incidents can still arise due to inconsistencies in an application’s logic or anomalous behavior from services outside the control of the cluster. Thus, engineering organizations should still turn to experienced team members to address such issues when they happen. AKS Automatic will take care of most things, but not everything.
