Sunday, 1 January 2023

Using AWS Systems Manager for Patch Management

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.
AWS Systems Manager – Gain Operational Insights and Take Action

In this activity guide, we cover managing and patching your infrastructure using AWS Systems Manager (SSM).
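
For illustration, a patch scan can also be kicked off from code with boto3 and the built-in AWS-RunPatchBaseline document. This is only a minimal sketch; the region and instance ID below are placeholders, and the instance must already be managed by SSM.

import boto3

# Minimal sketch: trigger a patch scan on an SSM-managed instance via Run Command.
# The region and instance ID are placeholders.
ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
    DocumentName="AWS-RunPatchBaseline",   # built-in patching document
    Parameters={"Operation": ["Scan"]},    # "Scan" reports compliance, "Install" applies patches
)
print("Command ID:", response["Command"]["CommandId"])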

Amazon Athena

Amazon Athena is an interactive query service offered by AWS that makes it easy to analyze data in S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Athena is easy to use: you simply point it to your data in S3, define the schema, and start querying with standard SQL. Most results are delivered within seconds, which makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

In this activity guide, you will learn how to analyze datasets with Amazon Athena.
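
As a minimal sketch of what that looks like from code (the database, table, and results bucket below are placeholders), a query can be submitted and polled with boto3:

import time
import boto3

# Minimal sketch: run a SQL query over data in S3 with Athena and fetch the results.
# Database, table, and bucket names are placeholders.
athena = boto3.client("athena", region_name="us-east-1")

query_id = athena.start_query_execution(
    QueryString="SELECT * FROM my_table LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)["QueryExecutionId"]

# Wait for the query to finish, then read the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print(row)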

Install CloudWatch Agent on EC2 Instance and View CloudWatch Metrics

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

In this activity guide, we cover step-by-step instructions for installing the CloudWatch Agent on an EC2 instance and viewing its metrics.
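
Once the agent is running, its metrics are published under the CWAgent namespace by default. As a quick sketch (the region is an assumption), you can confirm the metrics are flowing by listing them with boto3:

import boto3

# Minimal sketch: list the metrics the CloudWatch agent publishes.
# By default the agent uses the "CWAgent" namespace.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

for page in cloudwatch.get_paginator("list_metrics").paginate(Namespace="CWAgent"):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])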

Configure Amazon CloudWatch to Notify Change In EC2 CPU Utilization

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

A metric alarm watches a single CloudWatch metric or the result of a math expression based on CloudWatch metrics. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. The action can be an Amazon EC2 action, an Auto Scaling action, or a notification sent to an Amazon SNS topic.
Amazon CloudWatch - Application and Infrastructure Monitoring

In this activity guide, we cover step-by-step instructions for creating a CloudWatch alarm that sends a notification when the CPU utilization of an instance exceeds a threshold.
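
For reference, a rough boto3 sketch of the same alarm is shown below; the instance ID and SNS topic ARN are placeholders:

import boto3

# Minimal sketch: alarm when average CPU utilization stays above 70% for two
# consecutive 5-minute periods, notifying an SNS topic. IDs and ARNs are placeholders.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="HighCPUUtilization",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                  # 5-minute periods
    EvaluationPeriods=2,         # two consecutive periods
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alerts-topic"],
)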

Enable CloudTrail and store Logs in S3

Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in Amazon S3. CloudTrail captures a subset of API calls for Amazon S3 as events, including calls from the Amazon S3 console and code calls to the Amazon S3 APIs.

If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Amazon S3.
AWS CloudTrail – Amazon Web Services

In this activity guide, you will follow step-by-step instructions to create a trail and store its logs in an S3 bucket.
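
A minimal boto3 sketch of the same setup is shown below; the trail and bucket names are placeholders, and the bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it:

import boto3

# Minimal sketch: create a trail that delivers events to an existing S3 bucket,
# then turn logging on. Names are placeholders.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="my-trail",
    S3BucketName="my-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="my-trail")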

Load Balancer

 

What Is Azure Load Balancer?

Azure Load Balancer lets you distribute incoming traffic across your backend virtual machines, providing high availability for your application. It is a fully managed service.

Load Balancer

Why Choose Azure Load Balancer?

With Standard Load Balancer, you can scale your applications and create highly available services. Load balancer supports both inbound and outbound scenarios. A load balancer provides low latency and high throughput and scales up to millions of flows for all TCP and UDP applications.
Some of the key scenarios that you can accomplish using Standard Load Balancer include:
  • Load balance internal and external traffic to Azure virtual machines.
  • Increase availability by distributing resources within and across zones.
  • Use health probes to monitor load-balanced resources.
  • Employ port forwarding to access virtual machines in a virtual network by public IP address and port.
  • Standard Load Balancer provides multi-dimensional metrics through Azure Monitor. These metrics can be filtered, grouped, and broken out for a given dimension. They provide current and historic insights into the performance and health of your service. Resource Health is also supported.
  • Load balance services on multiple ports, multiple IP addresses, or both.

What are the types of load balancer in Azure?

In Azure, you can create two types of load balancer:

  • Public load balancer
  • Internal/ private load balancer

Public Load Balancer: A public load balancer can be used to load balance internet traffic to virtual machines. It can provide outbound connections for virtual machines (VMs) inside your virtual network.

Internal/ Private Load Balancer: An internal (or private) load balancer is used to balance traffic from within a virtual network.

internal load balancer

Load balancers are available in two pricing tiers: Basic and Standard.

Basic: The Basic tier provides basic features and comes with several limits: the backend pool is restricted to 300 instances, the pool must belong to a single availability set, and multiple frontends are supported for inbound traffic only.

Standard: The Standard tier offers higher scale and additional features. Unlike the Basic tier, which is free, it is charged on a consumption basis. The backend pool can scale out to 1,000 instances and can span any virtual machines in a single virtual network, including blends of scale sets, availability sets, and standalone machines.

To learn more about the features of the Basic and Standard tiers, check the official Azure Load Balancer SKUs documentation.

Features Of Azure Load Balancer

Azure load balancer

  • Load Balancing: Azure Load Balancer uses a 5-tuple hash composed of source IP, source port, destination IP, destination port, and protocol to distribute traffic.
  • Outbound connection: All outbound flows from a private IP address inside your virtual network to public IP addresses on the Internet can be translated to a frontend IP of the load balancer.
  • Automatic reconfiguration: The load balancer reconfigures itself automatically when instances are scaled up or down; if more virtual machines are added to the backend pool, the load balancer picks them up automatically.
  • Application agnostic and transparent: The load balancer doesn't directly interact with the TCP or UDP payload or the application layer, so any TCP or UDP application can be supported. (Routing traffic based on URL or multi-site hosting requires Azure Application Gateway instead.)
  • Health probes: You can configure a health probe to determine the health of the instances in the backend pool. When the probe detects a failed virtual machine, the load balancer stops routing traffic to it.
  • Port forwarding: The load balancer supports port forwarding, which is useful when you have a pool of web servers and don't want to attach a public IP address to every server in the pool.

Steps To Create An Azure Load Balancer:

Step 1) Log in to Azure Portal (Please make sure you have a subscription before doing all this. If you created a free account for the first time, you’ll already have a FREE TRIAL subscription for 1 month).

Step 2) On top, use the search bar to search for Load Balancer, and then choose Load Balancer

Step 3) On the top left-hand side of the screen, click Add

add


Step 4) In the Basics tab of the Create load balancer page, enter or select the following information, accept the defaults for the remaining settings, and then select Review + create.

  • Subscription: Select your subscription.
  • Resource group: Select Create new and type LBresourcegroup in the text box.
  • Name: Enter myLoadBalancer
  • Region: Select East US
  • Type: Select Public.
  • SKU: Select Basic
  • Public IP address: Select Create new. If you have an existing Public IP you would like to use, select Use existing.
  • Public IP address name: Type myPublicIP in the text box.

create load balancer


Step 5) Now click on Create and verify that your load balancer is created.

lb creation


Task 2: Create a Virtual Network

Step 1) On the upper-left side of the screen, select Create a resource > Networking > Virtual network or search for Virtual network in the search box.

Step 2) In Create virtual network, enter or select the following information in the Basics tab:

create vnet

 

Step 3) Select the Next: IP Addresses button at the bottom of the page. In the IP Addresses tab, enter this information:

create vnet

Step 4) Under Subnet name, select the word default. In Edit subnet, enter this information and select Save then select the Review + create tab.

create subnet


Task 3: Create a Backend Pool

A backend address pool contains the IP addresses of the virtual network interfaces (NICs) connected to the load balancer.

Step 1) Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.

Step 2) Under Settings, select Backend pools, then select Add.

backend pool


Step 3) In the Add a backend pool page, enter the following information and then select Add.

create backend pool


Task 4: Create a Health Probe

The load balancer monitors the status of your app with a health probe. The health probe adds or removes VMs from the load balancer based on their response to health checks.

Step 1) Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.

Step 2) Under Settings, select Health probes, then select Add.

add a health probe


Step 3) Enter the below information in the Add health probe page and then select OK.

create health probe


Task 5: Create a Load Balancer Rule

A load balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic. The source and destination ports are defined in the rule.
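
Taken together, the frontend IP configuration, backend pool, health probe, and load balancing rule form a single load balancer definition. The sketch below shows that overall shape using the azure-mgmt-network Python SDK in its dictionary form; the subscription ID is a placeholder, and the package, method names, and field names are assumptions about the SDK, so treat it as an illustration of how the pieces relate rather than a replacement for the portal steps that follow.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Rough sketch: the same load balancer pieces as in this guide, defined in one call.
# The subscription ID is a placeholder; the resource group and public IP are assumed
# to exist already (created earlier in this guide).
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "LBresourcegroup"
lb_name = "myLoadBalancer"
lb_id = (f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
         f"/providers/Microsoft.Network/loadBalancers/{lb_name}")

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
public_ip = client.public_ip_addresses.get(resource_group, "myPublicIP")

poller = client.load_balancers.begin_create_or_update(
    resource_group,
    lb_name,
    {
        "location": "eastus",
        "sku": {"name": "Basic"},
        "frontend_ip_configurations": [
            {"name": "myFrontEnd", "public_ip_address": {"id": public_ip.id}}
        ],
        "backend_address_pools": [{"name": "myBackendPool"}],
        "probes": [
            {
                "name": "myHealthProbe",
                "protocol": "Http",          # probe the VMs over HTTP
                "port": 80,
                "request_path": "/",
                "interval_in_seconds": 15,
                "number_of_probes": 2,
            }
        ],
        "load_balancing_rules": [
            {
                "name": "myHTTPRule",
                "protocol": "Tcp",           # distribute TCP traffic on port 80
                "frontend_port": 80,
                "backend_port": 80,
                "frontend_ip_configuration": {"id": f"{lb_id}/frontendIPConfigurations/myFrontEnd"},
                "backend_address_pool": {"id": f"{lb_id}/backendAddressPools/myBackendPool"},
                "probe": {"id": f"{lb_id}/probes/myHealthProbe"},
            }
        ],
    },
)
poller.result()   # wait for the deployment to complete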

Step 1) Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.

Step 2) Under Settings, select Load balancing rules, then select Add.

create load balancing rule

Step 3) Use these values to configure the load-balancing rule and then select OK.

add load balancing rule


Task 6: Set Up Two New Windows VMs

Now create two Windows Server virtual machines; to do so, you can follow the steps given in this documentation.

Note: Create both VMs in an availability set and select the previously created VNet named “myVNet” under the Networking tab.

create VM


Task 7: Install IIS for Testing

Install IIS Web Server on the virtual machines to test the load balancer.

Step 1) Run the following command in the Azure Cloud Shell (PowerShell) to install IIS on the virtual machine. Change the Location and ResourceGroupName parameters according to where the VM is deployed:

Set-AzVMExtension `
-ResourceGroupName LBresourcegroup `
-ExtensionName IIS `
-VMName myVM1 `
-Publisher Microsoft.Compute `
-ExtensionType CustomScriptExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
-Location EastUS

install IIS

Note: Change the -VMName line to “myVM2” for installing IIS on the second Virtual Machine.


Task 8: Add Virtual Machines to the Backend Pool

Step 1) Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.

Step 2) Under Settings, select Backend pools, then select myBackendPool.

Step 3) Select myVNet in Virtual network and select Virtual machines under Associated to.

add vm to backend pool


Step 4) In the Virtual machines section, select + Add, select both of the newly created VMs, and then select Save.

add vms


Task 9: Test the Load Balancer

Step 1) Find the public IP address for the load balancer on the Overview screen of the Load Balancer.

test load balancer

Step 2) Copy the public IP address, and then paste it into the address bar of your browser. Check the response. A valid response verifies that the load balancer was successfully created and can successfully connect with the backend VMs.

from vm1


Refresh the browser multiple times and you should see connections to both myVM1 and myVM2.

from vm2

This confirms that the load balancer is working as expected.
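
The same check can also be scripted. The short sketch below requests the load balancer's public IP a few times and prints which backend VM answered; replace the placeholder IP with your own:

import urllib.request

# Minimal sketch: request the load balancer's public IP repeatedly and print which
# backend VM served each response (the IIS page created earlier contains the VM name).
lb_ip = "203.0.113.10"   # placeholder: use your load balancer's public IP

for i in range(6):
    with urllib.request.urlopen(f"http://{lb_ip}/", timeout=10) as resp:
        body = resp.read().decode(errors="replace").strip()
    print(f"Request {i + 1}: served by {body}")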


 

Event-Driven Architectures Using AWS Lambda, SES, SNS, & SQS

AWS Lambda is a service that lets you run your code without managing servers; you pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service with zero administration. You simply upload your code, and Lambda takes care of everything required to run and scale it with high availability. You can also set up your code to be triggered automatically by other AWS services or call it directly from any web or mobile application.

Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication.

An event-driven architecture uses events to trigger and communicate between decoupled services, and it is common in modern applications built with microservices. An event is a change in state, like an item being placed in a shopping cart on an e-commerce website. Events can either carry the state itself or simply be identifiers that point to it.

It has three key components: event producers, event routers, and event consumers. A producer publishes an event to the router, which filters and pushes events to consumers. Producer and consumer services are decoupled, which allows them to be scaled, updated, and deployed independently.
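
As a small sketch of how these pieces map to the services above (the SNS topic and SQS queue are placeholders and are assumed to already exist, with the queue subscribed to the topic), a producer publishes an event to SNS and a consumer polls the subscribed SQS queue:

import boto3

# Minimal sketch: SNS acts as the event router, and an SQS queue subscribed to it
# feeds a consumer. The topic ARN and queue URL are placeholders.
sns = boto3.client("sns", region_name="us-east-1")
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/order-events-queue"

# Producer: publish an event describing a state change.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
    Message='{"event": "ItemAddedToCart", "itemId": "12345"}',
)

# Consumer: poll the subscribed queue and process whatever has arrived.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=5, WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])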

In this activity guide, you will learn about event-driven architectures using AWS Lambda, SES, SNS, and SQS.