Thursday, 15 June 2023

Introduction to Amazon Elastic Container Service (ECS)

Amazon Elastic Container Service (ECS), formerly known as Amazon EC2 Container Service, is a managed service that lets users run Docker-based applications packaged as containers across a cluster of EC2 instances. Running a simple container on a single EC2 instance is easy, but running applications on a cluster of instances and managing that cluster is an administratively heavy process. With the ECS Fargate launch type, the load and responsibility of managing the EC2 cluster are transferred to AWS, and you can focus on application development rather than on managing your cluster architecture.

AWS Fargate is the AWS service that allows ECS to run containers without your having to provision and manage the underlying compute resources.

Terminology:

  • Container: A container is a package that holds an application and everything (dependencies, libraries, etc.) the application requires to run. Containers are independent of the underlying operating system, which makes containerized applications portable, flexible, and scalable. This ensures the application will always run as expected, irrespective of the system and environment in which the container is run.
  • Docker: Docker is a software that facilitates and automates installation and deployment of applications inside Linux containers.
  • Cluster: A logical grouping of EC2 instances that run your containerized application.
  • Container Instance: Each EC2 in an ECS Cluster is called a container instance.

Autoscaling with ECS:

Auto Scaling is an AWS compute service that allows your application to scale up or down according to EC2 instance CPU usage or other criteria (Auto Scaling policies) set by the user. For example, for a web application you can set an Auto Scaling policy such as: when CPU usage exceeds 80% for 5 minutes, add another EC2 instance with the same configuration. This adds another instance behind the ELB and allows requests to be divided across the two EC2 instances. Auto Scaling was initially offered only for the EC2 service; in 2016 it began to support ECS clusters as well.

  • Auto Scaling for ECS clusters can be set up from the AWS CLI, the AWS Console, or the AWS SDKs.
  • You choose the number of starting tasks for the application and attach one or more policies to the Auto Scaling group, and AWS handles the rest.
  • Auto Scaling for ECS also manages the Multi-AZ presence of your ECS cluster. That is, you can attach policies that keep a minimum of one or more instances in each Availability Zone to make your application highly available.
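For example, a target-tracking policy like the CPU example above can be attached to an ECS service through Application Auto Scaling. The following is only a sketch; the cluster name, service name, and capacity limits are placeholders:

```shell
# Register the ECS service's desired task count as a scalable target.
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/my-cluster/my-service \
    --min-capacity 1 \
    --max-capacity 4

# Attach a target-tracking policy that keeps average CPU around 80%.
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/my-cluster/my-service \
    --policy-name cpu80-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{
        "TargetValue": 80.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        }
    }'
```

With this in place, ECS adds or removes tasks to keep the service's average CPU near the target value.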

Features of ECS:

  • Removes the need to run your own cluster management system by interacting with AWS Fargate.
  • Allows seamless deployment of container-based applications. This can be scheduled or done by simple API calls.
  • AWS ECS takes care of the management and monitoring of the application cluster.
  • Amazon ECS is region specific. This means that a cluster can only scale up/down (start-up or shut down container instances) in a single region.
  • Clusters are dynamically scalable.

Launch types: ECS can be launched in the following two modes:

  1. Fargate launch type: As discussed above, the Fargate launch type takes most of the responsibility away from the user, requiring only basic inputs such as CPU type, memory, and IAM policies to run the application cluster.
  2. EC2 Launch: This is a more customizable launch type. Users are responsible for the number of instances in the cluster, scaling their cluster, and more. This allows you to be more in control of your clusters which may be required for security reasons.
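As a sketch, running a task under each launch type with the AWS CLI might look like the following; the cluster, task definition, and subnet names are placeholders:

```shell
# Fargate launch type: AWS manages the underlying capacity. Networking
# details are required because Fargate tasks use awsvpc mode and get
# their own elastic network interface.
aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition my-app:1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc],assignPublicIp=ENABLED}'

# EC2 launch type: the task is placed on container instances you manage.
aws ecs run-task \
    --cluster my-cluster \
    --launch-type EC2 \
    --task-definition my-app:1
```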

Irrespective of the launch type, Amazon CloudWatch is enabled for ECS and can be used for monitoring or to create alarms and events as required.

Advantages:

  • Scalability: ECS automatically scales your applications based on demand, allowing you to easily handle changes in traffic or workload.
  • High availability: ECS provides built-in availability and fault tolerance, ensuring that your applications are always up and running.
  • Cost-effective: ECS enables you to optimize your infrastructure costs by scaling resources based on demand and only paying for what you use.
  • Integration: ECS integrates with other AWS services such as Amazon ECR, AWS Fargate, Amazon CloudWatch, and AWS IAM.
  • Security: ECS provides a secure environment to run your applications, with features such as IAM roles for tasks, VPC isolation, and encryption at rest.

ECS has some limitations to consider. For example, it requires a certain level of expertise to set up and configure the service. Additionally, it may not be suitable for applications that require a high level of customization or specialized networking requirements.

List of available commands:

AWS provides a set of commands that can be run with the AWS CLI (AWS Command Line Interface) to manage your services, much as you would manage them from the AWS Console. The following commands can be used to manage the AWS ECS service.

  • create-capacity-provider: Used to create a new capacity provider. Capacity providers consist of a name and an Auto Scaling group, along with termination and scaling settings.
  • create-cluster: Creates a new AWS ECS Cluster.
  • create-service: Runs and maintains a desired number of tasks as specified by the given task definition.
  • create-task-set: Creates a task set in the specified ECS cluster and service.
  • delete-account-setting: Disables an account setting for an IAM user, role or the root user of the account.
  • delete-attributes: Deletes one or more custom attributes of ECS cluster.
  • delete-cluster: Deletes an entire specified cluster.
  • delete-service: Deletes the specified service.
  • delete-task-set: Deletes the specified task-set.
  • deploy: Deploys new task definition to the specified ECS service.
  • deregister-container-instance: Deregisters the specified container instance from its ECS cluster.
  • deregister-task-definition: Deregisters a task definition from ECS Service.
  • describe-capacity-providers: Describes one or more capacity providers.
  • describe-clusters: Describes one or more of ECS Clusters.
  • describe-container-instances: Describes ECS container instances. Gives metadata about remaining resources on each of these instances.
  • describe-services: Describes Services running in specified cluster.
  • describe-task-definition: Describe task definition.
  • describe-task-sets: Describes task sets of the specified ECS service or cluster.
  • describe-tasks: Describes specified task(s).
  • discover-poll-endpoint: Gives endpoint of AWS ECS agent to poll for updates.
  • list-account-settings: Gives account settings for the specified principal.
  • list-attributes: List attributes for specified ECS resource.
  • list-clusters: Gives a list of existing clusters.
  • list-container-instances: Gives a list of container instances in specified cluster.
  • list-services: Gives a list of services running in specified cluster.
  • list-tags-for-resource: Gives tags associated with specified resource.
  • list-task-definition-families: List all the task definition families registered to your account.
  • list-task-definitions: List task definitions registered to your account.
  • list-tasks: Gives tasks running in the specified cluster.
  • put-account-setting: Used to modify account settings for IAM users, roles or the root user.
  • put-account-setting-default: Used to modify an account setting for all IAM users on an account for whom no individual account setting has been specified.
  • put-attributes: Create/Update attribute for specified ECS resource.
  • put-cluster-capacity-providers: Modify capacity providers for a cluster.
  • register-container-instance: Registers container instance into the specified cluster.
  • register-task-definition: Registers task definition from the specified family and container definitions.
  • run-task: Starts a new task from a task definition.
  • start-task: Starts a new task from the task definition on the specified container instance or instances.
  • stop-task:  Stop specified task. (Note that any tags associated with the task are deleted.)
  • submit-attachment-state-changes: Sent to acknowledge that an attachment changed states.
  • submit-container-state-change: Sent to acknowledge that a container changed states.
  • submit-task-state-change: Sent to acknowledge that a task changed states.
  • tag-resource: Adds the specified tags to the resource whose ARN is supplied.
  • untag-resource: Removes specified tags from a resource.
  • update-cluster-settings: Modify the cluster settings.
  • update-container-agent: Updates the Amazon ECS container agent on a specified container instance.
  • update-container-instances-state: Modifies the status of an Amazon ECS container instance.
  • update-service: Modifies the parameters of a service.
  • update-service-primary-task-set: Modifies which task-set in a service is the primary task-set. Any parameters that are updated on the primary task set in a service will transition to the service.
  • update-task-set: Modifies a task set.
  • wait: Wait until a particular condition is satisfied. Each sub-command polls an API until the listed requirement is met.
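A minimal end-to-end workflow using a few of these commands might look like the following sketch; the cluster, service, and task definition names are placeholders, and taskdef.json is a hypothetical local file containing your container definitions:

```shell
# 1. Create a cluster.
aws ecs create-cluster --cluster-name demo-cluster

# 2. Register a task definition from a local JSON file.
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 3. Run and maintain two copies of the task as a service.
aws ecs create-service \
    --cluster demo-cluster \
    --service-name demo-service \
    --task-definition demo-task \
    --desired-count 2

# 4. Inspect what is running.
aws ecs list-services --cluster demo-cluster
aws ecs describe-services --cluster demo-cluster --services demo-service
```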

Create and Configure the Auto Scaling Group in EC2

 Prerequisites: an AWS account and basic familiarity with EC2.

Auto Scaling is an Amazon Web Services feature that allows the number of instances to scale when traffic or CPU load increases. The Auto Scaling service monitors all instances configured in the Auto Scaling group and ensures that load is balanced across them.

Depending on the load, the scaling group adds instances according to its configuration. When we create the Auto Scaling group, we configure the desired, minimum, and maximum capacity, and we can also configure CPU utilization thresholds. For example, if average CPU utilization across all instances exceeds 60%, one more instance is launched, and if it drops below 30%, one instance is terminated. These settings are entirely up to us and our requirements. If any instance fails for any reason, the scaling group maintains the desired capacity by starting another instance.

The Auto Scaling group performs horizontal scaling. This service is very important nowadays because we no longer need to create new instances or monitor them manually.

Steps to Setup the Auto Scaling Group in EC2

Step 1: Click on the All Services.

Step 2: Click on the EC2.

Step 3: Scroll down, click on Launch Templates, and then click on Create launch template.

Step 4: Type the Template name.

Step 5: Select the Amazon Machine Image.

Step 6: Select the Instance Type and Key pair.

Step 7: Select the Security Group or Create the new one.

Step 8: Click on the Create Launch Template.

Step 9: Now you can see the template is created. Now, scroll down and click on the Auto Scaling Groups.

Step 10: Click on the Create Auto Scaling group.

Step 11: Type the Auto Scaling group name.

Step 12: Select your Template.

Step 13: Select the VPC or go with the default VPC and also select the Availability zone.

Step 14: Configure the Group size and Scaling policies.

Select as per your requirement:

  • Desired: 4
  • Minimum: 4
  • Maximum: 8
Step 15: Select the Target tracking scaling policy.

Step 16: Click on the Create Auto Scaling Group.

  • Now you can see the Auto Scaling group being created, and it launches the desired number of EC2 instances.
  • We set the desired capacity to 4, and you can see that 4 instances are running.
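The console walkthrough above has an equivalent AWS CLI sketch. The template name, AMI ID, key pair, subnet IDs, and target value below are placeholders:

```shell
# Create a launch template (assumes an existing AMI ID and key pair).
aws ec2 create-launch-template \
    --launch-template-name demo-template \
    --launch-template-data '{"ImageId":"ami-0abcdef1234567890","InstanceType":"t2.micro","KeyName":"my-key"}'

# Create the Auto Scaling group with desired 4, minimum 4, maximum 8.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name demo-asg \
    --launch-template LaunchTemplateName=demo-template \
    --desired-capacity 4 --min-size 4 --max-size 8 \
    --vpc-zone-identifier "subnet-0abc,subnet-0def"

# Attach a target-tracking scaling policy on average CPU utilization.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name demo-asg \
    --policy-name cpu-target-tracking \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"}
    }'
```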

Amazon Web Services – Scaling Amazon EC2

 Scalability refers to the capacity of a software solution to manage rising workloads. In simple terms, it is the ability of a system to readily add extra processing resources to handle the increased loads. 

Scaling Amazon EC2 means you start with the resources you require at the time of starting your service and build your architecture to automatically scale in or out, in response to the changing demand. As a result, you only pay for the resources you really utilize. You don’t have to be concerned about running out of computational power to satisfy your consumer’s demand.

Let’s look at an example. Let’s say you try to access a website, and it works flawlessly with no latency. Now, at peak times, the same website doesn’t load or gets timed out often. This is due to the fact that the website started receiving more requests than it could manage.

That’s the point where Amazon EC2 Auto Scaling comes into the picture. You can use Amazon EC2 Auto Scaling to add or remove Amazon EC2 instances in response to changes in your application demand. By dynamically scaling your instances in and out as needed, you can maintain higher application availability.

You can use three scaling techniques within Amazon EC2 Auto Scaling i.e. Dynamic Scaling, Predictive Scaling, and Scheduled Scaling. They are explained in detail below:

1. Dynamic Scaling adapts to changing conditions and adjusts the number of EC2 instances to match demand, which lets you follow the demand curve for your application. With target tracking scaling policies, for example, you can choose a load metric for your application, such as CPU utilization. Alternatively, you might use the Application Load Balancer’s “Request Count Per Target” metric, a load balancing option of the Elastic Load Balancing service. Amazon EC2 Auto Scaling then modifies the number of EC2 instances as needed to keep you on target. 

2. Predictive Scaling helps you to schedule the right number of EC2 instances based on the predicted demand. You can use both dynamic and predictive scaling approaches together for faster scaling of the application. Predictive Scaling forecasts future traffic and allocates the appropriate number of EC2 instances ahead of time. Machine learning algorithms in Predictive Scaling identify changes in daily and weekly patterns and automatically update projections. In this way, the need to manually scale the instances on particular days is relieved. 

3. Scheduled Scaling, as the name suggests, allows you to scale your application based on a schedule you set. For example, a coffee shop owner may employ more baristas on weekends because of increased demand and fewer on weekdays when demand is reduced.
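As a sketch, the coffee-shop example could be expressed with scheduled actions via the AWS CLI; the group name, action names, capacities, and times are hypothetical:

```shell
# Scale up to 6 instances at 08:00 UTC every Saturday...
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name coffee-shop-asg \
    --scheduled-action-name weekend-scale-up \
    --recurrence "0 8 * * SAT" \
    --desired-capacity 6

# ...and back down to 2 instances at 08:00 UTC every Monday.
aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name coffee-shop-asg \
    --scheduled-action-name weekday-scale-down \
    --recurrence "0 8 * * MON" \
    --desired-capacity 2
```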

Computing power is a programmed resource in the cloud, so you may take a more flexible approach to scale your applications. When you add Amazon EC2 Auto Scaling to an application, you may create new instances as needed and terminate them when they’re no longer in use. In this way, you only pay for the instances you use, when they’re in use.

Some Ways to scale Amazon EC2:

  • Horizontal Scaling: Horizontal scaling involves adding more instances to your application to handle increased demand. This can be done manually by launching additional instances, or automatically using Amazon EC2 Auto Scaling, which monitors your application’s workload and adds or removes instances based on predefined rules.
  • Vertical Scaling: Vertical scaling involves increasing the resources of existing instances, such as CPU, memory, or storage. This can be done manually by resizing instances, or automatically using Amazon EC2 Auto Scaling with launch configurations that specify instance sizes based on the workload.
  • Load Balancing: Load balancing involves distributing incoming traffic across multiple instances to improve performance and availability. Amazon Elastic Load Balancing (ELB) is a service that automatically distributes incoming traffic across multiple instances in one or more Availability Zones.
  • Multi-Availability Zone Deployment: Multi-Availability Zone (AZ) deployment involves launching instances in multiple AZs to improve availability and fault tolerance. Amazon EC2 Auto Scaling can be used to automatically launch instances in additional AZs to maintain availability in case of an AZ outage.
  • Containerization: Containerization involves using containers to package and deploy applications, making them more portable and easier to manage. Amazon Elastic Container Service (ECS) is a service that makes it easy to run, stop, and manage Docker containers on a cluster of EC2 instances.

Benefits of Auto Scaling EC2:

Scaling, as discussed, should be implemented for an EC2 deployment in order to achieve more flexibility when demand for the application increases. Let’s discuss the detailed benefits of Auto Scaling EC2. 
One way to make the most of the AWS Cloud is to incorporate Amazon EC2 Auto Scaling into your application design. Applications gain the following benefits when you use Amazon EC2 Auto Scaling:

  1. Amazon EC2 Auto Scaling ensures that your application has enough capacity to handle current traffic demand at all times, adding or removing instances as the application's demand changes. This automatic adding and terminating of instances on demand gives the application better availability.
  2. Suppose an instance becomes unhealthy over time but remains in use; its chances of crashing increase. This is another use case for EC2 Auto Scaling: it recognizes which instance is unhealthy (in technical terms, slow, inefficient, etc.), terminates it automatically, and replaces it with a brand-new instance. Furthermore, you can employ several Availability Zones with Amazon EC2 Auto Scaling. If one zone goes down or crashes, EC2 Auto Scaling compensates by launching instances in other zones, and traffic is migrated to the zones where new instances were added until the failed zone is healthy again.
  3. Auto Scaling is highly cost-efficient and should be employed if you are not sure about the traffic your application will receive. Amazon EC2 Auto Scaling raises and reduces capacity dynamically as needed, so new instances are created, and charged for, only according to demand. As soon as traffic to the application drops, some instances are terminated; in this way, you use and pay only for the instances you really need.

Limitations of EC2 Autoscaling:

There are several limitations to consider when using Amazon EC2 Auto Scaling:

  1. Number of instances: Amazon EC2 Auto Scaling can support a maximum of 500 instances per Auto Scaling group.
  2. Instance health checks: Auto Scaling uses Amazon EC2 instance health checks to determine the health of an instance. If an instance fails a health check, Auto Scaling will terminate it and launch a new one. However, this process can take some time, which can impact the availability of your application.
  3. Scaling policies: Auto Scaling allows you to set scaling policies based on CloudWatch metrics, but these policies can be complex to configure and may not always scale your application as expected.
  4. Application dependencies: If your application has dependencies on other resources or services, such as a database or cache, it may not scale as expected if those resources become overloaded or unavailable.
  5. Cost: Using Auto Scaling can increase the cost of running your application, as you may be charged for the additional instances that are launched.

Overall, it’s important to carefully consider the limitations of Amazon EC2 Auto Scaling and how they may impact your application when deciding whether to use this service.

How does EC2 Autoscaling work?

Amazon EC2 Auto Scaling gives you the ability to automatically scale instances to match demand. If problems are detected, the service replaces unhealthy instances with fully functional ones. To automate fleet management for EC2 instances, Amazon EC2 Auto Scaling performs three major functions:

  • Balancing capacity across Availability Zones: If your application spans three Availability Zones, Amazon EC2 Auto Scaling helps you balance the number of instances across those zones. No zone ends up with markedly more or fewer instances than the others, resulting in a balanced distribution of traffic and load.
  • Replacing and Repairing unhealthy instances: If the instances fail to pass the health check, Autoscaling replaces them with healthy instances. As a result, the problem of instances crashing is reduced, and you won’t have to manually verify their health or replace them if they’re determined to be unhealthy.
  • Monitoring the health of instances: While the instances are running, Amazon EC2 Auto Scaling ensures that they are healthy and that traffic is evenly allocated among them. It does health checks on the instances on a regular basis to see if they’re experiencing any issues.


Bash Script – Features

 These days there are various shells available, and each one has its own unique features. Bash likewise offers some of the best features, which make it productive and usable. Each project has specific feature requirements, and developers choose a shell according to those features. It is therefore worth knowing the features of Bash scripting before you start using it. Some of the key features of Bash shell scripting are described here.

1. Compatibility

Bash scripts conventionally use the .sh extension. Bash derives from the original UNIX Bourne shell and incorporates useful features of the Korn and C shells, such as command history, functions, and job control.

2. Invocation Options

In addition to the single-character command-line options (e.g., -i, -c, -s), which can also be configured with the shell’s set builtin, there are several multi-character options (e.g., --debugger, --help, --login) that you can use when invoking Bash.

3. Bash Startup Files

Bash startup files are scripts that Bash reads and executes when it is launched. Each file has its own specific purpose, and together these files are used to create the shell environment.

4. Interactive Shells

An interactive shell generally reads input from and writes output to the user’s terminal. Bash runs interactively when invoked without non-option arguments, except when it is given a command string with -c or told to read commands from standard input with -s, which also allows you to set the positional parameters.

5. Conditionals

Conditional statements are supported and can be implemented with the [[ compound command and the test and [ builtin commands.
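A minimal sketch showing both forms:

```shell
#!/usr/bin/env bash
# Conditionals with the [[ compound command and the [ builtin.
classify() {
    local n=$1
    if [[ $n -gt 0 && $n -lt 100 ]]; then   # [[ ]] supports && directly
        echo "small positive"
    elif [ "$n" -ge 100 ]; then             # [ is the traditional test builtin
        echo "large"
    else
        echo "non-positive"
    fi
}

classify 5     # -> small positive
classify 250   # -> large
classify -3    # -> non-positive
```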

6. Shell Arithmetic

The shell can evaluate arithmetic expressions, for example via the $(( )) shell expansion or the let builtin.
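For example:

```shell
#!/usr/bin/env bash
# Arithmetic evaluation with $(( )) and the let builtin.
a=7
b=3
sum=$(( a + b ))        # 10
product=$(( a * b ))    # 21
let "quotient = a / b"  # integer division -> 2
echo "$sum $product $quotient"   # prints: 10 21 2
```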

7. Aliases

An alias lets a single word stand in for a command or a set of commands. The shell maintains a list of aliases that can be set and unset with the alias and unalias builtins. Aliases reduce typing and save time.
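A small sketch; note that aliases are expanded only in interactive shells by default, so a script must enable the expand_aliases option first:

```shell
#!/usr/bin/env bash
# Aliases normally expand only in interactive shells;
# enable them in a script with the expand_aliases shell option.
shopt -s expand_aliases

alias greet='echo hello'   # 'greet' now stands in for the echo command

msg=$(greet)               # the alias expands because it was defined on an earlier line
echo "$msg"                # prints: hello
```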

8. Arrays

Bash provides one-dimensional indexed and associative arrays, which you can use to store, reference, and manipulate lists of data.
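A short sketch of both array kinds (associative arrays require bash 4 or later):

```shell
#!/usr/bin/env bash
# Indexed and associative arrays.
fruits=(apple banana cherry)     # indexed array
echo "${fruits[1]}"              # prints: banana (indices start at 0)
fruits+=(date)                   # append an element
echo "${#fruits[@]}"             # length -> prints: 4

declare -A capitals              # associative array (string keys)
capitals[France]=Paris
capitals[Japan]=Tokyo
echo "${capitals[Japan]}"        # prints: Tokyo
```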

9. Directory Stack

The directory stack is a list of recently visited directories. The pushd builtin adds directories to the stack as it changes the current directory, and the popd builtin removes directories from the stack and changes the current directory back to the directory that was removed. The dirs builtin displays the contents of the stack.
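A small sketch:

```shell
#!/usr/bin/env bash
# pushd saves the current directory on the stack and changes to a new one;
# popd returns to the saved directory; dirs lists the stack.
start=$PWD
pushd /tmp > /dev/null
in_tmp=$PWD        # now /tmp
dirs               # prints the stack: /tmp followed by the old directory
popd > /dev/null
back=$PWD          # back where we started
```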

Wednesday, 14 June 2023

Difference between AWS Cloudwatch and AWS Cloudtrail

1. AWS Cloudwatch: 

It is a monitoring tool used for real-time monitoring of AWS resources and applications. It provides reports based on this monitoring, which can be used to analyze system performance. CloudWatch also detects irregular behavior in your environments and can raise alarms. It monitors various AWS resources such as Amazon EC2, Amazon RDS, Amazon S3, Elastic Load Balancing, etc. 

2. AWS Cloudtrail:

 It is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It continuously logs and monitors activity and actions across your AWS account, and it provides the event history of your account, including information about who accessed which AWS services. Combined with other services, CloudTrail events can also be used to trigger remediation actions.

AWS CloudWatch vs. AWS CloudTrail:

  1. CloudWatch is mainly concerned with what is happening on AWS resources; CloudTrail is mainly concerned with what is done on AWS and by whom.
  2. CloudWatch is a monitoring service for AWS resources and applications; CloudTrail records API activity in the AWS account.
  3. With CloudWatch you can track metrics, monitor log files, and set alarms for various events; CloudTrail provides greater visibility into user activity by tracking AWS console actions, including who made a call, from which IP address, and when.
  4. CloudWatch specifically records application logs; CloudTrail provides information about what occurred in your AWS account.
  5. CloudWatch delivers metric data in 1-minute periods for detailed monitoring and 5-minute periods for basic monitoring; CloudTrail typically delivers an event within 15 minutes of the API call.
  6. CloudWatch stores data in its own dashboards in the form of metrics and logs; CloudTrail can centralize logs across regions, and even across many accounts, and store them in an S3 bucket.
  7. CloudWatch offers free basic monitoring for resources such as EC2 instances and RDS by default; CloudTrail is enabled by default when an AWS account is created and starts working from then on.
  8. CloudWatch focuses on the health of AWS resources; CloudTrail focuses on the activities performed inside the AWS environment.
  9. CloudWatch offers ways to collect, analyze, and monitor application health; CloudTrail provides auditing services for AWS accounts.
  10. In CloudWatch, logs are saved in log groups; in CloudTrail, logs are saved in an S3 bucket.
  11. CloudWatch is available in free and premium pricing models; CloudTrail is free if you set up a single trail delivering a single copy of management events.
  12. Companies using CloudWatch include Airbnb and 9GAG; companies using CloudTrail include Netflix and Slack.
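To make the distinction concrete, here is a sketch of one typical CLI command for each service; the instance ID and threshold values are placeholders:

```shell
# CloudWatch: alarm when an EC2 instance's average CPU exceeds 80%
# for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0abcdef1234567890 \
    --statistic Average --period 300 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --evaluation-periods 2

# CloudTrail: ask who performed a given API action recently.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
    --max-results 5
```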

Tuesday, 30 May 2023

Introduction to Amazon CloudWatch Synthetics

 In this article, we will get an introduction to Amazon Cloudwatch Synthetics. With this feature, you can create different kinds of Canaries to continually verify your user experience even when you don’t have traffic, monitor and test for unusual behavior, and trace issues to their source for faster resolution.

Let’s start by navigating to CloudWatch synthetics.

Amazon CloudWatch synthetics allows you to create configurable scripts called canaries to continuously monitor your application endpoints and APIs. By using canaries you can discover issues with your website before your customers do. 

Here we already have two canaries monitoring our sample pet adoption site. 

Let’s create a third canary. Canaries can be created from pre-existing blueprints, from scripts that can be edited inline, or by using a workflow builder or a canary recorder. 

For this canary, we’ll use the Heartbeat monitoring blueprint. The heartbeat scripts load the specified URL and store a screenshot of the page, an HTTP archive file, and logs of accessed URLs. We’ll enter the URL for our sample pet adoption site in this case.

Let’s move on without inputting any environment variables.

We’ll also retain the default settings for scheduling.

We’ll also keep the default timeout settings. If you have specific data retention policies you can change the default settings here to maintain compliance.

Under access permissions, you can choose whether to create a new Identity and Access Management (IAM) role specific to this canary or use an existing role. We’ll create a new role.

You can allow Synthetics to create CloudWatch alarms for your canary automatically. Let’s add an alarm that will notify us via Amazon SNS if our canary fails two or more times in 15 minutes.

If your endpoint is under a virtual private cloud you can select it here.

CloudWatch Synthetics can be integrated with AWS X-Ray and CloudWatch ServiceLens service maps. Let’s enable the integration and create the canary.

Our heartbeat monitoring canary has been successfully created. 

Canaries with tracing enabled also appear on the service map in both CloudWatch ServiceLens and X-Ray. Returning to the top of the page, we can select the monitoring tab to view graphs of the CloudWatch metrics published by this canary for the selected time range. The configuration tab details current configuration information for the canary, including when it was last modified. 
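The same kind of canary could also be created from the AWS CLI. This is only a sketch; the bucket names, role ARN, handler, and runtime version string are placeholders that must be replaced with real values for your account:

```shell
# Create a heartbeat canary from a script already uploaded to S3.
aws synthetics create-canary \
    --name pet-site-heartbeat \
    --code '{"S3Bucket":"my-canary-code","S3Key":"heartbeat.zip","Handler":"pageLoadBlueprint.handler"}' \
    --artifact-s3-location s3://my-canary-artifacts/ \
    --execution-role-arn arn:aws:iam::123456789012:role/my-canary-role \
    --runtime-version syn-nodejs-puppeteer-6.2 \
    --schedule '{"Expression":"rate(5 minutes)"}'

# Start it and check its status.
aws synthetics start-canary --name pet-site-heartbeat
aws synthetics get-canary --name pet-site-heartbeat
```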

Features:

  1. Canary Tests: CloudWatch Synthetics allows you to create canary tests that simulate user behavior and test the availability and performance of your applications, APIs, and websites. You can create canary tests using pre-built blueprints or by writing your own custom scripts.
  2. Monitoring: CloudWatch Synthetics monitors your canary tests in real-time and provides detailed metrics and logs for each test. It also alerts you when a test fails or experiences performance issues.
  3. Scripting: CloudWatch Synthetics provides a powerful scripting environment that allows you to create custom scripts to test your applications, APIs, and websites. Canary scripts are written in languages such as Node.js (JavaScript) and Python.
  4. Integrated with CloudWatch: CloudWatch Synthetics is fully integrated with Amazon CloudWatch, which means that you can use CloudWatch to view and analyze your canary test results alongside other metrics and logs.
  5. Simple Pricing: CloudWatch Synthetics is offered at a simple, pay-as-you-go pricing model. You only pay for what you use, and there are no upfront costs or minimum fees.