Monday, 2 January 2023

AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use AWS service for deploying and scaling web applications and services developed with Python, Ruby, Java, .NET, PHP, Node.js, Go, and Docker on familiar servers such as Apache, Passenger, Nginx, and IIS. In this post, we are going to discuss what Elastic Beanstalk is, what its advantages are, and how it works.

With Elastic Beanstalk, you just have to upload your code and it automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

Old Architecture vs Elastic Beanstalk

Check Also: Free AWS Training and Certifications

What Is AWS Elastic Beanstalk?

  • An orchestration service offered by AWS.
  • Used to deploy and scale web applications and services.
  • Supports Java, Python, Ruby, .NET, PHP, Node.js, Go, and Docker on familiar servers such as Apache, Passenger, Nginx, and IIS.
  • The fastest and simplest way to deploy your application to AWS.
  • It takes care of deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring.
  • You have full control over the AWS resources.

Features of Elastic Beanstalk

Also Check: AWS Inspector

Benefits Of AWS Elastic Beanstalk

  • Fast and simple to deploy: It is the simplest and fastest way to deploy your application on AWS. You just use the AWS Management Console, a Git repository, or an integrated development environment (IDE) such as Eclipse or Visual Studio to upload your application, and it automatically handles the deployment details of capacity provisioning, auto-scaling, load balancing, and application health monitoring. Within minutes, your application will be ready to use without any infrastructure or resource configuration work on your part.
  • Scalable: It automatically scales your application up and down based on your application’s needs, using easily adjustable Auto Scaling settings. For example, you can use CPU utilization metrics to trigger Auto Scaling actions (see the sketch after this list). With this, your application can handle peaks in workload or traffic while minimizing your costs.
  • Developer productivity: AWS Elastic Beanstalk provisions and operates the infrastructure and manages the application stack (platform) for you, so you don’t have to spend the time or develop the expertise. It also keeps the underlying platform running your application up to date with the latest patches and updates. So, you can focus on writing code rather than spending time managing and configuring servers, load balancers, databases, firewalls, and networks.
  • Complete infrastructure control: You are free to select the AWS resources, such as the Amazon EC2 instance type, that are optimal for your application. Additionally, it lets you “open the hood” and gives you full control over the AWS resources powering your application. If you decide you want to take over some (or all) of the elements of your infrastructure, you can do so seamlessly by using Elastic Beanstalk’s management capabilities.
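To make the Auto Scaling point concrete, here is a minimal boto3 sketch of adjusting an environment’s scaling trigger to act on CPU utilization. The environment name, thresholds, and instance counts are illustrative assumptions, not values from this post:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="my-php-env",  # assumed environment name
    OptionSettings=[
        # Keep between 1 and 4 instances in the Auto Scaling group.
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "1"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
        # Scale on average CPU utilization: add an instance above 70%,
        # remove one below 30%.
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "30"},
    ],
)
```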

Key Concepts Of AWS Elastic Beanstalk

AWS Elastic Beanstalk enables you to manage all of the resources that run your application as environments. Here are some key concepts:

Elastic Beanstalk core architecture

  • Application: An AWS Elastic Beanstalk application is a logical collection of Elastic Beanstalk components, including environments, environment configurations, and application versions. In Elastic Beanstalk, an application is conceptually similar to a folder.
  • Application version: An application version refers to a specific, labelled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java .war file.
  • Environment: An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time, but you can run the same application version or different application versions in many environments simultaneously.
  • Environment configuration: An environment configuration is a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources, depending on the changes you made.
  • Saved configuration: A saved configuration is a template that you can use as a starting point for creating unique environment configurations. You can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, AWS CLI, EB CLI, or API.
  • Platform: A platform is a combination of an operating system, programming language runtime, application server, web server, and Elastic Beanstalk components. Elastic Beanstalk provides a variety of platforms on which you can build your applications.

How Does AWS Elastic Beanstalk Work?

Using Elastic Beanstalk, you create an application, upload an application version in the form of an application code bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some information about the application. Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After your environment launches, you can manage your environment and deploy new application versions. The following diagram illustrates the workflow of Elastic Beanstalk.

Working of Elastic Beanstalk
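This workflow maps directly onto the Elastic Beanstalk API. Below is a minimal boto3 sketch of the three steps: create an application, register an application version from a source bundle already uploaded to S3, and launch an environment. The bucket, key, names, and solution stack string are illustrative assumptions:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# 1. Create the application (the logical folder for versions and environments).
eb.create_application(ApplicationName="my-app")

# 2. Register a version pointing at a code bundle already uploaded to S3.
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-bucket", "S3Key": "my-app-v1.zip"},  # assumed bundle
)

# 3. Launch an environment running that version. Solution stack names change
#    over time; list currently valid ones with eb.list_available_solution_stacks().
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running PHP 8.1",  # assumed stack
)
```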

It supports the DevOps practice named “rolling deployments.” When enabled, your configuration deployments work hand in hand with Auto Scaling to ensure there is always a defined number of instances available as configuration changes are made. This gives you control over how Amazon EC2 instances are updated; a sketch of enabling it is shown below.
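As a rough illustration, rolling application deployments and rolling configuration updates are both controlled through environment option settings. The sketch below, with an assumed environment name and batch sizes, enables a rolling policy that updates 30 percent of instances at a time while keeping at least one instance in service:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="my-php-env",  # assumed environment name
    OptionSettings=[
        # Roll out new application versions to 30% of instances at a time.
        {"Namespace": "aws:elasticbeanstalk:command", "OptionName": "DeploymentPolicy", "Value": "Rolling"},
        {"Namespace": "aws:elasticbeanstalk:command", "OptionName": "BatchSizeType", "Value": "Percentage"},
        {"Namespace": "aws:elasticbeanstalk:command", "OptionName": "BatchSize", "Value": "30"},
        # For configuration changes that replace instances, keep at least
        # one instance in service while batches are updated.
        {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate", "OptionName": "RollingUpdateEnabled", "Value": "true"},
        {"Namespace": "aws:autoscaling:updatepolicy:rollingupdate", "OptionName": "MinInstancesInService", "Value": "1"},
    ],
)
```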

Read this blog where we have discussed 5 Pillars of AWS Well-Architected Framework and their design principles.

Hands-on: Creating an Elastic Beanstalk Application

  1. From the Compute section, click on Elastic Beanstalk.
    AWS Management Console
  2. Click on Create Application under Elastic Beanstalk.
    AWS Elastic Beanstalk Console
  3. Give your application a name.
    Elastic Beanstalk Application Name
  4. In the Application tags section you can tag your application by giving key-value pairs.
    Tag for Elastic Beanstalk Application
  5. In the Platform section, you have to choose a platform, platform branch, and platform version. For Platform, select PHP as the web application environment. For Platform branch, select the branch and operating system on which your environment will run; I am using Amazon Linux 2. For Platform version, select the recommended version.
    Runtime platform for Elastic Beanstalk Application
  6. In the Application code section you have to choose a sample application and then click on Create application.
    Sample Application for Elastic Beanstalk
  7. Once all the backend resources are created, you will see the following screen. Click on the link to see your sample application.
    PHP application Elastic Beanstalk
  8. After clicking on the link you will see your application on the new tab of your browser.
    Running application on Elastic Beanstalk
    We have now created a running sample PHP application using Elastic Beanstalk. (A quick programmatic check of the environment is sketched below.)
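If you prefer to verify the environment from code rather than the console, here is a minimal boto3 sketch, with an assumed application name, that prints each environment’s status, health, and URL:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Assumed application name from the walkthrough above.
resp = eb.describe_environments(ApplicationName="my-app")

for env in resp["Environments"]:
    # Status becomes "Ready" and Health "Green" once launch completes.
    print(env["EnvironmentName"], env["Status"], env["Health"], env["CNAME"])
```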


5 Pillars of AWS Well-Architected Framework

The 5 Pillars of the AWS Well-Architected Framework are as follows:

  • Operational Excellence
  • Security
  • Reliability
  • Performance Efficiency
  • Cost Optimization

Want to learn how to use the best architecting practices and take your skills to the next level with the AWS Certified Solutions Architect Associate Certification?

What Does Well-Architected Mean?

Essentially, a well-architected framework is an approach to designing and architecting cloud infrastructure that is high-performing, resilient, and efficient for your applications and workloads.
It must be done in such a way that it achieves the desired business use case and technical outcomes, all while optimizing costs.

Click here to read about AWS CloudFormation, a good example of how DevOps principles are used in practice.

Why Is the AWS Well-Architected Framework Necessary?

These pillars allow the architect to evaluate the infrastructure at hand, thereby allowing the cloud to be fully compliant while making use of the best practices available.

It took AWS Solutions Architects several years of experience to identify a set of best practices for architecting in AWS. From this they designed the five pillars, finely tuned keeping in mind the underlying purpose of AWS: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

Also read: This post covers the AWS Free Tier Account Overview. Amazon Web Services (AWS) provides a 12-month Free Tier account to new subscribers to get hands-on experience with AWS cloud services.


Let’s discuss one by one what each pillar dictates:

1. Operational Excellence

Operational Excellence is the first pillar of the AWS Well-Architected Framework. It includes the ability to support development and run workloads effectively, gain insight into operations, and continuously improve processes and procedures to deliver business value.

AWS Well Architected Framework Operational Excellence
You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

Design Principles

There are five design principles for operational excellence in the cloud:

  • Perform operations as code: Limit human error and enable consistent responses to events (see the sketch after this list).
  • Make frequent, small, reversible changes: Make changes in small increments that can be reversed if they fail.
  • Refine operations procedures frequently: As the workload evolves, procedures should evolve along with it.
  • Anticipate failure: Test failure scenarios and identify potential sources of failure.
  • Learn from all operational failures: Learn from every operational event that fails and share the lessons across the entire team.

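To ground the “perform operations as code” principle, here is a minimal boto3 sketch that provisions infrastructure from a version-controlled template instead of manual console clicks. The stack name and the deliberately tiny one-resource template are illustrative assumptions:

```python
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# A tiny template: one S3 bucket, defined entirely as code.
template = {
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    }
}

# Creating the stack applies the template; repeating the operation stays
# consistent because the definition lives in code, not in human memory.
cfn.create_stack(
    StackName="ops-as-code-demo",  # assumed stack name
    TemplateBody=json.dumps(template),
)
```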
Also Read: Our Previous Blog On Blue Green Deployment.

2. Security

Security in the cloud is a big concern for everyone. Infrastructure should be designed so that it provides complete data protection while managing privileges across all AWS accounts, protecting the infrastructure, and identifying any security breach with detective controls.

AWS Well Architected Framework Security
You can find prescriptive guidance on implementation in the Security Pillar whitepaper.

Design Principles

There are seven design principles for security in the cloud:

  • Implement a strong identity foundation: Provide the least required permissions and enforce separation of duties.
  • Enable traceability: Monitor, alert, and audit actions.
  • Apply security at all layers: Utilize multiple security controls.
  • Automate security best practices: Use automated security mechanisms to improve security and scale more rapidly and cost-effectively.
  • Protect data in transit and at rest: Segregate data into sensitivity levels and use encryption, tokenization, and access control for data protection.
  • Keep people away from data: Eliminate the need for direct access or manual processing of data.
  • Prepare for security events: Create incident management and investigation policy.

Also Check: Our Blog post on AWS Secrets Manager

3. Reliability

Reliability, the third of the 5 Pillars of the AWS Well-Architected Framework, encompasses the ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle.

AWS Well Architected Framework Reliability
You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

Design Principles

There are five design principles for reliability in the cloud:

  • Automatically recover from failure: Use key performance indicators (KPIs) to automatically trigger a recovery action when a threshold is breached.
  • Test recovery procedures: Use automation to simulate different failures and test recovery procedures.
  • Scale horizontally to increase aggregate workload availability: Distribute requests across multiple, smaller resources to reduce the impact of a single failure; this can be achieved using an Elastic Load Balancer.
  • Stop guessing capacity: Use Auto Scaling to automate the addition or removal of resources.
  • Manage change in automation: Changes to the infrastructure should be made using automation.

Check Out: Our Blog post on AWS SNS

4. Performance Efficiency

Performance Efficiency, the fourth pillar of the AWS Well-Architected Framework, includes the ability to use compute resources efficiently while maintaining performance as workloads change. It requires continuous monitoring to ensure that you are aware of how the workload changes and of what trade-offs must be made while architecting to improve performance efficiency.

AWS Well Architected Framework Performance Efficiency
You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

Design Principles

There are five design principles for performance efficiency in the cloud:

  • Democratize advanced technologies: Delegating complex tasks to your cloud vendor.
  • Go global in minutes: Deploying your workload in multiple AWS Regions.
  • Use serverless architectures: Remove the need for you to run and maintain physical servers for traditional compute activities.
  • Experiment more often: Carry out comparative testing using different types of instances, storage, or configurations.
  • Consider mechanical sympathy: Use the technology approach that aligns best with your workload.

Also Check: Our Blog on AWS Storage

5. Cost Optimization

Cost Optimization, the fifth pillar of the AWS Well-Architected Framework, includes the ability to run systems that deliver business value at the lowest price point.

AWS Well Architected Framework Cost Optimization
You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

Design Principles

There are five design principles for cost optimization in the cloud:

  • Implement cloud financial management: Utilize Cloud Financial Management / Cost Optimization practices to become a cost-efficient organization.
  • Adopt a consumption model: Pay only for the resources that you use, and increase or decrease resources depending on business requirements.
  • Measure overall efficiency: Monitor to know the gains you make from increasing output and reducing costs.
  • Stop spending money on undifferentiated heavy lifting: AWS manages data center operations like racking, stacking, and powering the servers.
  • Analyze and attribute expenditure: Accurately identify the usage and cost of systems. This provides transparent attribution of IT costs to individual workload owners.

Types of offers in AWS

12-Months Free: Only new customers are eligible for 12 months of free service. If you exceed the allowed usage then you will be charged according to pay-as-you-go service rates.

Always Free: These services can be availed by all old and new customers. They don’t expire after 12 months of free usage. There are a few limitations on usage, but they are very advantageous when you perform POCs.

Trials: Services are available for short-term, limited usage, and the free trial starts from the time you start using the service. After the period is over, you can still avail of all the services by paying pay-as-you-go service rates.

Note: The AWS Free Tier isn’t available within the AWS GovCloud (US) Regions or the China (Beijing) Region at this point. The Lambda free tier is available within the AWS GovCloud (US) Region.

AWS Free Tier limits

All services that are provided with the AWS Free Tier have limits, and usage is capped. Many services have multiple kinds of limits. For instance, Amazon EC2 has limits on both the type of instance you can use and the number of hours you can use each month for 12 months. Amazon S3 has a limit on how much storage you can use and on how often you can call certain operations each month. For instance, the AWS Free Tier covers the first 20,000 times you retrieve a file from Amazon S3, but you’re charged for additional file retrievals. Each service has limits that are unique to that service.

Database limits

AWS offers 4 database services, and all are available in an AWS Free Tier account, with a few limitations of course.

RDS: It consists of two components: compute and storage. It makes it easy to set up, operate, and scale relational databases in the cloud. It provides cost-efficient, resizable capacity in an industry-standard relational database and manages common database administration tasks.

DynamoDB: Its data storage pricing is pretty straightforward: you’re charged a price per GB per month. This rate varies across regions but is between $0.25 and $0.30 per GB-month in most regions; for example, storing 100 GB for a month would cost roughly $25 to $30. The second pricing parameter, provisioned throughput, is a little more novel.

Amazon ElastiCache: It provides Caching-as-a-Service from Amazon Web Services. AWS simplifies setting up, managing, and scaling a distributed in-memory cache environment in the cloud.

Amazon Redshift: It is a fully managed petabyte-scale data warehouse service. Redshift delivers fast query and I/O performance for virtually any size dataset by using columnar storage technology and parallelizing and distributing queries across multiple nodes.

Know more: here we have discussed what Elastic Beanstalk is, what its advantages are, and how it works.


Analytics Limits

Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs. Amazon Kinesis Data Analytics reduces the complexity of building, managing, and integrating streaming applications with other AWS services.

AWS Data Pipeline: AWS Data Pipeline is an “infrastructure-as-a-service” web service that supports automating the movement and transformation of data. Pipelines implement an ETL process that lets you get more value from your data across multiple sources via data extraction and data transformation downstream to Amazon Web Services.

Amazon Elasticsearch Service (Amazon ES): It is a fully managed service that makes it easy for you to deploy, secure, operate, and scale Elasticsearch within the AWS Cloud so you can search, analyze, and visualize your data at scale.


Compute Limits

The Compute category of services covers the key resources that allow you to carry out computation via a series of instructions used by applications and systems. These resources cover a range of different services and features.

Amazon EC2: Amazon charges based on hours of usage, not based on the number of instances you’re running. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction.

Amazon EC2 Container Registry (ECR): It is a secure, fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development to production workflow.

AWS Lambda: Lambda fills the primary role of compute service on AWS. It also integrates with many other AWS services and, alongside API Gateway, DynamoDB, and RDS, forms the basis for serverless solutions for those using AWS.

Elastic Load Balancing: It distributes incoming application or network traffic across multiple targets. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. It can automatically scale to the vast majority of workloads.


Storage Limits

For most online applications, we need some sort of file storage or database storage. AWS provides S3 and EBS for storage.

AWS S3: It is secure, durable, and scalable object storage, mostly used as file storage. S3 has been widely used for hosting web content with support for high bandwidth and demand. Scripts can also be stored in S3, making it possible to host static websites that use JavaScript.

Amazon EFS: Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files.

Amazon CloudFront: It is a global Content Delivery Network (CDN) service that securely delivers a website’s dynamic, static, and streaming content by making it available from a worldwide network of edge locations. Amazon CloudFront supports all kinds of files that can be served over HTTP.

Amazon Elastic Block Storage: It is durable block-level storage for use with EC2 instances in the AWS cloud. EBS volumes are used by mounting them onto an EC2 instance, just as you would a physical hard drive on-premises, and then formatting the EBS volume with the desired file system.

Some of the most common limits are by time, such as hourly or by the minute, or by requests, which are the requests you send to the service.

Hourly usage in the AWS Free Tier

Some services, like Amazon EC2, Amazon RDS, and Elastic Load Balancing, charge for usage on an hourly basis. The AWS Free Tier for these services provides you with a monthly allotment of hours for the first 12 months.

Some examples:

  • The AWS Free Tier for Amazon EC2 provides you with 750 hours of Linux usage (any combination of t1.micro and t2.micro instances), plus 750 hours of Windows usage (any combination of t1.micro and t2.micro instances).
  • How you divide this allotment is up to you. For example, you can run 750 hours of a Linux t2.micro or t1.micro instance and 750 hours of a Windows t2.micro or t1.micro instance each month for the first 12 months (see the worked example below).

In Regions where t2.micro isn’t available, the t3.micro equivalent is supported under AWS Free Tier.
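As a quick sanity check of the 750-hour allotment, a single micro instance left running continuously for a full 31-day month uses 744 hours, which stays within the Free Tier. A minimal worked example in Python:

```python
# Hours used by one instance running continuously for a 31-day month.
hours_per_day = 24
days_in_month = 31
hours_used = hours_per_day * days_in_month  # 744

free_tier_hours = 750
print(f"Used {hours_used} of {free_tier_hours} free hours:",
      "within limit" if hours_used <= free_tier_hours else "over limit")
```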


Amazon Machine Images

When you start an Amazon EC2 instance, you must select an Amazon Machine Image (AMI) that’s eligible for the AWS Free Tier. Some AMIs aren’t eligible for the AWS Free Tier. Third-party applications or services from AWS Marketplace aren’t eligible for the AWS Free Tier. AMIs that are eligible for the AWS Free Tier are marked in the Amazon EC2 Launch Wizard as Free tier eligible.

Track AWS Free Tier Usage

You can track your AWS Free Tier usage to help you stay under the AWS Free Tier limits. AWS automatically provides alerts through AWS Budgets to notify you by email once you exceed 85 percent of your AWS Free Tier limit for each service. You can also view the top AWS Free Tier services by usage in the table on the Billing and Cost Management dashboard to see which five services you have used the most and how much you have used them. A sketch of creating a custom budget alert of your own follows.
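The 85-percent alert described above is created automatically, but you can also define budgets of your own. Here is a minimal boto3 sketch, with an assumed account ID, budget amount, and email address, of a monthly cost budget that emails you at 85 percent of actual spend:

```python
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",  # assumed AWS account ID
    Budget={
        "BudgetName": "monthly-cost-alert",  # assumed budget name
        "BudgetLimit": {"Amount": "10", "Unit": "USD"},  # assumed amount
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Email when actual spend crosses 85% of the budgeted amount.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 85.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}  # assumed email
            ],
        }
    ],
)
```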