Tuesday, 5 July 2022

Amazon Aurora : Theory

Amazon Aurora

 

  • A fully managed relational database engine that’s compatible with MySQL and PostgreSQL.
  • For some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL.
  • Aurora includes a high-performance storage subsystem. The underlying storage grows automatically as needed, up to 128 TiB; the minimum allocation is 10 GB.
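
As a minimal illustration (a sketch, not a production setup), an Aurora MySQL cluster can be created with boto3. Note that no storage size is specified anywhere, since Aurora allocates and grows storage automatically. All identifiers and credentials below are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create the cluster; Aurora manages storage automatically,
    # so no allocated-storage parameter is required.
    rds.create_db_cluster(
        DBClusterIdentifier="demo-aurora-cluster",   # hypothetical name
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",             # placeholder credential
    )

    # A cluster has no compute until you add an instance; the first
    # instance added becomes the writer (primary).
    rds.create_db_instance(
        DBInstanceIdentifier="demo-aurora-writer",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        DBClusterIdentifier="demo-aurora-cluster",
    )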

Amazon RDS, released in 2009, offered great promise for developers using MySQL. For teams running and managing database instances in the AWS cloud, its availability and consistency features have been highly beneficial. Today RDS also supports Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB. And then came Aurora. Aurora, a proprietary database service created by AWS that provides higher levels of performance and scalability, joined the relational database portfolio at AWS re:Invent 2014. According to AWS SVP Andy Jassy, Aurora is as capable as “…proprietary database engines at one tenth of the cost. Compatible with MySQL, Aurora aims to be an enterprise-class database solution.”

Aurora Features
According to AWS, Aurora is not only cheaper to run than other large-scale commercial databases, but also much faster than the popular open-source MySQL. The service increases the scalability of the popular open-source database, enabling storage to be provisioned automatically as you go, which is a major advantage in a world where databases are still a main cause of performance bottlenecks.
Scalability: Go Big Anytime
According to Amazon, Aurora is up to five times faster than a native MySQL deployment, making it ideal for large data sets and environments with high performance requirements. You start with 10 GB of provisioned storage, and as you reach the capacity limit it automatically grows in 10 GB increments, scaling all the way up to a very large database of tens of terabytes. A DB cluster architecture can in principle support an “active/active” configuration, where more than one writer exists; although that allows higher levels of scalability, it also creates coordination and synchronization challenges. The more classic architecture, and the one Aurora uses, is “active/passive”, where only one entity at a time can write to the storage. You can scale out an Aurora DB cluster with as many readers (Aurora Replicas) as required, which scales read performance well. For writes, however, Aurora is limited to a single machine (the primary instance), and in that sense it is similar to RDS: both require provisioning a specific instance for that purpose. You can always scale up the instance size to try to keep up with write demand. A sketch of adding readers follows below.
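
For instance (a hedged sketch reusing the hypothetical cluster name from the earlier example), scaling out reads is simply a matter of adding more instances to the cluster; any instance beyond the primary acts as an Aurora Replica:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Each additional instance in the cluster becomes a reader
    # (Aurora Replica) and shares the cluster's storage volume.
    for i in (1, 2):
        rds.create_db_instance(
            DBInstanceIdentifier=f"demo-aurora-reader-{i}",  # hypothetical
            DBInstanceClass="db.r5.large",
            Engine="aurora-mysql",
            DBClusterIdentifier="demo-aurora-cluster",
        )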
Fault Tolerant: Go Ahead And…Fail!
In terms of architecture, as already mentioned, Aurora uses the classic DB cluster architecture typically found in large, multi-database environments. A key principle is a single central storage volume for the database. Because this storage differs from AWS EBS disks, it can scale dynamically. AWS has developed a special storage backend for Aurora, which is probably built on S3 (although we cannot be entirely sure), providing durability and inter-Availability Zone (AZ) replication. By comparison, in traditional SAN data centers all of the databases are stored on disks, or logical disks, in a large storage array that can be logically connected to different servers. As the AWS documentation puts it: “An Aurora DB cluster is fault tolerant by design. The cluster volume spans multiple Availability Zones in a single region, and each Availability Zone contains a copy of the cluster volume data. This functionality means that your DB cluster can tolerate the failure of an Availability Zone without any loss of data and only a brief interruption of service.” As mentioned, an Aurora cluster has a single writer instance and multiple readers that read from the shared storage. If an error occurs and the writer fails or crashes, an automatic failover process promotes one of the readers to the writer role. Because all instances are attached to the same storage location within the same network, there is no recovery downtime and no period in which data must be copied to another location, making the cluster highly available (a manual-failover sketch follows below). In addition, in read-heavy workloads the ability to run many readers enables higher performance, since queries can run concurrently on different machines.
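
Failover is automatic when the writer fails, but the same promotion can also be triggered manually, for example to test an application's resilience. A minimal sketch with hypothetical identifiers:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Promote a specific reader to writer; omit TargetDBInstanceIdentifier
    # to let RDS pick a replica based on failover priority.
    rds.failover_db_cluster(
        DBClusterIdentifier="demo-aurora-cluster",
        TargetDBInstanceIdentifier="demo-aurora-reader-1",
    )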
Amazon AWS Aurora Vs. RDS
Regular RDS deploys what we call a “DB instance”, a DB server that needs to be provisioned in advance by specifying the instance type and size of storage. Snapshots can be used to migrate to a larger scale, although this process doesn’t support seamless autoscale. You can have a multi AZ deployment, but since RDS needs to perform DB level replication, it is less efficient than the Aurora cluster option. This limitation is one of the key reasons why Aurora is more efficient and scalable than RDS, and therefore makes it a preferable option. Any use case where you have a lot of queries (BI, for example) is a good use case for Amazon Aurora since you have multiple data sources, points, and many queries being performed in parallel. In such cases, you can utilize multiple readers, which eliminates any bottlenecks.
Latest Aurora Updates – New Backtrack & GovCloud
We’ve all been in situations where we wished there was an ‘Undo’ button to fix something we accidentally broke. Amazon Aurora now has one: the Backtrack feature lets you rewind a cluster to a certain point in time without restoring data from a backup. It can be enabled for all newly deployed MySQL-compatible Aurora database clusters and for MySQL-compatible clusters restored from a backup. Amazon also recently announced that customers who use GovCloud to back up sensitive data and meet compliance needs can now launch Aurora instances within the AWS GovCloud (US) region.
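
A hedged sketch of the Backtrack call (cluster name hypothetical; the cluster must have been created with a nonzero backtrack window for the call to succeed):

    from datetime import datetime, timedelta
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Rewind the cluster to five minutes ago. Requires the cluster to have
    # been created with BacktrackWindow > 0 (Aurora MySQL only).
    rds.backtrack_db_cluster(
        DBClusterIdentifier="demo-aurora-cluster",
        BacktrackTo=datetime.utcnow() - timedelta(minutes=5),
    )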
Automation of your Backup and Recovery
In terms of functionality, Aurora is formally part of the AWS relational database service (RDS) family. Aurora supports almost all backup functionality available in RDS, such as point-in-time recovery and automatic backups. It also supports manual snapshots; however, the snapshot mechanism operates slightly differently on Aurora. Instead of behaving like a regular disk snapshot, as in RDS, the snapshot is taken of the backend storage. While not a huge difference, you will notice that a few extra steps are needed to recover a fully operating cluster from a snapshot, so it is recommended to automate your Aurora recovery processes. When an Aurora DB cluster is created from a snapshot, only the backend storage is restored; additional operations are required to recreate the readers and the writer. You therefore have a multi-step process rather than the single-step process possible with RDS. If you carry out this process through the console, or via an automation tool that already provides this functionality, such as Cloud Protection Manager (CPM), then you don’t need to worry about this issue, as recovery is just a click away.
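
The multi-step recovery described above looks roughly like this in boto3 (a sketch; snapshot and cluster names are hypothetical). Restoring the snapshot recreates only the cluster and its storage; instances must then be added back explicitly:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Step 1: restore the backend storage/cluster from the snapshot.
    rds.restore_db_cluster_from_snapshot(
        DBClusterIdentifier="restored-cluster",       # hypothetical
        SnapshotIdentifier="demo-cluster-snapshot",   # hypothetical
        Engine="aurora-mysql",
    )

    # Step 2: the restored cluster has no instances yet; add a writer
    # (and any readers) before applications can connect.
    rds.create_db_instance(
        DBInstanceIdentifier="restored-writer",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        DBClusterIdentifier="restored-cluster",
    )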
One Final Note
When migrating data to the cloud, there is always the vendor lock-in consideration. Even though Aurora claims to be 100% compatible with MySQL, there are no guarantees that it will stay this way forever. Enterprises on Amazon that are looking to move their Oracle, for example, and wish to leverage the benefits of a managed Database-as-a-Service (DBaaS), may find that Aurora is a valuable solution for them. AWS provides a variety of migration tools to help implement the switchover.
Cloud Protection Manager (CPM) now supports Disaster Recovery for Amazon Aurora
The good news is you can start protecting your cloud deployment properly, with full cross-region and cross-account disaster recovery now available for Amazon Aurora clusters. We’re extremely excited about supporting Amazon Aurora because a full backup and recovery that traditionally took about 2 hours can now be done in about 2 minutes. Start your free trial today to implement an automated, robust, scalable, enterprise-class cloud backup and recovery solution.

DB Cluster Configurations

    • Aurora supports two types of instance classes
      • Memory Optimized
      • Burstable Performance
    • Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora (supports both MySQL and PostgreSQL). An Aurora Serverless DB cluster automatically starts up, shuts down, and scales up or down capacity based on your application’s needs.
      • A non-Serverless DB cluster for Aurora is called a provisioned DB cluster.
      • Instead of provisioning and managing database servers, you specify Aurora Capacity Units (ACUs). Each ACU is a combination of processing and memory capacity.
      • You can choose to pause your Aurora Serverless DB cluster after a given amount of time with no activity. The cluster automatically resumes and services connection requests when new requests arrive.
      • Aurora Serverless does not support fast failover, but it supports automatic multi-AZ failover.
      • The cluster volume for an Aurora Serverless cluster is always encrypted. You can choose the encryption key, but not turn off encryption.
      • You can set the following capacity values (a creation sketch appears at the end of this section):
        • Minimum Aurora capacity unit – Aurora Serverless can reduce capacity down to this capacity unit.
        • Maximum Aurora capacity unit – Aurora Serverless can increase capacity up to this capacity unit.
        • Pause after inactivity – the amount of time with no database traffic after which the cluster scales to zero processing capacity.
      • You pay by the second and only when the database is in use. 
      • You can share snapshots of Aurora Serverless DB clusters with other AWS accounts or publicly. You also have the ability to copy Aurora Serverless DB cluster snapshots across AWS regions.
    • Limitations of Aurora Serverless
      • Aurora Serverless supports specific MySQL and PostgreSQL versions only.
      • The port number for connections must be:
        • 3306 for Aurora MySQL
        • 5432 for Aurora PostgreSQL
      • You can’t give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.
      • Each Aurora Serverless DB cluster requires two AWS PrivateLink endpoints. If you reach the limit for PrivateLink endpoints within your VPC, you can’t create any more Aurora Serverless clusters in that VPC.
      • A DB subnet group used by Aurora Serverless can’t have more than one subnet in the same Availability Zone.
      • Changes to a subnet group used by an Aurora Serverless DB cluster are not applied to the cluster.
      • Aurora Serverless doesn’t support the following features:
        • Loading data from an Amazon S3 bucket
        • Saving data to an Amazon S3 bucket
        • Invoking an AWS Lambda function with an Aurora MySQL native function
        • Aurora Replicas
        • Backtrack
        • Multi-master clusters
        • Database cloning
        • IAM database authentication
        • Restoring a snapshot from a MySQL DB instance
        • Amazon RDS Performance Insights
    • When you reboot the primary instance of an Aurora DB cluster, RDS also automatically restarts all of the Aurora Replicas in that DB cluster. No failover occurs when you reboot the primary instance or when you reboot an Aurora Replica.
    • Deletion protection is enabled by default when you create a production DB cluster using the AWS Management Console. However, deletion protection is disabled by default if you create a cluster using the AWS CLI or API.
      • For Aurora MySQL, you can’t delete a DB instance in a DB cluster if both of the following conditions are true:
        • The DB cluster is a Read Replica of another Aurora DB cluster.
        • The DB instance is the only instance in the DB cluster.
  • Aurora Multi Master
    • The feature is available only on the Aurora MySQL 5.6-compatible edition.
    • Allows you to create multiple read-write instances of your Aurora database across multiple Availability Zones, which enables uptime-sensitive applications to achieve continuous write availability through instance failure. 
    • In the event of instance or Availability Zone failures, Aurora Multi-Master enables the Aurora database to maintain read and write availability with zero application downtime. There is no need for database failovers to resume write operations.
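
Returning to Aurora Serverless: the capacity settings listed earlier in this section (minimum and maximum ACUs, and the pause timeout) are supplied when the cluster is created. A minimal boto3 sketch of the Serverless v1 API, with hypothetical identifiers and placeholder credentials:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # EngineMode="serverless" selects Aurora Serverless (v1); capacity is
    # expressed in ACUs rather than an instance class.
    rds.create_db_cluster(
        DBClusterIdentifier="demo-serverless-cluster",  # hypothetical
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",                # placeholder
        ScalingConfiguration={
            "MinCapacity": 1,                # minimum ACUs
            "MaxCapacity": 8,                # maximum ACUs
            "AutoPause": True,               # allow scale-to-zero
            "SecondsUntilAutoPause": 600,    # pause after 10 idle minutes
        },
    )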

Tags

    • You can use Amazon RDS tags to add metadata to your RDS resources.
    • Tags can be used with IAM policies to manage access and to control what actions can be applied to the RDS resources.
    • Tags can be used to track costs by grouping expenses for similarly tagged resources.
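
For example (a sketch; the ARN is hypothetical), tags can be attached to an existing cluster through the RDS API:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Tags attach to the resource's ARN and can then be referenced in
    # IAM policy conditions or cost-allocation reports.
    rds.add_tags_to_resource(
        ResourceName="arn:aws:rds:us-east-1:123456789012:cluster:demo-aurora-cluster",
        Tags=[
            {"Key": "environment", "Value": "production"},
            {"Key": "team", "Value": "data-platform"},
        ],
    )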

Monitoring

    • Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB cluster, DB cluster snapshot, DB parameter group, or DB security group.
    • Database log files
    • RDS Enhanced Monitoring — Look at metrics in real time for the operating system.
    • RDS Performance Insights monitors your Amazon RDS DB instance load so that you can analyze and troubleshoot your database performance.
    • Use CloudWatch Metrics, Alarms and Logs
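
A hedged sketch of an event subscription (the SNS topic ARN is hypothetical) that sends cluster failover and failure events to SNS:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Notifications are delivered through the given SNS topic whenever a
    # matching event occurs on a DB cluster in this account and region.
    rds.create_event_subscription(
        SubscriptionName="demo-cluster-events",   # hypothetical
        SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",
        SourceType="db-cluster",
        EventCategories=["failover", "failure"],
    )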

Security

    • Use IAM to control access.
    • To control which devices and EC2 instances can open connections to the endpoint and port of the DB instance for Aurora DB clusters in a VPC, you use a VPC security group.
    • You can make endpoint and port connections using Transport Layer Security (TLS) / Secure Sockets Layer (SSL). In addition, firewall rules can control whether devices running at your company can open connections to a DB instance.
    • Use RDS encryption to secure your RDS instances and snapshots at rest.
    • You can authenticate to your DB cluster using AWS IAM database authentication, which works with both Aurora MySQL and Aurora PostgreSQL. With this method, you don’t need a password when you connect to a DB cluster; instead, you use an authentication token, a unique string of characters that Amazon Aurora generates on request (see the sketch below).
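
A minimal sketch of token generation (hostname and user are hypothetical; the database user must have been created with IAM authentication enabled):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # The token is a short-lived (15-minute) credential used in place of
    # a password; pass it to your MySQL/PostgreSQL driver over TLS.
    token = rds.generate_db_auth_token(
        DBHostname="demo-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="iam_app_user",   # hypothetical IAM-enabled DB user
    )
    print(token)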
  • Aurora for MySQL
    • Performance Enhancements
      • Push-Button Compute Scaling
      • Storage Auto-Scaling
      • Low-Latency Read Replicas
      • Serverless Configuration
      • Custom Database Endpoints
      • Fast insert accelerates parallel inserts sorted by primary key.
      • Aurora MySQL parallel query is an optimization that parallelizes some of the I/O and computation involved in processing data-intensive queries.
      • You can use the high-performance Advanced Auditing feature in Aurora MySQL to audit database activity. To do so, you enable the collection of audit logs by setting several DB cluster parameters.
    • Scaling
      • Instance scaling – scale your Aurora DB cluster by modifying the DB instance class for each DB instance in the DB cluster.
      • Read scaling – as your read traffic increases, you can create additional Aurora Replicas and connect to them directly to distribute the read load for your DB cluster.

Feature | Amazon Aurora Replicas | MySQL Replicas
Number of replicas | Up to 15 | Up to 5
Replication type | Asynchronous (milliseconds) | Asynchronous (seconds)
Performance impact on primary | Low | High
Act as failover target | Yes (no data loss) | Yes (potentially minutes of data loss)
Automated failover | Yes | No
Support for user-defined replication delay | No | Yes
Support for different data or schema vs. primary | No | Yes

  • Aurora for PostgreSQL
    • Performance Enhancements
      • Push-button Compute Scaling
      • Storage Auto-Scaling
      • Low-Latency Read Replicas
      • Custom Database Endpoints
    • Scaling
      • Instance scaling
      • Read scaling
    • Amazon Aurora PostgreSQL now supports logical replication. With logical replication, you can replicate data changes from your Aurora PostgreSQL database to other databases using native PostgreSQL replication slots, or data replication tools such as the AWS Database Migration Service.
    • Rebooting the primary instance of an Amazon Aurora DB cluster also automatically reboots the Aurora Replicas for that DB cluster, in order to re-establish an entry point that guarantees read/write consistency across the DB cluster.
    • You can import data stored in an Amazon S3 bucket into a PostgreSQL table (the import is based on the PostgreSQL COPY command).

Monday, 4 July 2022

AWS BACKUP : Theory

AWS services using AWS Backup:

Reliable, consistent backup of cloud data is important for maintaining a secure data archive and restoring data in the event of loss. Traditionally, backup administrators used manual backup procedures that were difficult to scale and lowered productivity, with data distributed across multiple cloud resources. Cloud backup, along with the automation of infrastructure provisioning, is prompting organizations to improve their data protection and backup strategy.
Customers are implementing infrastructure as code (IaC) as an essential part of their digital transformation to improve productivity and govern infrastructure operations across multi-account environments. Implementing backup policies as code can help you scale your enterprise data protection strategy, reduce overhead, and easily manage organization-wide policies across your cloud environment. AWS Backup is a cost-effective, fully managed, policy-based service that simplifies data protection at scale. AWS Backup leverages AWS Organizations to centrally automate backup policies and to implement, configure, manage, and govern backup activity across supported AWS resources.
In previous blog posts, our colleague Cher covered how AWS Backup can simplify centralized backup management with Cross-Region copy and secure data recovery with cross-account, cross-Region backup by implementing backup policies using the AWS Backup console. In this blog post, we demonstrate how you can save time using AWS CloudFormation automation to centrally automate and scale the process of implementing AWS Backup policies, backup vaults, and cross-Region, cross-account replication across your multi-account AWS environment. Using this solution, you can easily manage AWS Backup with automation and implement a data protection strategy that mitigates the risk of data loss.

Overview :

The architecture uses AWS Control Tower and consists of four AWS Organizations accounts: a management account, two member accounts, and a centralized backup account that belong to their respective organizational units. AWS Organizations helps to centrally manage, govern, automate, and scale AWS resources across accounts in an organization. The solution uses AWS CloudFormation stacks and StackSets to deploy resources such as backup vaults, vault access policies, AWS Key Management Service (AWS KMS) customer managed keys (CMK), and IAM roles.
AWS Backup policies define how you back up your resources using backup plans. Backup policies are written as plaintext files structured according to JSON syntax. You can attach a backup policy to any element of your organization’s hierarchy, such as an AWS account or organizational unit (OU); the effective backup policy determines the final backup plan settings that apply to an AWS account within the organization. To simplify inheritance of the effective backup policy in our solution, we attach the backup policies to the Root OU using an AWS Lambda function (a sketch of this step follows below). We also leverage tags to add AWS resources in each member account to the backup policies.
A successful deployment of this solution helps you perform automated backups using centralized backup policies across your organization, and you can conduct on-demand restore operations across your member accounts.
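
A hedged sketch of the policy-attachment step that the solution's Lambda function performs (IDs are hypothetical, and the policy content is heavily simplified; see the AWS Backup policy syntax documentation for the full schema):

    import json
    import boto3

    org = boto3.client("organizations")

    # Simplified backup policy body; real policies define regions, rules,
    # and resource selections under the "plans" key.
    policy_content = {
        "plans": {
            "DemoBackupPlan": {
                # ... rules, regions, selections (omitted for brevity) ...
            }
        }
    }

    policy = org.create_policy(
        Name="demo-backup-policy",            # hypothetical
        Description="Org-wide daily backups",
        Type="BACKUP_POLICY",
        Content=json.dumps(policy_content),
    )

    # Attach to the Root OU so every member account inherits it.
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="r-examplerootid",           # hypothetical Root ID
    )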



EFS (Elastic File System) practical:

Launch an EC2 instance:


Install httpd (Apache web server):


Create EFS:
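
The console steps shown here can also be done programmatically. A minimal boto3 sketch (subnet and security group IDs are hypothetical) that creates the file system and a mount target:

    import time
    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    fs = efs.create_file_system(
        CreationToken="demo-efs",            # idempotency token
        PerformanceMode="generalPurpose",
        Encrypted=True,
        Tags=[{"Key": "Name", "Value": "demo-efs"}],
    )
    fs_id = fs["FileSystemId"]

    # Wait until the file system is available (EFS has no built-in waiter).
    while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0][
        "LifeCycleState"
    ] != "available":
        time.sleep(5)

    # One mount target per Availability Zone; instances in that AZ mount
    # the file system through this endpoint.
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId="subnet-0123456789abcdef0",      # hypothetical
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049)
    )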






Attach the EFS file system to the Linux instance:





Create one more instance:


Install httpd (Apache web server) again:



Install the NFS client:





Open each instance’s public IP in your browser:




AWS Elastic File System: Theory

ELASTIC FILE SYSTEM (EFS):

Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget elastic file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The service manages all the file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations.

Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol, so the applications and tools that you use today work seamlessly with Amazon EFS. Multiple compute instances, including Amazon EC2, Amazon ECS, and AWS Lambda, can access an Amazon EFS file system at the same time, providing a common data source for workloads and applications running on more than one compute instance or server. With Amazon EFS, you pay only for the storage used by your file system and there is no minimum fee or setup cost.

Amazon EFS offers a range of storage classes designed for different use cases:

  • Standard storage classes – EFS Standard and EFS Standard–Infrequent Access (Standard–IA), which offer multi-AZ resilience and the highest levels of durability and availability.
  • One Zone storage classes – EFS One Zone and EFS One Zone–Infrequent Access (EFS One Zone–IA), which offer customers the choice of additional savings by choosing to save their data in a single Availability Zone.


Overview:

Amazon EFS provides a simple, serverless, set-and-forget elastic file system. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your virtual private cloud (VPC), through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Amazon Linux 2, Red Hat, Ubuntu, and macOS Big Sur AMIs, in conjunction with the Amazon EFS mount helper (see the Amazon EFS documentation on the amazon-efs-utils tools). For some AMIs, you must install an NFS client to mount your file system on your Amazon EC2 instance.

You can access your Amazon EFS file system concurrently from multiple NFS clients, so applications that scale beyond a single connection can access a file system. Amazon EC2 and other AWS compute instances running in multiple Availability Zones within the same AWS Region can access the file system, so that many users can access and share a common data source. For a list of AWS Regions where you can create an Amazon EFS file system, see the Amazon Web Services General Reference.

To access your Amazon EFS file system in a VPC, you create one or more mount targets in the VPC:

  • For file systems using Standard storage classes, you can create a mount target in each Availability Zone in the AWS Region.
  • For file systems using One Zone storage classes, you create only a single mount target that is in the same Availability Zone as the file system.




Features

  • The service manages all the file storage infrastructure for you, avoiding the complexity of deploying, patching, and maintaining complex file system configurations.
  • EFS supports the Network File System version 4 protocol.
  • You can mount EFS file systems onto EC2 instances running Linux or macOS Big Sur. Windows is not supported.
  • Aside from EC2 instances, you can also mount EFS filesystems on ECS tasks, EKS pods, and Lambda functions.
  • Multiple Amazon EC2 instances can access an EFS file system at the same time, providing a common data source for workloads and applications running on more than one instance or server.
  • EFS file systems store data and metadata across multiple Availability Zones in an AWS Region.
  • EFS file systems can grow to petabyte scale, drive high levels of throughput, and allow massively parallel access from EC2 instances to your data.
  • EFS provides file system access semantics, such as strong data consistency and file locking.
  • EFS enables you to control access to your file systems through Portable Operating System Interface (POSIX) permissions.
  • Moving your EFS file data can be managed simply with AWS DataSync – a managed data transfer service that makes it faster and simpler to move data between on-premises storage and Amazon EFS.
  • You can schedule automatic incremental backups of your EFS file system using the EFS-to-EFS Backup solution.
  • Amazon EFS Infrequent Access (EFS IA) is a storage class that is cost-optimized for files accessed less frequently. Customers can use EFS IA by creating a new file system and enabling Lifecycle Management. With Lifecycle Management enabled, EFS automatically moves files that have not been accessed for 30 days from the Standard storage class to the Infrequent Access storage class. To lower costs further, in exchange for reduced multi-AZ resilience, you can use the EFS One Zone–IA storage class.


Performance Modes

  • General purpose performance mode (default)
    • Ideal for latency-sensitive use cases.
  • Max I/O mode
    • Can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for file operations.

Throughput Modes

  • Bursting Throughput mode (default)
    • Throughput scales as your file system grows.
  • Provisioned Throughput mode
    • You specify the throughput of your file system independent of the amount of data stored.
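
The throughput mode can also be switched after creation; a hedged sketch (the file system ID and throughput value are hypothetical):

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Switch to Provisioned Throughput, decoupling throughput from the
    # amount of data stored.
    efs.update_file_system(
        FileSystemId="fs-0123456789abcdef0",   # hypothetical
        ThroughputMode="provisioned",
        ProvisionedThroughputInMibps=128.0,
    )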

Mount Targets

  • To access your EFS file system in a VPC, you create one or more mount targets in the VPC. A mount target provides an IP address for an NFSv4 endpoint.
  • You can create one mount target in each Availability Zone in a region.
  • You mount your file system using its DNS name, which resolves to the IP address of the EFS mount target. The DNS name has the following format:
    file-system-id.efs.aws-region.amazonaws.com

  • When using Amazon EFS with an on-premises server, your on-premises server must have a Linux-based operating system.

Access Points

  • EFS Access Points simplify how applications are provided access to shared data sets in an EFS file system. 
  • EFS Access Points work together with AWS IAM and enforce an operating system user and group, and a directory for every file system request made through the access point. 
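
A minimal sketch (IDs, UID/GID, and path are hypothetical) of an access point that pins every request to one POSIX identity and directory:

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    efs.create_access_point(
        FileSystemId="fs-0123456789abcdef0",     # hypothetical
        PosixUser={"Uid": 1001, "Gid": 1001},    # enforced for all requests
        RootDirectory={
            "Path": "/app-data",                 # exposed as the root "/"
            "CreationInfo": {                    # created on first use
                "OwnerUid": 1001,
                "OwnerGid": 1001,
                "Permissions": "750",
            },
        },
        Tags=[{"Key": "Name", "Value": "app-data-ap"}],
    )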

Components of a File System

  • ID
  • creation token
  • creation time
  • file system size in bytes
  • number of mount targets created for the file system
  • file system state
  • mount target

Data Consistency in EFS

  • EFS provides the open-after-close consistency semantics that applications expect from NFS.
  • Write operations will be durably stored across Availability Zones.
  • Applications that perform synchronous data access and perform non-appending writes will have read-after-write consistency for data access.

Managing File Systems

  • You can create encrypted file systems. EFS supports encryption in transit and encryption at rest.
  • Managing file system network accessibility refers to managing the mount targets:
    • Creating and deleting mount targets in a VPC
    • Updating the mount target configuration
  • You can create new tags, update values of existing tags, or delete tags associated with a file system.
  • The following list explains the metered data size for different types of file system objects.
    • Regular files – the metered data size of a regular file is the logical size of the file rounded to the next 4-KiB increment, except that it may be less for sparse files.
      • A sparse file is a file to which data is not written to all positions of the file before its logical size is reached. For a sparse file, if the actual storage used is less than the logical size rounded to the next 4-KiB increment, Amazon EFS reports actual storage used as the metered data size.
    • Directories – the metered data size of a directory is the actual storage used for the directory entries and the data structure that holds them, rounded to the next 4 KiB increment. The metered data size doesn’t include the actual storage used by the file data.
    • Symbolic links and special files – the metered data size for these objects is always 4 KiB.
  • File system deletion is a destructive action that you can’t undo. You lose the file system and any data you have in it, and you can’t restore the data. You should always unmount a file system before you delete it.
  • You can use AWS DataSync to automatically, efficiently, and securely copy files between two Amazon EFS resources, including file systems in different AWS Regions and ones owned by different AWS accounts.  Using DataSync to copy data between EFS file systems, you can perform one-time migrations, periodic ingest for distributed workloads, or automate replication for data protection and recovery.
  • File systems created using the Amazon EFS console are automatically backed up daily through AWS Backup with a retention of 35 days. You can also disable automatic backups for your file systems at any time.
  • Amazon CloudWatch metrics can monitor your EFS file system storage usage, including the size in each of the EFS storage classes.

Mounting File Systems

  • To mount your EFS file system on your EC2 instance, use the mount helper in the amazon-efs-utils package.
  • You can mount your EFS file systems on your on-premises data center servers when connected to your Amazon VPC with AWS Direct Connect or VPN.
  • You can use fstab to automatically mount your file system using the mount helper whenever the EC2 instance reboots.

Lifecycle Management

  • You can choose from five EFS Lifecycle Management policies (7, 14, 30, 60, or 90 days) to automatically move files into the EFS Infrequent Access (EFS IA) storage class and save up to 85% in cost.
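
A sketch of enabling one of those policies (the file system ID is hypothetical); the TransitionToIA values map to the 7/14/30/60/90-day options:

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Files untouched for 30 days move to the Infrequent Access class.
    efs.put_lifecycle_configuration(
        FileSystemId="fs-0123456789abcdef0",   # hypothetical
        LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
    )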

Monitoring File Systems

  • Amazon CloudWatch Alarms
  • Amazon CloudWatch Logs
  • Amazon CloudWatch Events
  • AWS CloudTrail Log Monitoring
  • Log files on your file system

Security

  • You must have valid credentials to make EFS API requests, such as create a file system.
  • You must also have permissions to create or access resources.
  • When you first create the file system, there is only one root directory at /. By default, only the root user (UID 0) has read-write-execute permissions.
  • Specify EC2 security groups for your EC2 instances and security groups for the EFS mount targets associated with the file system.
  • You can use AWS IAM to manage Network File System (NFS) access for Amazon EFS. You can use IAM roles to identify NFS clients with cryptographic security and use IAM policies to manage client-specific permissions.

Pricing

  • You pay only for the storage used by your file system.
  • Costs related to Provisioned Throughput are determined by the throughput values you specify.

EFS vs EBS vs S3

  • Performance Comparison (EFS vs. EBS)

Feature | Amazon EFS | Amazon EBS Provisioned IOPS
Per-operation latency | Low, consistent latency | Lowest, consistent latency
Throughput scale | Multiple GBs per second | Single GB per second

  • Performance Comparison (EFS vs. S3)

Feature | Amazon EFS | Amazon S3
Per-operation latency | Low, consistent latency | Low, for mixed request types, and integration with CloudFront
Throughput scale | Multiple GBs per second | Multiple GBs per second

  • Storage Comparison (EFS vs. EBS)

Feature | Amazon EFS | Amazon EBS Provisioned IOPS
Availability and durability | Data is stored redundantly across multiple AZs | Data is stored redundantly in a single AZ
Access | Up to thousands of EC2 instances from multiple AZs can connect concurrently | A single EC2 instance in a single AZ can connect
Use cases | Big data and analytics, media processing workflows, content management, web serving, and home directories | Boot volumes, transactional and NoSQL databases, data warehousing, and ETL


AWS Serverless Application Repository practical

STEP 1: Create an application



STEP 2:


STEP 3:


STEP 4:
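
The steps above walk through the console. For reference, a hedged boto3 sketch of publishing an application to the Serverless Application Repository (all names are hypothetical, and the template is assumed to be a valid SAM template):

    import boto3

    sar = boto3.client("serverlessrepo", region_name="us-east-1")

    with open("template.yaml") as f:          # hypothetical SAM template
        template_body = f.read()

    sar.create_application(
        Name="demo-app",                      # hypothetical
        Author="demo-author",
        Description="Sample application published for demonstration.",
        SemanticVersion="0.0.1",
        TemplateBody=template_body,
    )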



AWS S3 Bucket - Practical

 S3 Configuration

Services -> S3

Create Amazon S3 Bucket (Source Bucket)

Click on Create bucket.

  • Bucket Name: your_source_bucket_name
  • Region: US East (N. Virginia)

Note: S3 bucket names must be globally unique, so create the bucket with a name not currently in use.

Leave other settings as default and click on the Create button.


Once the bucket is created successfully, select your S3 bucket (click on the checkbox).

Click on the Copy Bucket ARN to copy the ARN.

  • arn:aws:s3:::zacks-source-bucket

Save the source bucket ARN in a text file for later use.


Create Amazon S3 Bucket (Destination Bucket)

Click on Create bucket.

  • Bucket Name: your_destination_bucket_name
  • Region: US East (N. Virginia)

Note: S3 bucket names must be globally unique, so create the bucket with a name not currently in use.

Leave other settings as default and click on the Create button.


Once the bucket is created successfully, select your S3 bucket (click on the checkbox).

Click on the Copy Bucket ARN to copy the ARN.

  • arn:aws:s3:::zacks-destination-bucket

Save the destination bucket ARN in a text file for later use.
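
The same two buckets can also be created programmatically; a minimal sketch (bucket names must be globally unique, so these are placeholders). In us-east-1, no LocationConstraint is needed:

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    for name in ("zacks-source-bucket", "zacks-destination-bucket"):
        s3.create_bucket(Bucket=name)   # us-east-1 needs no LocationConstraint
        # S3 ARNs follow a fixed format, so they can be derived directly.
        print(f"arn:aws:s3:::{name}")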