Tuesday, 3 January 2023

AWS Storage

 Cloud Storage Overview

Cloud storage is a cloud computing model that stores data on the Internet through a cloud computing provider that manages and operates data storage as a service. It is delivered on demand with just-in-time capacity and costs, eliminating the need to buy and manage your own data storage infrastructure.


Also Read: Our Blog post on AWS Secrets Manager

There Are 3 Types of Cloud Storage

1. Object Storage – Applications developed in the cloud often take advantage of object storage's vast scalability and metadata characteristics. Object storage solutions like Amazon Simple Storage Service (Amazon S3) and Amazon Glacier are ideal for building modern applications from scratch that need scale and flexibility, and can also be used to import existing data stores for analytics, backup, or archive.

2. File Storage – Many applications need to access shared files and require a file system. This type of storage is often supported with a Network Attached Storage (NAS) server. File storage solutions like Amazon Elastic File System (Amazon EFS) are ideal for use cases like large content repositories, development environments, media stores, or user home directories.

3. Block Storage – Other enterprise applications like databases or ERP systems often require dedicated, low-latency storage for each host. This is analogous to direct-attached storage (DAS) or a storage area network (SAN). Block-based cloud storage solutions like Amazon Elastic Block Store (Amazon EBS) and EC2 Instance Storage fill this need by exposing raw volumes to individual hosts.

Storage Offered By Amazon Web Services (AWS)


Check out: AWS Free Tier Account Services

1.  Simple Storage Service (Amazon S3)

Amazon S3, the oldest and most widely supported storage platform on AWS, uses an object storage model built to store and retrieve any amount of data. Data can be accessed from everywhere: websites, mobile apps, corporate applications, and IoT sensors or devices can all dump data onto S3.

Usage

S3 is widely used for hosting web content that demands high bandwidth and availability. Scripts can also be stored in S3, making it possible to host static websites that use JavaScript. It supports migrating data to Amazon Glacier for cold storage through lifecycle management rules on the data stored in S3.
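
Such a lifecycle rule can be expressed as a plain configuration document. The sketch below (bucket prefix and day counts are illustrative placeholders, not from the post) builds the structure that boto3's `put_bucket_lifecycle_configuration` expects:

```python
import json

# Hypothetical rule: move objects under "logs/" to Glacier after 90 days,
# then expire them after 365 days. Prefix and day counts are placeholders.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# This dict has the shape accepted by boto3's
# s3_client.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...).
print(json.dumps(lifecycle_configuration, indent=2))
```

You would attach this configuration to a bucket once, and S3 then applies the transition to every matching object automatically.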

Features

Amazon S3 runs on the world's largest global cloud infrastructure and was built from the ground up to deliver a customer promise of 99.999999999% durability. Data is automatically distributed across a minimum of three physical facilities that are geographically separated within an AWS Region, and S3 can also replicate data to other AWS Regions when cross-region replication is enabled.

Security

S3 supports several forms of encryption, including server-side encryption and client-side encryption. Data in S3 can only be accessed by other users or AWS accounts when the admin has granted access through an access policy. With support for Multi-Factor Authentication (MFA), another layer of security can be added for object operations. S3 supports multiple security standards and compliance certifications.

2. Amazon Glacier

Amazon Glacier is a secure, durable, and extremely low-cost storage service for data archiving and long-term storage. Glacier lets you run powerful analytics directly on archived data. Glacier also integrates with other AWS services such as S3 and CloudFront to move data in and out seamlessly for better and more effective results.

Usage

Amazon Glacier stores data in the form of archives. An archive can consist of a single file or several files combined into a single archive, and archives are organized into vaults. Glacier also supports querying archived data, so you can retrieve just the subset of data you need from within an archive.

Features

Since AWS Glacier is an archiving service, durability is of utmost priority. Glacier is designed to provide an average annual durability of 99.999999999% for archives. Data is automatically distributed across a minimum of three physical facilities that are geographically separated within an AWS Region.

Security

Initially, Glacier data can only be accessed by the account owner/admin; however, access can be granted to other people by defining access rules in the AWS Identity and Access Management (IAM) service. Glacier uses server-side encryption to encrypt all data. Lockable policies can be defined to lock vaults for long-term records retention.
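
A lockable vault policy is itself an IAM-style JSON document. The sketch below (account ID, vault name, and retention period are placeholders) denies archive deletion until an archive is at least a year old, the typical shape of a Vault Lock retention rule:

```python
import json

# Illustrative Glacier Vault Lock policy: deny DeleteArchive on archives
# younger than 365 days. Account ID and vault name are placeholders.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-early-delete",
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/example-vault",
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
            },
        }
    ],
}

policy_json = json.dumps(vault_lock_policy)
```

Once a Vault Lock policy is locked, it can no longer be changed, which is what makes it suitable for regulatory records retention.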


3.  Elastic File System (Amazon EFS)

As the name suggests, EFS delivers a scalable, elastic, highly available, and highly durable network file system as a service. The storage capacity of EFS is elastic, growing and shrinking automatically as required. EFS supports Network File System versions 4 (NFSv4) and 4.1 (NFSv4.1).

Usage

EFS is a network file system that can expand to petabytes with parallel access from EC2 instances. An EFS file system is mounted on Amazon EC2 instances, and multiple EC2 instances can share a single EFS file system, supporting large applications that have grown beyond a single instance. EFS can also be mounted from an on-premises data center connected to an Amazon Virtual Private Cloud (VPC) through the AWS Direct Connect service.
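
Mounting works over standard NFSv4.1 against a per-AZ mount target whose DNS name follows the pattern `<file-system-id>.efs.<region>.amazonaws.com`. The helper below builds the mount command from that pattern; the file system ID, region, and mount point are placeholders:

```python
def efs_mount_command(file_system_id: str, region: str, mount_point: str) -> str:
    """Build the NFSv4.1 mount command for an EFS file system.

    The DNS name pattern is <fs-id>.efs.<region>.amazonaws.com; the mount
    options below are the commonly recommended NFS settings for EFS.
    """
    dns_name = f"{file_system_id}.efs.{region}.amazonaws.com"
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"sudo mount -t nfs4 -o {options} {dns_name}:/ {mount_point}"

# Placeholder IDs; substitute your own file system ID and region.
cmd = efs_mount_command("fs-12345678", "us-east-1", "/mnt/efs")
print(cmd)
```

Running the printed command on each EC2 instance (or on an on-premises host with Direct Connect reachability) attaches the same shared file system.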

Features

EFS stores each file redundantly across multiple Availability Zones within a Region, giving it very high durability, and file systems can be created and managed through API calls.

Security 

There are three main levels of access controls to consider when it comes to the EFS file system.

  1. IAM permissions for API calls
  2. Security groups for EC2 instances and mount targets
  3. Network File System-level users, groups, and permissions.

AWS allows connectivity between EC2 instances and EFS file systems. You can associate one security group with an EC2 instance and another security group with an EFS mount target associated with the file system. These security groups act as firewalls and enforce rules that define the traffic flow between EC2 instances and EFS file systems.
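
The mount-target side of that firewall is typically a single ingress rule allowing NFS traffic (TCP port 2049) only from the instances' security group. The sketch below (group IDs are placeholders) builds the rule in the shape used by boto3's `authorize_security_group_ingress`:

```python
# Illustrative ingress rule for the security group on an EFS mount target:
# allow NFS (TCP 2049) only from the EC2 instances' security group.
# Both group IDs are placeholders.
mount_target_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 2049,   # NFS
    "ToPort": 2049,
    "UserIdGroupPairs": [
        {"GroupId": "sg-0123456789abcdef0", "Description": "EC2 instances"}
    ],
}

# This dict is one entry of the IpPermissions list passed to
# ec2_client.authorize_security_group_ingress(GroupId=..., IpPermissions=[...]).
```

Scoping the source to a security group rather than a CIDR range means new instances gain access simply by joining that group.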

4.  Elastic Block Store (Amazon EBS)

Like EFS, EBS volumes are network-attached storage, but they expose raw block devices rather than a shared file system. Volumes are automatically replicated within their Availability Zone for high availability and durability.

Usage

It is durable block-level storage for use with EC2 instances in the AWS cloud. EBS volumes are used by mounting them onto an EC2 instance, just as you would attach a physical hard drive on premises, and then formatting the volume with the desired file system. EBS allows dynamically increasing capacity and tuning performance, and you can even change the volume type without any downtime or performance impact.

Features

EBS allows saving point-in-time snapshots of volumes to increase the durability of the stored data. Each volume can be configured as General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), or Cold (HDD) as needed. EBS volumes have a very low annual failure rate of about 0.1 to 0.2 percent.
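
Those volume classes map to the API type codes gp2/gp3, io1/io2, st1, and sc1. The sketch below (Availability Zone, size, and tag are placeholders) shows the parameters you would pass as keyword arguments to boto3's `ec2_client.create_volume`:

```python
# Illustrative create-volume parameters; AZ, size, and tags are placeholders.
# VolumeType codes: gp2/gp3 = General Purpose SSD, io1/io2 = Provisioned
# IOPS SSD, st1 = Throughput Optimized HDD, sc1 = Cold HDD.
create_volume_params = {
    "AvailabilityZone": "us-east-1a",
    "Size": 100,            # GiB
    "VolumeType": "gp3",
    "Encrypted": True,      # encrypt data at rest with the default KMS key
    "TagSpecifications": [
        {
            "ResourceType": "volume",
            "Tags": [{"Key": "Name", "Value": "data-vol"}],
        }
    ],
}

# Usage sketch: ec2_client.create_volume(**create_volume_params)
```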

Security

An IAM policy must be defined to allow access to EBS volumes. Coupled with encryption for data at rest and data in motion, this offers a strong defense-in-depth security strategy for your data.


5.  EC2 Instance Storage

EC2 Instance storage provides temporary block-level storage for EC2 instances.

Usage

Instance store volumes are ideal for temporary storage of data that changes frequently, such as buffers, caches, queues, and scratch data. A volume can only be used by a single EC2 instance, meaning volumes can't be detached and attached to a different instance.

Features

Instance storage uses SSDs to deliver high random I/O performance, but it is not intended to be used as durable disk storage. Data durability must be provided through replication or by periodically copying data to durable storage, because data on an instance store volume persists only during the lifetime of the EC2 instance it is attached to.

Security

An IAM policy is required to grant users secure control over operations like launching and terminating EC2 instances. When you stop or terminate an instance, the applications and data on its instance store are erased, making that data inaccessible to any other instance in the future.


6. Amazon FSx

Amazon FSx is a fully managed service for running popular third-party file systems, delivering fast performance with low latency.

It provides two file systems to choose from:

  • Amazon FSx for Windows File Server
  • Amazon FSx for Lustre

Usage

With the use of Amazon FSx, you can utilize the rich feature sets and fast performance of widely-used open source and commercially licensed file systems, while avoiding time-consuming administrative tasks like hardware provisioning, software configuration, patching, and backups. FSx provides cost-efficient capacity with high levels of reliability and integrates with a broad portfolio of AWS services to enable faster innovation.

Features

Amazon FSx provides a wide range of Solid-State Drive (SSD) and Hard Disk Drive (HDD) storage options, enabling you to optimize storage price and performance for your workload requirements. It delivers sustained high read and write speeds and consistently low-latency data access.

Security

It automatically encrypts your data at rest using AWS KMS and in-transit using SMB Kerberos session keys. It is designed to meet the highest security standards and has been assessed to comply with ISO, PCI-DSS, and SOC compliance, and is HIPAA eligible.

Benefits Of AWS Storage

  • No upfront cost: it is a pay-as-you-go model.
  • Worldwide access: you can access all your data from anywhere with just an internet connection.
  • Elastic capacity: storage can be increased or decreased as your data size changes.
  • Low-cost data storage with high durability and high availability.
  • Plenty of choices for backing up or archiving data for disaster recovery.

Over the years, Amazon Web Services (AWS) storage offerings have diversified vastly to cater to varying needs. As data volumes keep growing, data storage technologies continue to transform and evolve day by day.

Monday, 2 January 2023

What is DNS

 

DNS, or the Domain Name System, translates human readable domain names (for example, www.amazon.com) to machine readable IP addresses (for example, 192.0.2.44).

 

DNS Basics

All computers on the Internet, from your smart phone or laptop to the servers that serve content for massive retail websites, find and communicate with one another by using numbers. These numbers are known as IP addresses. When you open a web browser and go to a website, you don't have to remember and enter a long number. Instead, you can enter a domain name like example.com and still end up in the right place.

A DNS service such as Amazon Route 53 is a globally distributed service that translates human readable names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. The Internet’s DNS system works much like a phone book by managing the mapping between names and numbers. DNS servers translate requests for names into IP addresses, controlling which server an end user will reach when they type a domain name into their web browser. These requests are called queries.


Authoritative DNS: An authoritative DNS service provides an update mechanism that developers use to manage their public DNS names. It then answers DNS queries, translating domain names into IP addresses so computers can communicate with each other. Authoritative DNS has the final authority over a domain and is responsible for providing answers to recursive DNS servers with the IP address information. Amazon Route 53 is an authoritative DNS system.

Recursive DNS: Clients typically do not make queries directly to authoritative DNS services. Instead, they generally connect to another type of DNS service known as a resolver, or a recursive DNS service. A recursive DNS service acts like a hotel concierge: while it doesn't own any DNS records, it acts as an intermediary who can get the DNS information on your behalf. If a recursive DNS service has the DNS reference cached, or stored for a period of time, then it answers the DNS query by providing the source or IP information. If not, it passes the query to one or more authoritative DNS servers to find the information.


Please note that Amazon Route 53 is not currently available on the AWS Free Tier.

The following diagram gives an overview of how recursive and authoritative DNS services work together to route an end user to your website or application.

  1. A user opens a web browser, enters www.example.com in the address bar, and presses Enter.
  2. The request for www.example.com is routed to a DNS resolver, which is typically managed by the user's Internet service provider (ISP), such as a cable Internet provider, a DSL broadband provider, or a corporate network.
  3. The DNS resolver for the ISP forwards the request for www.example.com to a DNS root name server.
  4. The DNS resolver for the ISP forwards the request for www.example.com again, this time to one of the TLD name servers for .com domains. The name server for .com domains responds to the request with the names of the four Amazon Route 53 name servers that are associated with the example.com domain.
  5. The DNS resolver for the ISP chooses an Amazon Route 53 name server and forwards the request for www.example.com to that name server.
  6. The Amazon Route 53 name server looks in the example.com hosted zone for the www.example.com record, gets the associated value, such as the IP address for a web server, 192.0.2.44, and returns the IP address to the DNS resolver.
  7. The DNS resolver for the ISP finally has the IP address that the user needs. The resolver returns that value to the web browser. The DNS resolver also caches (stores) the IP address for example.com for an amount of time that you specify so that it can respond more quickly the next time someone browses to example.com. For more information, see time to live (TTL).
  8. The web browser sends a request for www.example.com to the IP address that it got from the DNS resolver. This is where your content is, for example, a web server running on an Amazon EC2 instance or an Amazon S3 bucket that's configured as a website endpoint.
  9. The web server or other resource at 192.0.2.44 returns the web page for www.example.com to the web browser, and the web browser displays the page.
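
The resolution chain above can be sketched as a toy resolver in Python. All server names, the record data, and the cache behavior are illustrative stand-ins for the real protocol, not actual DNS traffic:

```python
# Toy model of the recursive lookup: the resolver walks from the root servers
# to the TLD servers to the authoritative name server, then caches the answer.
ROOT = {".com": "tld-server"}                       # root knows TLD servers
TLD = {"example.com": "route53-ns"}                 # .com knows the domain's NS
AUTHORITATIVE = {"www.example.com": "192.0.2.44"}   # hosted zone records

cache: dict[str, str] = {}

def resolve(name: str) -> str:
    if name in cache:                               # cached answer (within TTL)
        return cache[name]
    tld_server = ROOT["." + name.rsplit(".", 1)[-1]]   # step 3: ask a root server
    ns = TLD[name.split(".", 1)[1]]                 # step 4: ask the .com TLD server
    ip = AUTHORITATIVE[name]                        # steps 5-6: ask the Route 53 NS
    cache[name] = ip                                # step 7: cache for later queries
    return ip

print(resolve("www.example.com"))  # 192.0.2.44
```

A second call to `resolve` returns straight from the cache, which is exactly the TTL-driven shortcut the resolver takes for repeat visitors.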

Cloud Storage

 

What is cloud storage?

Cloud storage is a cloud computing model that enables storing data and files on the internet through a cloud computing provider that you access either through the public internet or a dedicated private network connection. The provider securely stores, manages, and maintains the storage servers, infrastructure, and network to ensure you have access to the data when you need it at virtually unlimited scale, and with elastic capacity. Cloud storage removes the need to buy and manage your own data storage infrastructure, giving you agility, scalability, and durability, with any time, anywhere data access.

Why is cloud storage important?

Cloud storage delivers cost-effective, scalable storage. You no longer need to worry about running out of capacity, maintaining storage area networks (SANs), replacing failed devices, adding infrastructure to scale up with demand, or operating underutilized hardware when demand decreases. Cloud storage is elastic, meaning you scale up and down with demand and pay only for what you use. It is a way for organizations to save data securely online so that it can be accessed anytime from any location by those with permission.

Whether you are a small business or a large enterprise, cloud storage can deliver the agility, cost savings, security, and simplicity to focus on your core business growth. For small businesses, you no longer have to worry about devoting valuable resources to manage storage yourself, and cloud storage gives you the ability to scale as the business grows.

For large enterprises with billions of files and petabytes of data, you can rely on the scalability, durability, and cost savings of cloud storage to create centralized data lakes to make your data accessible to all who need it.

Cost effectiveness

With cloud storage, there is no hardware to purchase, no storage to provision, and no extra capital being used for business spikes. You can add or remove storage capacity on demand, quickly change performance and retention characteristics, and only pay for storage that you actually use. As data becomes infrequently and rarely accessed, you can even automatically move it to lower-cost storage, thus creating even more cost savings. By moving storage workloads from on premises to the cloud, you can reduce total cost of ownership by removing overprovisioning and the cost of maintaining storage infrastructure.

Increased agility

With cloud storage, resources are only a click away. You reduce the time to make those resources available to your organization from weeks to just minutes. This results in a dramatic increase in agility for your organization. Your staff is largely freed from the tasks of procurement, installation, administration, and maintenance. And because cloud storage integrates with a wide range of analytics tools, your staff can now extract more insights from your data to fuel innovation.

Faster deployment

When development teams are ready to begin, infrastructure should never slow them down. Cloud storage services allow IT to quickly deliver the exact amount of storage needed, whenever and wherever it's needed. Your developers can focus on solving complex application problems instead of having to manage storage systems.

Efficient data management

By using cloud storage lifecycle management policies, you can perform powerful information management tasks including automated tiering or locking down data in support of compliance requirements. You can also use cloud storage to create multi-region or global storage for your distributed teams by using tools such as replication. You can organize and manage your data in ways that support specific use cases, create cost efficiencies, enforce security, and meet compliance requirements.

Virtually unlimited scalability

Cloud storage delivers virtually unlimited storage capacity, allowing you to scale up as much and as quickly as you need. This removes the constraints of on-premises storage capacity. You can efficiently scale cloud storage up and down as required for analytics, data lakes, backups, or cloud native applications. Users can access storage from anywhere, at any time, without worrying about complex storage allocation processes, or waiting for new hardware.

Business continuity

Cloud storage providers store your data in highly secure data centers, protecting your data and ensuring business continuity. Cloud storage services are designed to handle concurrent device failure by quickly detecting and repairing any lost redundancy. You can further protect your data by using versioning and replication tools to more easily recover from both unintended user actions or application failures.

With cloud storage services, you can:

Cost-effectively protect data in the cloud without sacrificing performance.
Scale up your backup resources in minutes as data requirements change.
Protect backups with a data center and network architecture built for security-sensitive organizations.

How does cloud storage work?

Cloud storage is delivered by a cloud services provider that owns and operates data storage capacity by maintaining large datacenters in multiple locations around the world. Cloud storage providers manage capacity, security, and durability to make data accessible to your applications over the internet in a pay-as-you-go model. Typically, you connect to the storage cloud either through the internet or through a dedicated private connection, using a web portal, website, or a mobile app. When customers purchase cloud storage from a service provider, they turn over most aspects of the data storage to the vendor, including capacity, security, data availability, storage servers and computing resources, and network data delivery. Your applications access cloud storage through traditional storage protocols or directly using an application programming interface (API). The cloud storage provider might also offer services designed to help collect, manage, secure, and analyze data at a massive scale.

What are the types of cloud storage?

There are three main cloud storage types: object storage, file storage, and block storage. Each offers its own advantages and has its own use cases.

Object storage

Organizations have to store a massive and growing amount of unstructured data, such as photos, videos, machine learning (ML), sensor data, audio files, and other types of web content, and finding scalable, efficient, and affordable ways to store them can be a challenge. Object storage is a data storage architecture for large stores of unstructured data. Objects store data in the format it arrives in and makes it possible to customize metadata in ways that make the data easier to access and analyze. Instead of being organized in files or folder hierarchies, objects are kept in secure buckets that deliver virtually unlimited scalability. It is also less costly to store large data volumes.

Applications developed in the cloud often take advantage of the vast scalability and metadata characteristics of object storage. Object storage solutions are ideal for building modern applications from scratch that require scale and flexibility, and can also be used to import existing data stores for analytics, backup, or archive.

File storage

File-based storage, or file storage, is widely used among applications and stores data in a hierarchical folder and file format. This type of storage is often served by a network-attached storage (NAS) server, using common file-level protocols such as Server Message Block (SMB) for Windows instances and Network File System (NFS) for Linux.

Block storage

Enterprise applications like databases or enterprise resource planning (ERP) systems often require dedicated, low-latency storage for each host. This is analogous to direct-attached storage (DAS) or a storage area network (SAN). In this case, you can use a cloud storage service that stores data in the form of blocks. Each block has its own unique identifier for quick storage and retrieval.
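
The block model itself is simple enough to sketch: a flat array of fixed-size blocks, each addressed by a unique identifier, with no file hierarchy or per-object metadata. The block size and capacity below are illustrative:

```python
BLOCK_SIZE = 4096  # bytes; a common block size for block devices

class BlockDevice:
    """Toy block device: a flat array of fixed-size blocks addressed by ID."""

    def __init__(self, num_blocks: int):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]

    def write_block(self, block_id: int, data: bytes) -> None:
        # Pad or truncate to exactly one block, as a real device enforces.
        self.blocks[block_id] = data[:BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")

    def read_block(self, block_id: int) -> bytes:
        return self.blocks[block_id]

dev = BlockDevice(num_blocks=8)
dev.write_block(3, b"hello")
assert dev.read_block(3).rstrip(b"\x00") == b"hello"
```

Because each block is addressed directly, a database can rewrite a single 4 KiB page in place instead of rewriting a whole object, which is why block storage suits transactional workloads.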

What cloud storage requirements should you consider?

Ensuring your company’s critical data is safe, secure, and available when needed is essential. There are several fundamental requirements when considering storing data in the cloud.

Durability and availability

Cloud storage simplifies and enhances traditional data center practices around data durability and availability. With cloud storage, data is redundantly stored on multiple devices across one or more data centers.

Security

With cloud storage, you control where your data is stored, who can access it, and what resources your organization is consuming at any given moment. Ideally, all data is encrypted, both at rest and in transit. Permissions and access controls should work just as well in the cloud as they do for on-premises storage.

What are cloud storage use cases?

Cloud storage has several use cases in application management, data management, and business continuity. Let’s consider some examples below.

Analytics and data lakes

Traditional on-premises storage solutions can be inconsistent in their cost, performance, and scalability — especially over time. Analytics demand large-scale, affordable, highly available, and secure storage pools that are commonly referred to as data lakes.

Data lakes built on object storage keep information in its native form and include rich metadata that allows selective extraction and use for analysis. Cloud-based data lakes can sit at the center of multiple kinds of data warehousing and processing, as well as big data and analytical engines, to help you accomplish your next project in less time and with more targeted relevance.

Backup and disaster recovery

Backup and disaster recovery are critical for data protection and accessibility, but keeping up with increasing capacity requirements can be a constant challenge. Cloud storage brings low cost, high durability, and extreme scale to data backup and recovery solutions. Embedded data management policies can automatically migrate data to lower-cost storage based on frequency or timing settings, and archival vaults can be created to help comply with legal or regulatory requirements. These benefits allow for tremendous scale possibilities within industries such as financial services, healthcare and life sciences, and media and entertainment that produce high volumes of unstructured data with long-term retention needs.

Software test and development

Software test and development environments often require separate, independent, and duplicate storage environments to be built out, managed, and decommissioned. In addition to the time required, the up-front capital costs required can be extensive.

Many of the largest and most valuable companies in the world create applications in record time by using the flexibility, performance, and low cost of cloud storage. Even the simplest static websites can be improved at low cost. IT professionals and developers are turning to pay-as-you-go storage options that remove management and scale headaches.

Cloud data migration

The availability, durability, and low cloud storage costs can be very compelling. On the other hand, IT personnel working with storage, backup, networking, security, and compliance administrators might have concerns about the realities of transferring large amounts of data to the cloud. For some, getting data into the cloud can be a challenge. Hybrid, edge, and data movement services meet you where you are in the physical world to help ease your data transfer to the cloud.

Compliance

Storing sensitive data in the cloud can raise concerns about regulation and compliance, especially if this data is currently stored in compliant storage systems. Cloud data compliance controls are designed to ensure that you can deploy and enforce comprehensive compliance controls on your data, helping you satisfy compliance requirements for virtually every regulatory agency around the globe. Often through a shared responsibility model, cloud vendors allow customers to manage risk effectively and efficiently in the IT environment, and provide assurance of effective risk management through compliance with established, widely recognized frameworks and programs.

Cloud-native application storage

Cloud-native applications use technologies like containerization and serverless to meet customer expectations in a fast-paced and flexible manner. These applications are typically made of small, loosely coupled, independent components called microservices that communicate internally by sharing data or state. Cloud storage services provide data management for such applications and provide solutions to ongoing data storage challenges in the cloud environment.

Archive

Enterprises today face significant challenges with exponential data growth. Machine learning (ML) and analytics give data more uses than ever before. Regulatory compliance requires long retention periods. Customers need to replace on-premises tape and disk archive infrastructure with solutions that provide enhanced data durability, immediate retrieval times, better security and compliance, and greater data accessibility for advanced analytics and business intelligence.

Hybrid cloud storage

Many organizations want to take advantage of the benefits of cloud storage, but have applications running on premises that require low-latency access to their data, or need rapid data transfer to the cloud. Hybrid cloud storage architectures connect your on-premises applications and systems to cloud storage to help you reduce costs, minimize management burden, and innovate with your data.

Database storage

Because block storage has high performance and is readily updatable, many organizations use it for transactional databases. With its limited metadata, block storage is able to deliver the ultra-low latency required for high-performance workloads and latency sensitive applications like databases.

Block storage allows developers to set up a robust, scalable, and highly efficient transactional database. As each block is a self-contained unit, the database performs optimally, even when the stored data grows.

ML and IoT

With cloud storage, you can process, store, and analyze data close to your applications and then copy data to the cloud for further analysis. With cloud storage, you can store data efficiently and cost-effectively while supporting ML, artificial intelligence (AI), and advanced analytics to gain insights and innovate for your business.

Is cloud storage secure?

Security is our number one priority at AWS. AWS pioneered cloud computing in 2006, creating cloud infrastructure that allows you to securely build and innovate faster. With AWS, you control where your data is stored, who can access it, and what resources your organization is consuming at any given moment. Fine-grain identity and access controls combined with continual monitoring for near real-time security information ensures that the right resources have the right access, wherever your information is stored. On AWS, you will gain the control and confidence you need to securely run your business with the most flexible and secure cloud computing environment available. As a result, the most highly regulated organizations in the world trust AWS, every day.

Learn more about AWS cloud security.

How can AWS help with your cloud storage needs?

AWS is the most secure, extensive, and reliable cloud platform, offering over 200 fully featured services from data centers globally. Whether you need to deploy your application workloads globally in a single click, or you want to build and deploy specific applications closer to your end users with single-digit millisecond latency, AWS provides you with the cloud infrastructure where and when you need it.

Amazon Simple Storage Service (Amazon S3) redundantly stores objects on multiple devices across a minimum of three Availability Zones (clusters of discrete data centers with redundant power, networking, and connectivity in an AWS Region).
Amazon FSx and Amazon Elastic File System (Amazon EFS) provide shared file access for applications with unstructured data such as video and medical images, web and rich media content, or user directories or large data sets.
Block-based cloud storage solutions like Amazon Elastic Block Store (Amazon EBS) are provisioned with each Amazon Elastic Compute Cloud (Amazon EC2) compute instance and deliver the ultra-low latency required for high-performance workloads.

Resources:

Learn more about data lake storage on AWS
Build scalable, durable, and secure data protection solutions on AWS
Discover how to migrate to the cloud with confidence on AWS
Perform your compliance responsibilities on AWS
Store your cloud-native applications on AWS
Modernize data archiving on AWS
Maximize the potential of the hybrid cloud on AWS
Create IoT solutions on AWS
Innovate faster with machine learning (ML) on AWS

Get started with cloud storage by creating an AWS account today.

AWS Identity And Access Management

 


  • IAM is a preventative security control.
  • It lets you create and manage AWS users and groups and use permissions to allow and deny access to AWS resources.
  • IAM deals with four terms: users, groups, roles, and policies.
  • It provides both centralized and fine-grained API-level control over resources, plus a management console.
  • You can specify permissions to control which operations a user or role can perform on AWS resources.
  • The IAM service provides access to the AWS Management Console, the AWS API, and the AWS Command Line Interface (CLI).

Schema of IAM

Also read: Our post on the AWS Free Tier account overview. Amazon Web Services (AWS) provides new subscribers a 12-month Free Tier account to get hands-on experience with the AWS cloud services.

AWS IAM — Key Features

IAM should be considered the first step toward securing all your AWS services and resources.

1) Authentication: AWS IAM allows you to create and manage identities such as users, groups, and roles, meaning you can issue and enable authentication for resources, people, services, and applications within your AWS account.

2) Authorization: Access management, or authorization, in IAM is made up of two primary components: policies and permissions.

3) Fine-grained permissions: Consider this: you want to give the sales team in your organization access to billing information, while also allowing the engineering team full access to the EC2 service, and the marketing team access to selected S3 buckets. Using IAM, you can configure and tune these permissions according to the needs of your users.

4) Shared access to AWS accounts: Most organizations have more than one AWS account, and at times need to delegate access between them. IAM lets you do this without sharing your credentials, and more recently AWS released Control Tower to further streamline multi-account setups.

5) AWS Organizations: For fine-grained control over multiple AWS accounts, you can use AWS Organizations to segment accounts into groups and assign permission boundaries.

6) Identity federation: Often, your organization will need to federate access from other identity providers such as Okta, G Suite, or Active Directory. IAM enables you to do this with a feature called identity federation.

Also read: Our Blog Post on AWS SNS

IAM Components

IAM Users:

  • IAM users can be an individual, system, or application requiring access to AWS services
  • A user account consists of a unique name and security credentials such as a password, access key, and/or multi-factor authentication (MFA)
  • IAM users only need passwords when they access the AWS Management Console

Check Also: Free AWS Training and Certifications

IAM Groups:

  • IAM Groups are a way to assign permissions to logical and functional units of your organization
  • IAM groups help with operational efficiency: permissions can be managed in bulk (scalable) and are easy to change as individuals move between teams (portable)
  • A group can contain many users, and a user can belong to multiple groups.
  • Groups can’t be nested; they can contain only users, not other groups.

IAM Policies:

  • IAM policies are JSON-based statements that define access control and permissions.
  • IAM policies can be “inline” or “managed” and can be attached to a user or a group.
  • Inline policies – policies that you create and manage, and that are embedded directly into a single user, group, or role.
  • Managed policies – standalone policies that you can manage separately from the IAM users, groups, or roles to which they are attached.

Elements of an IAM policy
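To make the policy elements concrete, here is a minimal sketch of a customer managed policy as a JSON document, built in Python. The bucket name is a hypothetical placeholder, not something from AWS or this post; it illustrates the fine-grained permissions described above (read-only access to a single S3 bucket).

```python
import json

# A minimal identity-based IAM policy document. "example-bucket" is a
# hypothetical bucket name used only for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyAccessToOneBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Serialize to the JSON form you would attach to a user, group, or role.
print(json.dumps(policy, indent=2))
```

Attached to a group, a document like this grants every member of the group the listed actions on the listed resources and nothing more.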

IAM Roles:

  • An IAM role is like a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS.
  • You can authorize roles to be assumed by humans, Amazon EC2 instances, custom code, or other AWS services for specific access to services.
  • Roles do not have standard long-term credentials such as a password or access keys associated with them; instead, when you assume a role, it provides you with temporary security credentials for your role session.
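A role is defined by two kinds of policy: a trust policy (who may assume the role) and permission policies (what the role may do). As a sketch, the trust policy below allows Amazon EC2 instances to assume a role, which is how an instance obtains temporary credentials instead of long-term keys; the document itself is plain JSON.

```python
import json

# Trust policy (assume-role policy document) allowing the EC2 service
# principal to assume the role via AWS STS.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```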

Also Check : Our Blog post on AWS Secrets Manager

AWS IAM Access Analyzer 

If you have two or more AWS accounts, do yourself a favor and start using IAM Access Analyzer for your organizational security. Access Analyzer identifies all the AWS resources that are exposed outside of your AWS organization.

  • IAM Access Analyzer continuously monitors resource policies for changes, eliminating the need to rely on intermittent manual checks in order to catch issues as policies are added or updated.
  • It helps you create a comprehensive report of all your AWS resources that could be accessed publicly.
  • Access Analyzer is part of Amazon’s Provable Security effort to achieve the highest levels of security using automated reasoning technology and mathematical logic.
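The kind of finding Access Analyzer produces can be approximated with a toy check: an "Allow" statement in a resource policy whose principal is "*" and which carries no restricting condition is reachable from outside the account. This simplified checker only illustrates the idea; the real service uses automated reasoning over the full policy language, not a string comparison.

```python
def is_publicly_accessible(policy: dict) -> bool:
    """Toy approximation of an Access Analyzer finding: flag a resource
    policy that allows access to any principal ("*") with no condition."""
    for stmt in policy.get("Statement", []):
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") in ("*", {"AWS": "*"})
                and not stmt.get("Condition")):
            return True
    return False

# A hypothetical S3 bucket policy that grants read access to everyone.
public_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"}
    ],
}

print(is_publicly_accessible(public_policy))  # prints True
```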

How it works

AWS IAM Access Analyzer

IAM Access Analyzer follows the general ethos of the AWS IAM service: it adds no additional cost and is included as part of the IAM console.