Tuesday, 16 April 2024

The goal of this lab is to illustrate how to create and share an NFS file system using AWS EFS service.

 

Architecture Diagram

lab-018-arch-01

Overview

Create two EC2 instances in different AZs (you can use public subnets). Create an NFS file system using EFS and mount it from one of the EC2 instances. Create some files for testing purposes. Then mount and access the file system from the other EC2 instance. Note that both instances should be able to access the file system simultaneously.

Step 1 - VPC Setup

Make sure your VPC is configured to enable mounting using DNS names. You can do that from your VPC's Actions menu by selecting Edit DNS Resolution and Edit DNS Hostnames. Make sure both are enabled.
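If you prefer the CLI, the same two attributes can be set with `aws ec2 modify-vpc-attribute`. A sketch, assuming your VPC's ID is `vpc-0123456789abcdef0` (replace it with your own):

```shell
# Enable DNS resolution and DNS hostnames for the VPC
# (vpc-0123456789abcdef0 is a placeholder ID)
aws ec2 modify-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --enable-dns-support "{\"Value\":true}"

aws ec2 modify-vpc-attribute \
  --vpc-id vpc-0123456789abcdef0 \
  --enable-dns-hostnames "{\"Value\":true}"
```

Note that the two attributes must be set in separate calls; `modify-vpc-attribute` accepts only one attribute per invocation.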

Step 2 - Create Security Group

Create a security group named nfs-access to allow access to the NFS file system to members of the security group.

First Create the security group. lab-018-scrn-01

Then edit its inbound rule to allow access from members of the group. lab-018-scrn-02
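The two console steps above can also be done from the CLI. NFS uses TCP port 2049, and the inbound rule references the group itself as the source. A sketch, with a placeholder VPC ID:

```shell
# Create the nfs-access security group (vpc-0123456789abcdef0 is a placeholder)
SG_ID=$(aws ec2 create-security-group \
  --group-name nfs-access \
  --description "Allow NFS access between group members" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Allow inbound NFS (TCP 2049) from members of the same security group
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 2049 \
  --source-group "$SG_ID"
```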

Step 3 - Launch EC2 Instances

Launch two EC2 instances, each in its own AZ. Use the user-data.sh script to install Amazon's EFS utilities package. Make sure your instances are members of the nfs-access security group and also allow access via ssh.
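The contents of user-data.sh are not reproduced here; at a minimum it needs to install the amazon-efs-utils package, which provides the `efs` mount type used in Step 5. A minimal sketch for an Amazon Linux AMI:

```shell
#!/bin/bash
# Install Amazon's EFS mount helper (provides "mount -t efs")
yum install -y amazon-efs-utils
```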

Step 4 - Create an NFS File System

Go to Storage - EFS and click Create file system.

lab-018-scrn-03

Copy the EFS file system ID.

lab-018-scrn-04

Step 5 - Mount the NFS File System

Access one of the EC2 instances using ssh. Create a folder to serve as the mount point (let's say data). Then, using your EFS file system ID (mine was fs-9ee0b987), issue the command:

sudo mount -t efs fs-9ee0b987:/ data

You should now be able to access the file system. Create a few test files.
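To confirm the simultaneous access mentioned in the overview, you can repeat the mount on the second instance and check that the files created on the first one are visible. A sketch, reusing the file system ID from the mount command above:

```shell
# On the first instance: create test files on the shared file system
sudo touch data/shared1.txt data/shared2.txt

# On the second instance (in the other AZ): mount the same file system
mkdir data
sudo mount -t efs fs-9ee0b987:/ data

# The files created on the first instance should be listed here
ls data
```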

This lab's goal is similar to lab-016, but because we want to move a volume from one AZ to another, we first have to create a snapshot of the volume and then create a new volume from the snapshot.

 

Architecture Diagram

lab-017-arch-01

Overview

Create two EC2 instances in different AZs. Create an EBS volume and attach it to one of them. Create some files in the volume and then terminate the EC2 instance that had the attached volume. Then follow the steps below.

Step 1 - Create a Snapshot of an EBS Volume

Go to Elastic Block Store - Snapshots.

lab-017-scrn-01 lab-017-scrn-02

Verify that your snapshot was successfully created under Snapshots. You might want to copy the Snapshot ID to use it in the next step. Once your snapshot is created, you can delete the EBS volume.

lab-017-scrn-03
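The snapshot can also be created from the CLI. A sketch, assuming a placeholder volume ID (use the ID of the volume you attached in the overview):

```shell
# Create a snapshot of the volume (vol-0123456789abcdef0 is a placeholder)
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "lab-017 snapshot"

# Wait until the snapshot completes before deleting the source volume
aws ec2 wait snapshot-completed \
  --filters Name=volume-id,Values=vol-0123456789abcdef0
```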

Step 2 - Create an EBS Volume from a Snapshot

Create a new volume from the snapshot that you created in the previous step. Make sure your volume is created in the other AZ.

lab-017-scrn-04
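The equivalent CLI call takes the snapshot ID and the target AZ directly. A sketch with placeholder values (substitute your own snapshot ID and the AZ of your second instance):

```shell
# Create a new volume from the snapshot in the other AZ
# (snapshot ID and AZ below are placeholders)
aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1b
```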

Step 3 - Attach EBS Volume

Attach the EBS volume created from the snapshot to the EC2 instance running on the other AZ.
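From the CLI, attaching the volume looks like the following sketch (all IDs are placeholders; the device name is a suggestion to the instance):

```shell
# Attach the restored volume to the instance in the other AZ
aws ec2 attach-volume \
  --volume-id vol-0fedcba9876543210 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf
```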


The goal of this lab is to illustrate how to move an EBS volume from one EC2 instance to another.

 

Architecture Diagram

lab-016-arch-01

Overview

Step 1 - Create an EBS Volume

During the Add Storage step of Launch Instance, add a new EBS volume.

lab-016-scrn-01 lab-016-scrn-02

Step 2 - Mount EBS Volume

Connect to the EC2 instance via ssh and run lsblk to learn your EBS device's name. For example, the output I got from lsblk was:

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdb    202:16   0   8G  0 disk

The device /dev/xvda is the boot volume, mounted at the root level. The device /dev/xvdb must be the EBS volume that we created. Run the following commands to format and mount the disk:

sudo mkfs.ext4 /dev/xvdb
mkdir data
sudo mount /dev/xvdb data

Then use the following commands to create some files in your new disk:

sudo touch data/test1.txt
sudo touch data/test2.txt
sudo touch data/test3.txt

Step 3 - Terminate the EC2 Instance

This step illustrates the fact that when you terminate an EC2 instance, non-boot volumes are not deleted by default. Go to Elastic Block Store - Volumes and verify that your EBS volume is still available even after the EC2 instance is terminated.

lab-016-scrn-03

Step 4 - Attach EBS Volume

Launch a new EC2 instance but this time do not create a new EBS volume. Instead, after your instance is launched, attach the EBS volume created earlier to the EC2 instance.

lab-016-scrn-04 lab-016-scrn-05
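After attaching, you can verify on the new instance that the data survived. The device name may differ from the original (e.g. it may now appear as xvdf); check with lsblk first. Crucially, do not run mkfs again, since that would erase the files:

```shell
# Find the device name of the newly attached volume (may be xvdf, not xvdb)
lsblk

# Mount WITHOUT formatting -- the volume already has a file system and data
mkdir data
sudo mount /dev/xvdf data

# test1.txt, test2.txt, and test3.txt from Step 2 should still be there
ls data
```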

This lab is similar to lab-013. However, we will be creating a resource-based policy to only allow access to the bucket if requests are sent from a specific IP address.

 

Overview

Repeat steps 1 and 2 of lab-013, changing the name of the bucket to lab015-bucket.

Step 1 - Create a Resource-based Policy

Because the policy we want to create is based on the IP address of the computer we will use to access the S3 bucket, use whatismyip.com to determine your computer's current public IP.

An easy way to create a policy is to use the AWS Policy Generator tool. Use this policy as an example. Don't forget to replace YOUR IP with your client computer's IP address.

Step 2 - Apply Policy to Bucket

lab-015-scrn-01
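The screenshot above shows the policy being pasted into the console's bucket policy editor; the same can be done from the CLI. A sketch of one common way to express an IP restriction (denying all access from any other address), assuming the bucket is lab015-bucket and using the documentation placeholder IP 203.0.113.10 (replace it with your own):

```shell
# Write the IP-restricted policy to a local file
# (203.0.113.10 is a placeholder -- use your own public IP)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::lab015-bucket",
        "arn:aws:s3:::lab015-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.10/32" }
      }
    }
  ]
}
EOF

# Apply the policy to the bucket
aws s3api put-bucket-policy --bucket lab015-bucket --policy file://policy.json
```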

Test and Validation

Verify that with the policy in place you still have full access to the s3 bucket. Then change the IP address in the policy (or access the bucket from a computer with a different public IP) and verify that access is denied.

To regain access to the bucket, you need to use the root user's credentials to remove the policy.
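The original command was not preserved in this write-up; removing a bucket policy is done with `aws s3api delete-bucket-policy`, run under the root user's access keys:

```shell
# Must be run with root credentials, since the Deny policy
# locks out every other principal
aws s3api delete-bucket-policy --bucket lab015-bucket
```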

The goal of this lab is to illustrate how to access an S3 bucket from an EC2 instance in a private subnet using an S3 gateway endpoint.

 

Architecture Diagram

lab-014-arch-01

Overview

The architecture diagram illustrates the steps we will take to demonstrate that the EC2 instance in the private subnet can still access an S3 bucket even though the instance has no access to the internet.

  1. From the client, connect to a bastion host configured in the public subnet (see lab-002).
  2. From the bastion host, connect to the EC2 instance in the private subnet.
  3. From the EC2 instance, use the S3 gateway endpoint to...
  4. ...access the S3 bucket.
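Once everything is in place, the last two steps can be exercised from the private instance with a couple of CLI calls. A sketch, assuming the instance has credentials with S3 permissions (e.g. via an instance role):

```shell
# Run from the EC2 instance in the private subnet. These requests reach S3
# through the gateway endpoint even though the instance has no internet access.
aws s3 ls s3://lab014-bucket
aws s3 cp /etc/hostname s3://lab014-bucket/test.txt
```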

Step 1 - Initial Setup

Follow the steps described in lab-002, making sure you change all references to lab-002 with lab-014.

Step 2 - Create an S3 Bucket

Name your S3 bucket lab014-bucket. Refer to lab-013 if not sure how to do this step.

Step 3 - Create an S3 Gateway Endpoint

Go to VPC - Endpoints and click on Create Endpoint.

lab-014-scrn-01 lab-014-scrn-02 lab-014-scrn-03 lab-014-scrn-04
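The console flow shown in the screenshots can also be done from the CLI. A gateway endpoint is associated with route tables rather than subnets; a sketch with placeholder IDs and region (adjust the region in the service name to your own):

```shell
# Create an S3 gateway endpoint and associate it with the private
# subnet's route table (VPC ID, region, and route table ID are placeholders)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```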

The goal of this lab is to illustrate how to create an S3 bucket and access it from a client computer via AWS CLI (command-line interface).

 

Architecture Diagram

lab-013-arch-01

Overview

Amazon S3 (Simple Storage Service) is an object-based storage service. An object is a file together with attributes like name, value, access control information, etc. Access to objects in S3 is done through an API. Amazon S3 uses containers called buckets to store objects. Bucket names are globally unique.

Step 1 - Create an S3 Bucket

After naming your bucket you can accept all of the default options.

lab-013-scrn-01 lab-013-scrn-02

Step 2 - Download and Install AWS CLI

Available here.

Step 3 - Create and Download an Access Key

Go to IAM - Users and then select your user. Then go to Security credentials and create an access key. Download the key you created (a csv file) and use it in the next step.

Step 4 - Configure AWS CLI

Open a terminal and configure your access key and default region on AWS CLI using:

aws configure

When prompted, enter your access key ID, secret access key, and default region (short) name.
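Once configured, a quick smoke test confirms the CLI can reach your bucket. A sketch, assuming the bucket created in Step 1 was named lab013-bucket (substitute your own bucket name):

```shell
# List your buckets to confirm credentials work
aws s3 ls

# Upload a file to the bucket and list its contents
echo "hello" > hello.txt
aws s3 cp hello.txt s3://lab013-bucket/hello.txt
aws s3 ls s3://lab013-bucket
```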