Tuesday, 6 February 2018

How to mount S3 bucket in Linux server



Learn how to mount an S3 bucket on RHEL, Ubuntu, and CentOS Linux servers, and understand how to debug issues while mounting an S3 bucket.
Mount S3 bucket in Linux

In this article we will walk through the process by which you can mount an S3 bucket on a Linux server. An S3 bucket is the storage container of S3 (Simple Storage Service), an AWS service. As traditional datacenters move to cloud computing, it is necessary to know how to interconnect cloud and traditional services. Let's dive into the process to mount an S3 bucket on RHEL, Ubuntu, or CentOS Linux. The complete process can be done in the below 3 steps:
  1. Install fuse and s3fs packages
  2. Configure access keys of your AWS account
  3. Mount S3 bucket
For this tutorial, we are assuming you have an S3 bucket ready in your AWS account with proper permissions set up. If not, follow this tutorial to create an S3 bucket in AWS.

Install fuse and s3fs packages

These packages can be found here: fuse, s3fs. You have to download them on the Linux server with tools like wget and compile them. Make sure you have their dependencies installed before you try to compile them.
Dependencies are:
For RedHat based: automake gcc gcc-c++ git libcurl-devel libxml2-devel make openssl-devel mailcap curl-devel libstdc++-devel
For Debian based: automake autotools-dev g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config
Install all these packages and follow the below steps to configure fuse and s3fs. Make sure no package named fuse or s3fs already exists on the server before you proceed. This is to avoid conflicts during installation.
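For reference, a sketch of installing these dependencies with the distribution package manager (package names are taken from the lists above; exact names may vary slightly between releases):

    # RedHat / CentOS based
    yum install automake gcc gcc-c++ git libcurl-devel libxml2-devel make openssl-devel mailcap curl-devel libstdc++-devel

    # Debian / Ubuntu based
    apt-get install automake autotools-dev g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config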
Fuse installation –
Use the below commands. Use the latest Fuse download link in the wget command; you can obtain it from GitHub here.
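A sketch of the typical download-and-compile steps is below. The version number and URL are only examples; substitute the latest release link from GitHub as mentioned above.

    # download and extract the fuse source (example version, replace with the latest release)
    wget https://github.com/libfuse/libfuse/releases/download/fuse-2.9.7/fuse-2.9.7.tar.gz
    tar -xzf fuse-2.9.7.tar.gz
    cd fuse-2.9.7

    # compile and install
    ./configure --prefix=/usr/local
    make
    make install

    # make the freshly installed library visible to the linker and pkg-config
    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
    ldconfig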
s3fs installation –
Use the below commands. We are cloning the git repository here, so there is no need to check for the latest release on the GitHub website.
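A sketch of the usual s3fs-fuse build from source, assuming the dependencies listed earlier are installed:

    # clone the s3fs-fuse repository and build it
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git
    cd s3fs-fuse
    ./autogen.sh
    ./configure
    make
    make install

    # confirm the binary is available
    s3fs --version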

Configure access keys of your AWS account

Now you need to configure your AWS account security keys in the s3fs utility. For that you need to visit your AWS account’s IAM (Identity and Access Management) page and get those keys under ‘Access Keys (Access Key ID and Secret Access Key)‘. If you haven't created any so far, you can create a new key pair and use it.
There are two keys: the Access Key ID and the Secret Access Key. Add those keys to the file /etc/s3fs-keys, separated by a colon. You can keep them in any file name and path of your choice; I prefer to keep the file in /etc where other important OS files reside.
Remove the read permission for others on this key file. If you do not set the others permission to 0, the s3fs utility will warn you about it at mount time. For extra security you can make this file hidden by adding a . at the start of the filename.
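A minimal sketch of setting up the key file; the key values shown are placeholders, substitute your own Access Key ID and Secret Access Key:

    # store keys as ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholder values shown)
    echo "AKIAXXXXXXXXXXXXXXXX:yourSecretAccessKeyGoesHere" > /etc/s3fs-keys

    # remove read permission for group and others
    chmod 600 /etc/s3fs-keys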
Now you are ready to mount your bucket.

Mount S3 bucket

Now run the s3fs utility with the bucket name you want to mount followed by the directory on which you want to mount it. The -o switch is used to specify the key file path. If you used a filename and path other than /etc/s3fs-keys, adjust it accordingly. There are many other options which can be supplied with this command to control cache and permissions on the Linux server (OS side) which I have not mentioned here.
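A sketch of the mount command using the example names explained below; adjust the bucket, mount point and key file path for your setup:

    # create the mount point and mount the bucket
    mkdir /my_s3_bucket
    s3fs kerneltalks.bucket /my_s3_bucket -o passwd_file=/etc/s3fs-keys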
Here –
  • kerneltalks.bucket is my bucket name in S3
  • /my_s3_bucket is the directory on the server on which I mounted the S3 bucket
  • passwd_file is the path where I kept my AWS account keys
Your bucket is now mounted on your server! You can check it in the df output. You will see the filesystem type as s3fs.
You can perform all file and directory operations like you do on a normal mount point. Observe the size it shows – 256T! That's huge; that's S3 – almost unlimited storage!
You can add an entry in /etc/fstab to mount your bucket at boot too. Use the below entry:
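A hedged example of the fstab entry, reusing the same bucket, mount point and key file from above (the s3fs# prefix is the classic s3fs fstab form; newer s3fs versions also accept a fuse.s3fs filesystem type):

    # /etc/fstab entry to mount the S3 bucket at boot
    s3fs#kerneltalks.bucket /my_s3_bucket fuse _netdev,allow_other,passwd_file=/etc/s3fs-keys 0 0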


Bonus tip :
If your bucket name includes a . then s3fs may fail to mount your bucket. Curl will exit the mount operation because SSL certificate wildcard matching on the Amazon side fails due to the extra . in your bucket name. For example, if I have a bucket named kerneltalks.bucket then an SSL wildcard mismatch will happen, as indicated by the below error –
SSL: certificate subject name (*.s3.amazonaws.com) does not match target host name ‘kerneltalks.bucket.s3.amazonaws.com’
To see errors and warnings during the mount operation you can run the s3fs command with debugging on. Append the below switches to the end of your s3fs mount command:
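These are the debug options documented by the s3fs-fuse project; -f keeps s3fs in the foreground so the messages print to your terminal:

    # run the mount in the foreground with verbose s3fs and curl debug output
    s3fs kerneltalks.bucket /my_s3_bucket -o passwd_file=/etc/s3fs-keys -o dbglevel=info -f -o curldbg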
This will help you troubleshoot any issues you face during the S3 bucket mount.
This is how we mount an S3 bucket on Linux servers like RedHat, Ubuntu, Debian etc. Let us know if you have any feedback/comments/suggestions in the comment box below.

Complete AWS CSA Associate exam preparation guide!




AWS CSA Associate exam preparation guide
A small AWS CSA Associate exam preparation guide to help you get ready for the certification exam. Get confident with the list of test quizzes listed here.


Recently I cleared the Amazon Web Services Certified Solutions Architect Associate level exam and I was bombarded with many questions like: How to prepare for the AWS CSA exam? Which book to refer to when preparing for AWS CSA certification? How to study for AWS CSA? Which online resources are available for the Certified Solutions Architect exam? So I thought of summing all this up in a small post which can be useful for AWS CSA aspirants.
Remember this post is compiled from my own experience and should not be taken as the final benchmark for the certification exam. This post mainly aims to help you gain confidence for taking the examination once you are through with the syllabus and hands-on experience.
AWS has three streams where you can pursue your cloud career.
  • AWS Certified Solutions Architect (Architecture context)
  • AWS Certified Developer (Developer context)
  • AWS Certified SysOps Administrator (Operations context)
All three streams have an associate (primary or base) level certification. A later professional (higher level) certification is available for Solutions Architect only. Developer and SysOps get merged into the single AWS Certified DevOps Engineer professional certification.
So, we are talking here about the Amazon Web Services Certified Solutions Architect Associate level exam! Obviously you should be well versed with AWS and the requirements stated by Amazon on the exam page. Let's have a look at some examination details:

AWS CSA Exam details :

  • Total number of questions : 60
  • Duration : 80 minutes
  • Cost : $150
  • Type : Multiple choice questions
  • Can be retaken after a 7-day cooldown period if you fail the first attempt
  • Syllabus  : Download here.
  • Schedule your exam here.
  • Pass criteria : Not published exactly, but aim for 70% or more

AWS CSA Study material :

  • Instructor-led trainings are available. Search for such institutes in your area on Google.
  • Online trainings are available on top websites like A Cloud Guru, Cloud Academy, and Linux Academy.
  • The AWS CSA official guide here. Do not miss the exam essentials and review questions at the end of each lesson.

Quick recap before exam :

I have compiled a series of quick reviews to go through before taking the exam. Feel free to refer to them and suggest your additions/feedback.
There are flashcards available on Linux Academy under the AWS courses which will help you test your knowledge at the end of the course or module.

Below is a list of AWS quizzes which I gathered from the web which can help you put your cloud knowledge to the test and gain confidence to get ready for the exam.

Free Quiz

Premium (paid) Quiz
  • Cloud Academy : 241 Questions. Signup needed (first 7 days free access then paid account)
  • Linux Academy : 117 Questions. Signup needed (first 7 days free access then paid account)
  • A Cloud Guru : 294 Questions. Signup needed.

How to create S3 bucket in AWS

Create S3 bucket in AWS

S3, which stands for Simple Storage Service, is a storage web service provided by Amazon Web Services. S3 is a replacement for storage boxes in traditional datacenters: a highly scalable, cheap, and reliable alternative. In S3, data is stored in buckets. A bucket is the root folder in S3. You can have more than one bucket in a single AWS account. Files stored in buckets are called objects. You can control access to data by defining permissions at the bucket level and the object level.
In this article we will see how to create an S3 bucket, with screenshots.

Step 1 –

Log in to the AWS console and select S3 under Storage. You can also search for it in the console search bar.
S3 in AWS console
It will take you to the Amazon S3 console where you can see the ‘Create bucket‘ button along with the Delete bucket and Empty bucket buttons in the header.
S3 console

Step 2 –

Click Create bucket and you will be presented with the bucket wizard. Enter a bucket name of your choice; remember it should be unique across all of AWS. Select a region (geographically nearest to the source/destination from where reads/writes to this bucket will happen, to avoid latency). If you want to create a new bucket with the settings of an existing bucket, you can specify the existing bucket name in the last option.
Create bucket wizard
Hit Next after filling in the required details. You will enter the bucket properties screen as below.
S3 Bucket properties
Here you can set these properties on your bucket.
  1. Versioning. Enable this to keep all versions of objects when they are altered. Once enabled it cannot be disabled; it can only be suspended.
  2. Logging. It will track all access requests made to this bucket.
  3. Tags. Add tags of your choice to identify the bucket easily in other AWS services and billing.
All are disabled by default. Once you enable the features of your choice, hit Next. You will enter the permission settings screen.
S3 bucket permissions
Here you can manage permissions at the user, public and system level. Public and system permissions can be enabled or disabled. User-level permissions can be set per user ID for read and write access.
Once you are done, hit Next and the review screen will show you all the options you have selected, as a final confirmation before creating the bucket.
Bucket review screen before creation
Hit Create bucket now. Your bucket will be created and you will be redirected back to the bucket list screen where you can see your newly created bucket!
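For reference, the same bucket can also be created from the AWS CLI instead of the console. A hedged sketch follows; the bucket name and region are only examples, and it assumes the aws CLI is installed and configured with your credentials:

    # create a bucket (bucket names must be globally unique)
    aws s3api create-bucket --bucket my-example-bucket --region us-east-1

    # optionally enable versioning on the new bucket
    aws s3api put-bucket-versioning --bucket my-example-bucket --versioning-configuration Status=Enabled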

How to : Virtual Private Cloud in AWS

A how-to guide for Virtual Private Cloud in AWS. Learn what a VPC is and how to create, configure and delete a VPC in AWS, with screenshots.
How to guide : Virtual Private Cloud in AWS

What is VPC?

VPC stands for Virtual Private Cloud. It's your own private cloud within the public cloud. You control every aspect of the VPC and its communication with the outside world. It's like having your own datacenter which is isolated from other datacenters. When you are using cloud services, you are working inside your VPC. Servers, storage, load balancers, databases – everything you create and configure runs inside your VPC. A VPC gives you great flexibility to control your data privacy and security even though it is in the cloud.

How to create VPC in AWS?

We will walk through the process of creating a VPC in the AWS (Amazon Web Services) cloud. By default, one VPC is created for you when you create a new account with AWS. This VPC is marked as the default VPC. Whenever you are using services within AWS, this VPC will be used by default if multiple VPCs exist in your account.
Check out our AWS CSA associate certificate preparation guide
Let's follow this series of screenshots to create a VPC.
First log in to your AWS management console and navigate to VPC under the category ‘Networking and Content delivery‘. See the below image. Or you can type VPC in the AWS services search bar and you will be presented with the VPC link.
VPC in AWS management console
Now you will be presented with the VPC dashboard which shows you a summary of your VPC resources, like below:
VPC dashboard showing resources details
Here click on ‘Start VPC Wizard‘. This will kick off the VPC wizard to create your VPC step by step.

Step 1 :

Choose which kind of VPC you need. You have these choices –
  1. VPC with single public subnet
  2. VPC with public and private subnets
  3. VPC with public and private subnets with hardware VPN access
  4. VPC with private subnet only with hardware VPN access
Each choice has its own features to offer; you can see what it offers by clicking on it. We will be creating the first type of VPC in this tutorial.
Select type of VPC
Select your type of VPC in the left column and then click the blue Select button on the right.

Step 2:

Here you need to configure your subnet IP ranges, hardware tenancy and related settings. See the below screenshot and we will understand each field one by one; a CLI sketch of the same setup follows the field descriptions.
VPC configuration
  • IPv4 CIDR block : CIDR is Classless Inter-Domain Routing. It is the subnet range to be used by the VPC. IP addresses from this range will be assigned to the components or services you use in this VPC. This is a mandatory field; you have to specify your range in subnet notation. Note that this range is configured and reachable only within your VPC.
  • IPv6 CIDR block : Optional field. You can have IPv6 support in your VPC with this. Here the IP range is automatically generated and assigned by Amazon; you do not get to choose your own.
  • VPC Name : A name of your choice. It helps you identify this VPC in other parts of AWS within your account for configuration purposes. You can leave this blank since AWS identifies each of its components by an ARN (Amazon Resource Name). The ARN is an alphanumeric, system-generated name which is not user friendly, hence this field is provided optionally so that you can give your components easily recognizable names.
  • Public subnet’s IPv4 CIDR : This range is meant for outside-world communication. Your resources will be assigned an IP from this block when you want them to communicate outside the VPC.
  • Availability zone : These zones are logical groupings of AWS hardware within one specified region (geographical grouping). At any one time you can select one region to work within, and the availability zones from that region will be listed here as a drop-down. If no zone is selected, AWS will create the VPC in whichever zone has the most free resources at that instant.
  • Subnet name : Again, this one is to give your public subnet an easily recognizable name.
  • Service endpoints : These are virtual devices in AWS. If you want to add any of them to this VPC, you can browse and select them here.
  • Enable DNS hostnames : This enables DNS names to be generated for components when they are created in this VPC. These names are system generated.
  • Hardware Tenancy : Choose whether you want your VPC components to run on hardware dedicated solely to your account (Dedicated tenancy) or on the default shared hardware within the region you specified above. Dedicated tenancy runs your instances on single-tenant hardware, isolated at the physical host level from other AWS accounts.
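For reference, a hedged CLI sketch of the same idea: creating a VPC with an IPv4 CIDR block and a public subnet. The CIDR ranges, availability zone and the vpc-xxxxxxxx placeholder are only examples; it assumes the aws CLI is configured with your credentials:

    # create the VPC with an IPv4 CIDR block
    aws ec2 create-vpc --cidr-block 10.0.0.0/16

    # create a public subnet inside that VPC (replace vpc-xxxxxxxx with the returned VpcId)
    aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24 --availability-zone us-east-1a

    # enable DNS hostnames for the VPC
    aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames "{\"Value\":true}"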
Step 3 :

Click the ‘Create VPC‘ button. Your VPC will be created within seconds and you will be greeted with a screen saying “Your VPC has been successfully created. You can launch instances into the subnets of your VPC. For more information, see Launching an Instance into Your Subnet.” (link altered here with my blog post link). Click OK and you will be presented with the VPC list screen as below:
VPC list
Here you can see our newly created VPC named kerneltalks_vpc! All details of this VPC can be seen here. Your VPC is ready to use.

How to modify VPC in AWS?

After creation you can modify VPC parameters. From the VPC list shown above, select any VPC you want to edit and then click the Actions button in the header. A drop-down menu will appear to edit the below parameters:
  • Delete VPC
  • Edit CIDRs
  • Edit DHCP options set
  • Edit DNS resolution
  • Edit DNS hostnames
  • Create flow log
Flow logs can be created for any resources in the VPC to trace and see IP traffic flow information. The rest of the options are self explanatory. Here you can modify the VPC and delete the VPC too.