Tuesday 3 January 2023

AWS Database Migration Service

 

What is AWS Database Migration Service?

AWS Database Migration Service (DMS) is a managed and automated service that provides a quick and secure way to migrate databases to the cloud, whether they run on-premises, on an Amazon RDS DB instance, or on an EC2 instance. It helps you modernize, migrate, and manage your environments in the AWS cloud. Amazon provides a wide spectrum of databases to work with, such as Amazon RDS, Aurora, DynamoDB, ElastiCache, and Redshift.

Check out this blog to learn more in detail about AWS Database Services – Amazon RDS, Aurora, DynamoDB, and ElastiCache


DMS can be used to migrate relational databases, data warehouses, NoSQL databases, and other types of databases into the cloud. AWS DMS supports homogeneous (e.g. Oracle to Oracle) and heterogeneous (e.g. Oracle to Amazon Aurora) database migrations. During migration, the source database remains operational, thereby minimizing downtime. The entire migration process can be controlled from the AWS Management Console.

Learn With Us: Join our AWS Solution Architect Training and understand AWS basics in an easy way.

AWS DMS Benefits

AWS Database Migration Service has several benefits over traditional migration methods:


  • Minimal downtime – DMS continuously replicates the changes to your source database during migration while keeping your source database operational. This allows you to switch over at any time without shutting down either database.
  • Supports Widely Used Databases – AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open-source databases.
  • Fast & Easy Setup – A migration task can be set up within a few minutes in the AWS Management Console.
  • Low cost – DMS is free to use for six months per instance when migrating to Aurora, Redshift, DynamoDB, or DocumentDB. For other targets, you pay based on the replication instance's compute and any additional log storage needed for the transfer.
  • Reliability – DMS is a self-healing service and automatically restarts if an interruption occurs. DMS also provides the option of setting up Multi-AZ (Availability Zone) replication for disaster recovery.

How does AWS DMS work?

AWS Database Migration Service (DMS) is a managed and automated migration service that allows you to migrate your data from one database to another. The process starts by connecting DMS to two endpoints: a source and a target. The only requirement to use AWS DMS is that at least one of your endpoints must be on an AWS service.

AWS database migration starts by connecting to your source database. The service then reads the data, formats it according to the target database, and loads it into the target database. It first goes through a full load migration, where the existing source data is moved to the target. During the full load, any changes made to the tables being loaded are cached on the replication server; these are the cached changes. Once the full load completes, AWS DMS immediately begins applying the cached changes to keep the source and target databases in sync with each other.


Components of AWS Database Migration Service

AWS DMS migration consists of three components that you should know about before starting a migration:

  • Replication instance
  • Source & Target Endpoints
  • Replication tasks


Replication Instance

A replication instance is simply a managed Amazon Elastic Compute Cloud (EC2) instance that is used to host one or more replication tasks.

Endpoints

DMS uses an endpoint to connect to your source and target databases. When you create an endpoint, you provide the following information:

  • Endpoint type
  • Engine type
  • Server name
  • Port number
  • Encryption protocols
  • Credentials

You can create an endpoint using the AWS DMS console, where you can also test the endpoint connection. The test verifies that the database exists at the given server name and port and that the supplied credentials can be used to connect to it.
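If you prefer to script this instead of clicking through the console, the connection test is also exposed through the API. Below is a minimal boto3 sketch, assuming a replication instance and an endpoint already exist; the ARNs are placeholders.

import boto3

dms = boto3.client("dms")

# Placeholder ARNs for illustration only; use your own resources.
replication_instance_arn = "arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE"
endpoint_arn = "arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE"

# Start the connection test; DMS runs it asynchronously.
dms.test_connection(
    ReplicationInstanceArn=replication_instance_arn,
    EndpointArn=endpoint_arn,
)

# Poll for the result of the test.
connections = dms.describe_connections(
    Filters=[{"Name": "endpoint-arn", "Values": [endpoint_arn]}]
)["Connections"]
for connection in connections:
    print(connection["Status"])  # "testing", "successful", or "failed"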

Replication Tasks

A replication task is used to move data from the source endpoint to the target endpoint. This is where you specify which tables and schemas are moved between your source and target databases and when. Creating a replication task is the last step before you start a migration.

When you create a replication task, you need to specify which replication instance to use, your target and source endpoints, and your migration type option.
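As a rough illustration, here is how those three choices map onto the CreateReplicationTask API in boto3. The task name, ARNs, and schema name below are hypothetical placeholders, and the table mapping simply includes every table in one schema.

import boto3
import json

dms = boto3.client("dms")

# Table mapping: include every table in one (hypothetical) schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sample-schema",
        "object-locator": {"schema-name": "DMS_SAMPLE", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-task",    # hypothetical name
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",  # placeholder ARNs
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",
    # "full-load", "cdc", or "full-load-and-cdc" (full load plus ongoing replication)
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)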

AWS Schema Conversion Tool

For a homogeneous migration, DMS attempts to create the target schema for you. However, this is not always possible; in those cases, you can use tools such as MySQL Workbench, Oracle SQL Developer, and pgAdmin III.

For heterogeneous migrations, AWS Database Migration Service does not perform schema conversion. In these cases, you can use the AWS Schema Conversion Tool (SCT). SCT automatically converts the source schema to a format compatible with your target database: it can generate and create an entire target schema, including tables, indexes, views, and so on.

Use Cases

DMS supports migration to Amazon RDS, Aurora, Redshift, DynamoDB, and DocumentDB. Some common use cases of AWS DMS are listed below:

1. Homogeneous Database Migration

Homogeneous database migration is when the source and target databases are the same or compatible with each other, such as Oracle to Amazon RDS for Oracle, MySQL to Amazon Aurora, MySQL to Amazon RDS for MySQL, or Microsoft SQL Server to Amazon RDS for SQL Server. Since the schema structures and data types of the source and target databases are compatible, migration is a one-step process: there is no need for schema conversion.


2. Heterogeneous Database Migration

Heterogeneous database migration is when the source and target database engines are different from each other, such as Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL. In this case, the schema structures and data types of the source and target databases differ, so a schema and code transformation is required before migration, which makes it a two-step process.


Migrating an On-Premises Oracle Database to Amazon Aurora MySQL

In this section, we will look at the step-by-step process for migrating an on-premises Oracle database (the source endpoint) to Amazon Aurora MySQL-Compatible Edition (the target endpoint) using AWS Database Migration Service (AWS DMS).

Before starting, you must have an AWS cloud account. If you don’t know how to create one, read our blog on How To Create AWS Free Tier Account.

Step 1: Configure Your Source Oracle Database

1. Run the following command to enable supplemental logging at the database level for AWS DMS:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

2. If you are using an account with only the minimal privileges required by AWS DMS, make sure those privileges have been granted to it.

Step 2: Configure Your Target Aurora MySQL Database

If you want to create a temporary dms_user with the minimal privileges required for migration, then follow the steps in official AWS docs.

Step 3: Create a Replication Instance

1. Sign in to the AWS Management Console, open the AWS DMS console, and choose Replication instances.

2. Click on Create replication instance.


3. On the Create replication instance page, enter the required details. Once done, click Create.

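The same step can be scripted. Here is a minimal boto3 sketch; the instance identifier and sizing below are illustrative assumptions, not recommendations.

import boto3

dms = boto3.client("dms")

# Equivalent of the console form; names and sizing are illustrative.
response = dms.create_replication_instance(
    ReplicationInstanceIdentifier="dms-oracle-to-aurora",
    ReplicationInstanceClass="dms.t3.medium",
    AllocatedStorage=50,       # GiB of storage on the replication instance
    MultiAZ=False,             # set True for a Multi-AZ standby
    PubliclyAccessible=False,
)
print(response["ReplicationInstance"]["ReplicationInstanceStatus"])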

Step 4: Create Oracle Source Endpoint

1. Go to the AWS DMS console and choose Endpoints. Then select Create endpoint.


2. On the Create database endpoint page, enter the required details and create the source endpoint.

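For reference, here is roughly what the same source endpoint looks like when created through boto3; every connection detail below is a placeholder.

import boto3

dms = boto3.client("dms")

# Source endpoint for the on-premises Oracle database; all connection
# details here are placeholders.
response = dms.create_endpoint(
    EndpointIdentifier="oracle-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="onprem-db.example.com",
    Port=1521,
    DatabaseName="ORCL",       # the Oracle SID
    Username="dms_user",
    Password="example-password",
)
print(response["Endpoint"]["EndpointArn"])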

Step 5: Create Aurora MySQL Target Endpoint

Create a target endpoint in a similar way to how you created the source endpoint. Select Target endpoint on the create endpoint details page, select Aurora MySQL as your target engine, enter the database details for the Aurora database, and create the endpoint.

Step 6: Create a Migration Task

1. Go to the AWS DMS console, choose Database migration tasks, and click Create task.


2. On the Create Task page, select the replication instance, source, and target endpoints that we created in previous steps and enter the other required details.


Conclusion

I hope that by now you have a better understanding of AWS Database Migration Service: its benefits, its components, and how it works. This should help you overcome the complex challenges of migrating a database to AWS.

Amazon Kinesis

 

What is Amazon Kinesis?

Amazon Kinesis is an important feature of Amazon Web Services (AWS) that collects, processes, and analyzes video and data streams in a real-time environment. This enables you to gain timely insights and react to new information instantly.

Using Amazon Kinesis, real-time data such as audio, video, website clickstreams, application logs, and IoT telemetry data can be ingested for artificial intelligence, machine learning, and other analytics applications. Amazon Kinesis also helps you process and analyze data as it arrives and respond instantly, without having to wait for the entire collection of data before processing can begin.

Also read: AWS Trusted Advisor is your personal cloud expert!

Amazon Kinesis Capabilities

There are four major capabilities of Amazon Kinesis. Let’s discuss them one by one.

Amazon Kinesis Video Streams

Amazon Kinesis Video Streams helps users securely stream video from connected devices to AWS for machine learning (ML), analytics, playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from multiple devices. It durably encrypts, stores, and indexes the video data in streams and lets you access the data through easy-to-use APIs. In short, Kinesis Video Streams helps you capture, process, and store media streams for analytics, machine learning, and playback.

Know More: AWS Database Services.

Benefits offered by AWS Kinesis Video Streams:

  1. Stream video from millions of devices: Kinesis Video Streams provides SDKs that make it easy for devices to securely stream media to AWS for playback, storage, analytics, machine learning, and other processing.
  2. Build real-time vision- and video-enabled apps: You can easily build applications with real-time computer vision capabilities through integration with Amazon Rekognition Video, and with real-time video analytics capabilities using popular open-source machine learning frameworks.
  3. Playback of live and recorded video streams: You can easily stream live and recorded media from your Kinesis video streams to a browser or mobile application using the Kinesis Video Streams HTTP Live Streaming (HLS) capability (see the sketch after this list).
  4. Durable, searchable storage: Kinesis Video Streams uses Amazon S3 as the underlying data store, which means your data is stored durably and reliably. It enables you to quickly search and retrieve video fragments based on device- and service-generated timestamps.
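To make the HLS playback capability from benefit 3 concrete, here is a minimal boto3 sketch. The stream name is hypothetical, and it assumes a device is already publishing media to the stream.

import boto3

STREAM_NAME = "my-camera-stream"  # hypothetical stream name

# HLS session URLs must be requested from the stream's data endpoint,
# not from the regular Kinesis Video Streams control-plane endpoint.
kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(
    StreamName=STREAM_NAME,
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

archived_media = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = archived_media.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE",  # use "ON_DEMAND" for recorded video
)["HLSStreamingSessionURL"]

print(url)  # open in a browser or pass to any HLS-capable player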

Read about: Amazon Elastic Load Balancing (ELB).

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams is used to collect and process huge streams of data records in real time. You can create Kinesis Data Streams applications, which are data-processing applications that read data records from a data stream. These applications use the Kinesis Client Library and can run on Amazon EC2 instances. Processed records can be sent to AWS dashboards, used to generate alerts, sent to other AWS services, or used to dynamically change advertising and pricing strategies.
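Below is a minimal boto3 sketch of the producer and consumer sides. The stream name is hypothetical, and a real consumer would use the Kinesis Client Library rather than reading a single shard by hand.

import json
import time
import boto3

kinesis = boto3.client("kinesis")
STREAM = "clickstream-demo"  # hypothetical stream name

# Producer: records sharing a partition key land on the same shard,
# which is what lets a consumer process them in order.
kinesis.put_record(
    StreamName=STREAM,
    Data=json.dumps({"user": "u-42", "action": "click", "ts": time.time()}).encode("utf-8"),
    PartitionKey="u-42",
)

# Consumer for a single shard; the KCL adds multi-shard coordination,
# checkpointing, and failover on top of these primitives.
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM,
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=iterator)["Records"]:
    print(record["Data"])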

Also read: AWS Elastic Beanstalk is an easy-to-use AWS service for deploying and scaling web applications and services.

Benefits offered by AWS Kinesis Data Streams

  1. Durability: It minimizes data loss with synchronous replication of streaming data across Availability Zones in the AWS Region.
  2. Security: Sensitive data can be encrypted within KDS, and you can access your data privately through Amazon Virtual Private Cloud (VPC).
  3. Easy to use and low cost: Components like connectors, agents, and the Kinesis Client Library (KCL) help you build streaming applications quickly and effectively. There is no upfront cost for Kinesis Data Streams; you only pay for the resources you use.
  4. Elasticity and real-time performance: You can dynamically scale your applications from gigabytes to terabytes of data per hour by adjusting throughput. Real-time analytics applications can be supplied with streaming data within a very short time of the data being collected.

Amazon Kinesis Data Firehose

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic. Kinesis Data Firehose is a component of the Kinesis streaming data platform, in conjunction with Kinesis Data Streams, Kinesis Video Streams, and Amazon Kinesis Data Analytics. With Kinesis Data Firehose, you do not have to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.
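Because Firehose owns delivery, the producer side is a single API call. A minimal boto3 sketch, assuming a delivery stream named clicks-to-s3 already exists with S3 as its destination:

import json
import boto3

firehose = boto3.client("firehose")

# Firehose handles batching, retries, and delivery to the configured
# destination; the producer only names the delivery stream.
firehose.put_record(
    DeliveryStreamName="clicks-to-s3",  # hypothetical delivery stream
    # Appending a newline keeps records line-delimited in the S3 objects.
    Record={"Data": (json.dumps({"page": "/home", "latency_ms": 73}) + "\n").encode("utf-8")},
)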

Check Also: Free AWS Training and Certifications

Benefits offered by AWS Kinesis Data Firehose

  1. Integrated with AWS services and service providers: Amazon Kinesis Data Firehose is integrated with Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.
  2. Serverless data transformation: It can easily convert raw streaming data from your data sources into formats like Apache Parquet and Apache ORC required by your destination data stores, without having to build your own data processing pipelines.
  3. Near real-time: New data is loaded into destinations within about 60 seconds of being sent to the service. As a result, you can access new data sooner and react to business and operational events faster.
  4. Pay only for what you use: With Amazon Kinesis Data Firehose, you pay only for the volume of data you transmit through the service and, if applicable, for data format conversion.

Check Out: Overview of SDLC Automation

Amazon Kinesis Data Analytics


Amazon Kinesis Data Analytics includes a machine learning feature for detecting “hotspots” in streaming data. Kinesis Data Analytics is basically a real-time processing engine that lets you write and execute SQL queries to extract meaningful information from the data, supplying the output or results to Kinesis Data Streams. The HOTSPOTS function enhances the existing machine learning capabilities and allows customers to apply an unsupervised streaming-based machine learning algorithm to their data.

You don’t need to explicitly build and train complicated machine learning models. The HOTSPOTS function has a simple syntax and accepts data types such as DOUBLE, FLOAT, TINYINT, REAL, and INTEGER. The input it accepts is a cursor, and the output it returns is a JSON string. You can stream the detected hotspots programmatically out to either a Kinesis data stream or an AWS Lambda function.
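For a sense of what this looks like in practice, here is a hedged sketch of creating a SQL application that calls HOTSPOTS via boto3. The application name is hypothetical, and the HOTSPOTS parameter values (window size, scan radius, minimum points per hotspot) are assumptions modeled on AWS’s published example; check the current documentation for the exact signature.

import boto3

# The SQL below is modeled on AWS's published HOTSPOTS example; the
# parameter values are assumptions you would tune for your own data.
APPLICATION_CODE = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" ("HOTSPOTS_RESULT" VARCHAR(10000));
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
    INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT "HOTSPOTS_RESULT" FROM TABLE(
        HOTSPOTS(CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001"),
                 1000,   -- number of records each window considers
                 0.2,    -- scan radius used for clustering
                 17)     -- minimum points for a cluster to count as hot
    );
"""

kda = boto3.client("kinesisanalytics")
kda.create_application(
    ApplicationName="hotspot-detector",  # hypothetical name
    ApplicationCode=APPLICATION_CODE,
)
# The input (a source Kinesis stream) and output (a Kinesis stream or
# Lambda function) are attached separately, e.g. via add_application_input.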

Benefits offered by AWS Kinesis Data Analytics

  1. Powerful real-time processing: It provides built-in functions to filter, aggregate, and transform streaming data for advanced analytics. It processes streaming data with sub-second latencies, enabling you to analyze and respond to incoming data and events in real time.
  2. No servers to manage: It runs your streaming applications without requiring you to provision or manage any infrastructure. Amazon Kinesis Data Analytics automatically scales the infrastructure up and down as required to process incoming data.
  3. Pay only for what you use: With Amazon Kinesis Data Analytics, you only pay for the processing resources that your streaming applications use.

Know More: AWS OpsWorks

Scenarios where Amazon Kinesis is Being Applied

  • Building real-time applications:
    Amazon Kinesis is used for purposes such as fraud detection, live leaderboards, and application monitoring. Kinesis Data Streams can ingest streaming data, which can be processed further using Kinesis Data Analytics; the results can then be emitted to any application or data store using Kinesis Data Streams.
  • Analysis of IoT Device Data:
    Streaming data coming from IoT devices such as embedded sensors, consumer appliances, and TV set-top boxes can be processed using Amazon Kinesis. The data can be used to send real-time alerts or take action when a sensor exceeds certain operating thresholds.
  • Building Video Analytics Applications:
    Video can be streamed securely to AWS from camera-equipped devices in homes, offices, and public places using Amazon Kinesis. The streams can serve purposes such as security monitoring, machine learning, face detection, playback, and other analytics.

Read more: AWS Storage

Kinesis Data Streams vs SQS

Amazon Kinesis Data Streams

  • allows real-time processing of streaming big data and the ability to read and replay records to multiple Amazon Kinesis Applications.
  • Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the same record processor, making it easier to build multiple applications that read from the same Amazon Kinesis stream (for example, to perform counting, aggregation, and filtering).

Check out: AWS DevOps

Amazon SQS

  • offers a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices.
  • moves data between distributed application components and helps decouple these components.
  • provides common middleware constructs such as dead-letter queues and poison-pill management.
  • provides a generic web services API that can be accessed with any programming language the AWS SDK supports.
  • supports both standard and FIFO queues (a minimal sketch contrasting SQS with Kinesis follows this list).
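The sketch below shows the SQS side of the contrast: each message is fetched by one consumer and then deleted, whereas Kinesis records remain in the stream for their retention period and can be replayed by multiple applications. The queue name and message body are hypothetical.

import boto3

sqs = boto3.client("sqs")

# SQS: each message is delivered to one consumer and then deleted,
# unlike Kinesis records, which stay in the stream until retention
# expires and can be replayed by multiple applications.
queue_url = sqs.create_queue(QueueName="orders-demo")["QueueUrl"]  # hypothetical queue
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 1}')

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get("Messages", [])
for message in messages:
    print(message["Body"])
    # Deleting acknowledges the message so no other consumer sees it again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])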

Amazon Kinesis comes with extraordinary features and capabilities, supporting Kinesis Data Streams, Kinesis Video Streams, Kinesis Data Analytics, and Kinesis Data Firehose.

AWS WAF (Web Application Firewall)

 

What is AWS WAF?

AWS Web Application Firewall (WAF) is a security tool that helps you protect applications against web attacks. WAF monitors and controls unusual bot traffic and blocks common attack patterns such as SQL injection and cross-site scripting. It also lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront, or an Application Load Balancer.

  • AWS WAF allows you to control access to your content based on the IP address from which a request originates.
  • Three things make AWS WAF work: web access control lists (web ACLs), rules, and rule groups.
  • AWS WAF uses web ACL capacity units (WCUs) to manage the resources required by rules, rule groups, and web ACLs.
  • AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of security rules.

Common Web Attacks

Before protecting your applications, you need to know the most common web attacks, described below.


DDoS (Distributed Denial-of-Service) attacks: This is probably the most common attack. Attackers overload an application by sending bulk requests to the web servers. Thousands of hosts infected with malware are used in this attack, spanning many unique IP addresses or machines. This slows down the application and can significantly hurt the value of a brand.

SQL injection: SQL injection is a code injection technique that can destroy your SQL database. Attackers can run malicious SQL queries on your web applications.

Cross-Site Scripting: If your application is vulnerable to cross-site scripting (XSS), an attacker can inject and run malicious scripts, generally in the form of browser-side scripts. These scripts can even rewrite the content of HTML pages.

Also Check: GCP vs AWS vs Azure, Know their major differences!

AWS WAF Features

Amazon Web Application Firewall offers many features to its users, described below.

  • Protection Against Web Attacks: With minimal latency impact on incoming traffic, AWS WAF offers rules that can inspect any element of a web request. AWS WAF protects web applications against threats by filtering traffic according to the rules you create.
  • Establish Rules According to Your Needs: AWS WAF is a versatile and valuable tool for protecting application infrastructure because it lets users establish rules matched to their needs and the vulnerabilities they wish to stop. This makes it a great solution for protecting any enterprise-level web application environment.
  • Web Traffic Filtering: AWS WAF allows users to create rules to filter web traffic based on IP addresses, HTTP headers, HTTP bodies, or URI strings in a web request.
  • Flexible Integration With AWS Services: AWS WAF offers easy integration with other AWS services like Amazon EC2, CloudFront, and Elastic Load Balancing.
  • Monitor Rules: AWS WAF allows us to create rules, then review and customize them to prevent unknown attacks.

How It Works

AWS Web Application Firewall protects applications from malicious attacks. The workflow of WAF in AWS is outlined below.

  • AWS Firewall Manager: Manages multiple AWS Web Application Firewall deployments.
  • AWS WAF: Protects deployed applications from common web exploits.
  • Create a Policy: Build your own rules using the visual rule builder.
  • Block Filter: Block filters protect against exploits and vulnerability attacks.
  • Monitor: Use Amazon CloudWatch for incoming traffic metrics and Amazon Kinesis Data Firehose for request details, then tune rules based on metrics and log data.


Getting Started With AWS WAF

AWS WAF monitors the web requests that are forwarded to API Gateway, Amazon CloudFront, or an Application Load Balancer. We will see how to get started with WAF and create a web ACL in a few steps.

Step 1: Create a web ACL: First, sign up for an AWS account, then go to the AWS Console and search for Web Application Firewall. You will land on the WAF home page; choose Create web ACL.


Step 2: Give a Name: Type the name you want to use to identify this web ACL. Then enter a description if you want (optional) and hit Next.


Step 3: Add an AWS Managed Rules rule group: Next, you need to add rules and rule groups. Click Add managed rule groups; you will land on a new page where you can browse the available managed rule groups.


AWS Managed Rules provides you with a collection of managed rule groups, the majority of which are free for AWS WAF users. After adding a managed rule group, choose to save the rule.

The rules we’re going to create will define the patterns we want to allow or block. We’ll add just 2 rules; both are shown as a boto3 sketch after this list.

  1. Regular rule: This rule protects the application from SQL injection attacks. It checks whether the URI path contains SQL injection.
  2. Rate-based rule: This rule blocks requests made from the same IP address after they exceed a certain limit in a given time period.
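Here is a minimal boto3 sketch of the same two rules expressed against the WAFv2 API, which is also what the console builds for you. All names and the rate limit are illustrative assumptions.

import boto3

wafv2 = boto3.client("wafv2")

def visibility(name):
    # CloudWatch metrics for each rule and for the web ACL itself.
    return {"SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name}

wafv2.create_web_acl(
    Name="demo-web-acl",          # hypothetical names throughout
    Scope="REGIONAL",             # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},  # allow anything the rules don't block
    Rules=[
        {   # Regular rule: block requests whose URI path looks like SQL injection.
            "Name": "block-sqli-in-uri",
            "Priority": 0,
            "Statement": {"SqliMatchStatement": {
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }},
            "Action": {"Block": {}},
            "VisibilityConfig": visibility("sqli-rule"),
        },
        {   # Rate-based rule: block an IP once it exceeds the request limit
            # within the rolling time window.
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": visibility("rate-rule"),
        },
    ],
    VisibilityConfig=visibility("demo-web-acl"),
)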


After that, review and set the rule priority, and hit Next.


Step 4: Review Web ACL Configuration: In the final step, check all the rules and managed rule groups and hit Create web ACL.


Finally, a message will pop up: You successfully created web ACL: ACL-name.
