Monday 25 April 2022

AWS SECURITY GROUPS

SECURITY GROUPS:

A security group acts as a virtual firewall that controls the traffic for one or more instances.

When you launch an instance, you associate one or more security groups with the instance.

You add rules to each security group that allow traffic to or from its associated instances.

You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.




Click on the Security Groups section under NETWORK & SECURITY in the left pane.



Click on the security group associated with your instance.

Choose Edit inbound rules to edit the security group.



Select the specific rule type you need; I selected HTTP to open port 80 on the security group.

You can also select Custom TCP rule or Custom UDP rule and specify the port which you want to open.

Or you can specify TCP or UDP and provide a range of ports (for example, 20-100).




Then select the source by choosing Custom, Anywhere, or My IP from the Source drop-down list.


Then click on Save rules.

The same applies to outbound rules as well, where we can restrict traffic either by port or by IP address.
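The same inbound and outbound rule changes can also be made from the AWS CLI. A minimal sketch, assuming placeholder group and network IDs:

```shell
# Open port 80 (HTTP) to the world on a security group (placeholder group ID).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# Restrict outbound traffic the same way, here allowing a port range (20-100)
# only to a specific CIDR block (placeholder range).
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 20-100 \
    --cidr 203.0.113.0/24
```

As in the console, the new rules take effect immediately on every instance associated with the group.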


CHANGING OR ADDING SECURITY GROUPS TO YOUR EXISTING INSTANCES:



Select the instance from the Instances tab, then click on Actions.

Click on Security and select Change security groups.



By clicking on Add security group, you can attach an additional security group or swap out the existing one.




Click on Save to apply the change to your security groups.
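The same change can be scripted with the AWS CLI. A sketch with placeholder instance and group IDs; note that the --groups list replaces the instance's current set of security groups, so include every group you want to keep:

```shell
# Replace the full set of security groups attached to a running instance.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --groups sg-0123456789abcdef0 sg-0fedcba9876543210
```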


AWS Changing Instance-Types

Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity, and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.


The available instance types are categorized into the following families.

 General Purpose

 Compute Optimized

 Memory Optimized

 Accelerated Computing

 Storage Optimized


You can check the full list of instance types at the URL below.

Website: https://aws.amazon.com/ec2/instance-types/
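You can also browse instance types from the AWS CLI. A sketch that lists one family (m4 used here as an example) with its vCPU and memory sizes:

```shell
# List the m4 family of instance types with vCPU count and memory (MiB).
aws ec2 describe-instance-types \
    --filters "Name=instance-type,Values=m4.*" \
    --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
    --output table
```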


CHANGING INSTANCE TYPES

We can change instance types from one type to another in a few clicks, but there is downtime involved because we must shut down the instance before changing its type.

To change the instance type, select your instance and go to Actions, then Instance state, then Stop.



Once the instance has stopped, make sure you select the instance and go to Actions

tab, expand Instance Settings, then choose Change Instance Type.



Next, the Change instance type wizard will open; choose the new instance type from the Instance type drop-down list.



Next, choose apply.






Now you can see the new instance type for the instance under the Instance type column.







Now you can start the instance and it will start with the new m4.large CPU and
RAM configuration.
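The stop / resize / start sequence above can also be done from the AWS CLI. A sketch, assuming a placeholder instance ID and the m4.large target type used in this walkthrough:

```shell
# The instance must be fully stopped before it can be resized.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type, then start the instance again.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m4.large\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```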


Sunday 24 April 2022

AWS CONNECTING TO INSTANCES

  • SELECT THE INSTANCE AND CLICK ON THE CONNECT BUTTON TO CONNECT TO THE INSTANCE




  • AFTER SELECTING THE INSTANCE, CLICK ON THE SSH CLIENT TAB
  • AND COPY THE HIGHLIGHTED EC2-USER CONNECTION STRING


  • PASTE THE HIGHLIGHTED EC2-USER STRING INTO THE HOST NAME BOX IN PUTTY


  • SELECT SSH UNDER CATEGORY
  • AND SELECT AUTH TO BROWSE FOR THE PPK FILE IN PUTTY


  • AFTER SELECTING YOUR PPK FILE VIA THE BROWSE BUTTON,
  • CLICK OPEN IN PUTTY AND THE SESSION WILL LAUNCH IN A NEW WINDOW.

  • SELECT ACCEPT ON THE SECURITY ALERT
  • YOUR EC2 INSTANCE SESSION WILL OPEN SOON...
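If you prefer a plain terminal over PuTTY, the same connection can be made with OpenSSH. A sketch with a placeholder key file and public DNS name (copied from the SSH client tab); note that OpenSSH uses the original .pem key rather than a converted .ppk:

```shell
# Tighten key permissions, then connect as ec2-user (Amazon Linux default).
chmod 400 MyKeyPair.pem
ssh -i MyKeyPair.pem ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com
```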





AWS CREATING KEYPAIR

ONCE YOU ARE LOGGED IN, SELECT YOUR REGION FROM THE TOP-RIGHT DROP-DOWN MENU AS SHOWN BELOW :-

SELECT REGION :



THEN CLICK ON EC2 MENU :-



CLICK ON LAUNCH INSTANCES TO CREATE AN INSTANCE



  • ENTER THE NAME OF YOUR INSTANCE
  • SELECT AN AMI (AMAZON MACHINE IMAGE) WHICH IS FREE TIER ELIGIBLE...




  • CREATE A NEW KEY PAIR
  • ENTER THE NAME MANUALLY
  • AND SELECT (.ppk) TO GENERATE A .PPK FILE; IT WILL BE DOWNLOADED AUTOMATICALLY
  • CLICK ON LAUNCH INSTANCE
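Key pairs can also be created from the AWS CLI. A sketch with a placeholder key name; the --key-format ppk option (AWS CLI v2) saves a PuTTY-ready key, while pem is the format for OpenSSH:

```shell
# Create a key pair and save the private key locally in PuTTY (.ppk) format.
aws ec2 create-key-pair \
    --key-name MyKeyPair \
    --key-type rsa \
    --key-format ppk \
    --query "KeyMaterial" \
    --output text > MyKeyPair.ppk
```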




AWS SIGN UP PAGE

                                             AWS SIGN UP STEP BY STEP PROCESS :


OPEN THE URL BELOW AND SIGN UP FOR OR LOG IN TO YOUR AWS ACCOUNT

https://aws.amazon.com/console/





                                    CLICK ON CREATE A NEW AWS ACCOUNT 





       VERIFY THE EMAIL ADDRESS 








    
  • AFTER VERIFYING THE EMAIL ADDRESS 
  • ENTER THE VERIFICATION CODE MANUALLY





ENTER THE ROOT PASSWORD AND CLICK ON CREATE ACCOUNT.....


YOU WILL BE PRESENTED WITH THE BELOW SCREEN ......



ENTER THE PHONE NUMBER TO VERIFY  YOUR  AWS ACCOUNT






CLICK CONTINUE TO MOVE ON TO THE NEXT STEP ...




  • ENTER THE CARD DETAILS AND CLICK OK
  • NEXT, YOUR AWS ACCOUNT WILL BE CREATED AND YOU CAN LOG IN USING YOUR EMAIL ID AND PASSWORD
  • IT MAY TAKE UP TO 24 HOURS BEFORE ALL AWS FEATURES ARE AVAILABLE............


Saturday 26 March 2022

Google BigQuery vs BigTable

 

BigQuery

  • BigQuery is Google Cloud’s fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real-time.
  • You can access BigQuery by using the Cloud Console, by using the bq command-line tool, or by making calls to the BigQuery REST API using a variety of client libraries such as Java, .NET, or Python.
  • A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views.
  • You specify a location for storing your BigQuery data when you create a dataset. After you create the dataset, the location cannot be changed, but you can copy the dataset to a different location, or manually move (recreate) the dataset in a different location.
  • You can control access to datasets in BigQuery at the table and view level, at the column level, or by using IAM.
  • There are several ways to ingest data into BigQuery: batch load a set of data records; stream individual records or batches of records; use queries to generate new data and append or overwrite the results to a table; or use a third-party application or service.
  • Data loaded in BigQuery can be exported in several formats. BigQuery can export up to 1 GB of data to a single file. If you are exporting more than 1 GB of data, you must export your data to multiple files. When you export your data to multiple files, the size of the files will vary.
  • Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data.
  • An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source.

BigTable

  • BigTable is a fully managed, scalable NoSQL database service for large analytical and operational workloads.
  • You can use the cbt command-line tool or the Google Cloud Console to interact with BigTable.
  • Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key.
  • Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations.
  • Cloud Bigtable stores data in massively scalable tables, each of which is a sorted key/value map. The table is composed of rows, each of which typically describes a single entity, and columns, which contain individual values for each row. Each row is indexed by a single row key, and columns that are related to one another are typically grouped together into a column family. Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family.
  • To use Cloud Bigtable, you create instances, which contain up to 4 clusters that your applications can connect to. Each cluster contains nodes, the compute units that manage your data and perform maintenance tasks.
  • A Cloud Bigtable instance is a container for your data. Instances have one or more clusters, located in different zones. Each cluster has at least 1 node.
  • Cloud Bigtable backups let you save a copy of a table’s schema and data, then restore from the backup to a new table at a later time.
  • Dataflow templates allow you to export data from Cloud Bigtable in a variety of formats and then import the data back into Cloud Bigtable.
  • Replication for Cloud Bigtable enables you to increase the availability and durability of your data by copying it across multiple regions or multiple zones within the same region. You can also isolate workloads by routing different types of requests to different clusters.
  • You can use Dataproc to create one or more Compute Engine instances that can connect to a Cloud Bigtable instance and run Hadoop jobs.
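The command-line tools mentioned above can be sketched briefly. The bq example queries a real public dataset; the cbt example uses placeholder project, instance, and table names:

```shell
# BigQuery: run a standard-SQL query against a public dataset with bq.
bq query --use_legacy_sql=false \
    'SELECT name, SUM(number) AS total
     FROM `bigquery-public-data.usa_names.usa_1910_2013`
     GROUP BY name ORDER BY total DESC LIMIT 5'

# Bigtable: read a handful of rows from a table with cbt
# (placeholder project, instance, and table names).
cbt -project my-project -instance my-instance read my-table count=5
```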

Google Cloud Functions vs App Engine vs Cloud Run vs GKE

Serverless compute platforms like Cloud Functions, App Engine, and Cloud Run let you build, develop, and deploy applications while simplifying the developer experience by eliminating all infrastructure management.

On the other hand, Google Kubernetes Engine (GKE) runs Certified Kubernetes, which helps you facilitate the orchestration of containers via declarative configuration and automation.

Both Google serverless platforms and GKE allow you to scale your application based on your infrastructure requirements. Here’s a comparison to help you identify when to use these specific services.

Cloud Functions

  • Cloud Functions is a fully managed, serverless platform for creating stand-alone functions that respond to real-time events without the need to manage servers, configure software, update frameworks, and patch operating systems.
  • With Cloud Functions, you write simple, single-purpose functions that are attached to events produced from your cloud infrastructure and services.
  • Cloud Functions can be written using JavaScript, Python 3, Go, or Java runtimes, which makes both portability and local testing more familiar.
  • Functions are stateless. The execution environment is often initialized from scratch; this is called a cold start, and cold starts can add significant latency.
  • It is a serverless execution environment that can be used for building and connecting your cloud services. It can serve IoT workloads, ETL, webhooks, Kafka messages, analytics, and event-driven services.
  • Cloud Functions are great for building serverless backends, doing real-time data processing, and creating intelligent apps.

App Engine

  • App Engine is a fully managed, serverless platform for hosting and developing highly scalable web applications. It lets you focus on your code while App Engine manages infrastructure concerns.
  • You can scale your applications from zero to planet-scale without having to worry about and manage infrastructure.
  • You can build your application in Node.js, Java, Ruby, C#, Go, Python, or PHP runtimes. Moreover, you can also bring any library and framework to App Engine by supplying a Docker container.
  • Each Cloud project can only contain a single App Engine application. Once App Engine is created on a project, you are not allowed to change the location of your application.
  • App Engine can seamlessly host different versions of your application, and help you effortlessly create development, test, staging, and production environments.
  • With App Engine, you can route incoming traffic to different versions of your application, A/B test it, and perform incremental feature rollouts by using traffic splitting.
  • App Engine easily integrates with Cloud Monitoring and Cloud Logging to monitor your app’s health and performance. It also works with Cloud Debugger and Error Reporting to help you diagnose and fix bugs quickly.
  • You can run your applications in App Engine using the standard or flexible environments. You are allowed to simultaneously use both environments for your application to take advantage of each environment’s individual benefits.
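Deploying to either platform is a single gcloud command. A sketch with placeholder names; it assumes a main.py containing a function named hello_http (for Cloud Functions) and an app.yaml (for App Engine) in the current directory:

```shell
# Cloud Functions: deploy one HTTP-triggered function from the current
# directory (placeholder function name and runtime version).
gcloud functions deploy hello_http \
    --runtime python39 \
    --trigger-http \
    --allow-unauthenticated

# App Engine: deploy the app described by app.yaml in the current directory.
gcloud app deploy app.yaml
```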

Cloud Run

  • Cloud Run is a managed serverless compute platform that helps you run highly scalable containerized applications that can be invoked via web requests or Pub/Sub events.
  • It is built on Knative, an open standard, which enables the portability of your applications.
  • You can pick the programming language of your choice, any operating system libraries, or even bring your own binaries.
  • You can leverage container workflows since Cloud Run integrates well with services in the container ecosystem like Cloud Build, Artifact Registry, and Docker.
  • Your container instances run in a secure sandbox environment isolated from other resources.
  • With Cloud Run, you can automatically scale up or down from zero to N depending on traffic.
  • Cloud Run services are regional and are automatically replicated across multiple zones.
  • Cloud Run provides an out-of-the-box integration with Cloud Monitoring, Cloud Logging, Cloud Trace, and Error Reporting to monitor the health and performance of an application.

Google Kubernetes Engine (GKE)

  • Google Kubernetes Engine (GKE) is a managed Kubernetes service that facilitates the orchestration of containers via declarative configuration and automation.
  • It integrates with Identity and Access Management (IAM) to control access in the cluster with your Google accounts and the role permissions you set.
  • GKE runs Certified Kubernetes. This enables portability to other Kubernetes platforms across cloud and on-premises workloads.
  • You can eliminate operational overhead expenses by enabling auto-repair, auto-upgrade, and release channels.
  • GKE lets you reserve a CIDR range for your cluster, allowing your cluster IPs to coexist with private network IPs via Google Cloud VPN.
  • With GKE, you can choose clusters designed to the availability, version stability, isolation, and pod traffic requirements of your mission-critical workloads.
  • You can automatically scale your application deployment up and down based on CPU and memory utilization.
  • By default, your cluster nodes are automatically updated with the latest release version of Kubernetes. Kubernetes release updates are quickly made available within GKE.
  • Google Kubernetes Engine integrates well with Cloud Logging and Cloud Monitoring via the Cloud Console, making it easy to gain insight into your application.
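The deployment contrast between the two container platforms can be sketched with placeholder service, image, cluster, and region names:

```shell
# Cloud Run: deploy a container image as an autoscaling managed service.
gcloud run deploy my-service \
    --image gcr.io/my-project/my-image \
    --region us-central1 \
    --allow-unauthenticated

# GKE: create a cluster yourself, then deploy the same image with kubectl.
gcloud container clusters create my-cluster --num-nodes 3 --zone us-central1-a
kubectl create deployment my-app --image gcr.io/my-project/my-image
```

Cloud Run hides the cluster entirely; with GKE you keep full control of nodes, networking, and the Kubernetes API at the cost of managing them.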