Monday, 25 April 2022

AWS SECURITY GROUPS

SECURITY GROUPS:

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.




Click on the Security Groups section under NETWORK & SECURITY in the left pane.



Click on Security Groups and select the security group attached to your instance.

Choose Edit inbound rules to edit the security group.



Select the specific rule you need; I selected HTTP to open port 80 on the security group.

You can select the Custom TCP rule or Custom UDP rule and specify the single port you want to open.

Or you can choose TCP or UDP and specify a range of ports (e.g., 20-100).




Then select the source from the Source drop-down list (Custom, Anywhere, or My IP).

Then click on Save rules.

The same applies to outbound rules as well, where we can restrict traffic either by port or by IP address. A scripted version of the same change is sketched below.
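
For anyone scripting this instead of clicking through the console, here is a minimal sketch of the same inbound-rule change using boto3, the AWS SDK for Python; the security group ID is a placeholder you would replace with your own.

    # Open port 80 (HTTP) to the world on an existing security group.
    import boto3

    ec2 = boto3.client("ec2")  # uses your configured AWS credentials/region

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 80,  # open port 80 (HTTP)...
                "ToPort": 80,
                "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # ...from Anywhere
            }
        ],
    )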


CHANGING OR ADDING SECURITY GROUPS TO YOUR EXISTING INSTANCES:



Go to the Instances tab, select the instance, then click on Actions.

Click on Security and select Change Security Groups.



By clicking on Add security group, you can change the security groups attached to the instance.




Click on Save to apply and save your security group changes. A scripted version is sketched below.
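
The same change can be made with boto3; a minimal sketch, where the instance and security group IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Attach a new set of security groups to an existing instance.
    # Note: this list REPLACES the groups currently attached.
    ec2.modify_instance_attribute(
        InstanceId="i-0123456789abcdef0",  # placeholder instance ID
        Groups=["sg-0123456789abcdef0", "sg-0fedcba9876543210"],  # placeholders
    )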


AWS Changing Instance-Types

Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.


There are a few categories of instance types available, which fall into the following groups.

  • General Purpose

  • Compute Optimized

  • Memory Optimized

  • Accelerated Computing

  • Storage Optimized


You can check the full list of instance types at the URL below.

Website: https://aws.amazon.com/ec2/instance-types/
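
If you would rather query the instance-type catalogue programmatically, here is a short boto3 sketch (the type names are just examples):

    import boto3

    ec2 = boto3.client("ec2")

    # Look up the vCPU and memory specs of a couple of instance types.
    resp = ec2.describe_instance_types(InstanceTypes=["t2.micro", "m4.large"])
    for it in resp["InstanceTypes"]:
        print(
            it["InstanceType"],
            it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
            it["MemoryInfo"]["SizeInMiB"], "MiB RAM",
        )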


CHANGING INSTANCE TYPES

We can change an instance from one type to another with a few clicks, but there is downtime involved because we must shut down the instance to change its type.

To change the instance type, select your instance and go to Actions, then Instance State, then Stop.



Once the instance has stopped, make sure you select the instance, go to the Actions tab, expand Instance Settings, then choose Change Instance Type.



Next, the Change instance type wizard will open; choose the new instance type from the Instance type drop-down list.



Next, choose Apply.






Now you can see the new instance type for the instance in the Instance type column.







Now you can start the instance and it will start with the new m4.large CPU and
RAM configuration.
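
The whole stop, change type, start flow can also be scripted; a minimal boto3 sketch, with a placeholder instance ID and the same m4.large target type as above:

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # The type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "m4.large"},
    )

    ec2.start_instances(InstanceIds=[instance_id])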


Sunday, 24 April 2022

AWS CONNECTING TO INSTANCES

  • SELECT THE INSTANCE AND CLICK ON THE CONNECT BUTTON TO CONNECT TO THE INSTANCE




  • AFTER CLICKING CONNECT, SELECT THE SSH CLIENT TAB
  • AND COPY THE HIGHLIGHTED EC2 USER STRING


  • PASTE THE HIGHLIGHTED EC2 USER STRING INTO THE PUTTY HOST NAME BOX


  • SELECT SSH UNDER CATEGORY
  • AND SELECT AUTH TO LOAD THE .PPK FILE INTO PUTTY


  • AFTER SELECTING YOUR .PPK FILE VIA THE BROWSE MENU,
  • CLICK OPEN IN PUTTY, AND THE SESSION WILL LAUNCH IN A NEW WINDOW.

  • SELECT ACCEPT ON THE SECURITY ALERT
  • THE EC2 INSTANCE SESSION WILL OPEN SOON...
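
If you prefer connecting from Python instead of PuTTY, here is a minimal sketch using the paramiko library; it assumes you kept the .pem version of the key (paramiko does not read PuTTY's .ppk format), and the hostname and key path are placeholders:

    import paramiko

    client = paramiko.SSHClient()
    # Auto-accepting the host key mirrors PuTTY's "Accept" prompt.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    client.connect(
        hostname="ec2-12-34-56-78.compute-1.amazonaws.com",  # placeholder public DNS
        username="ec2-user",  # default user on Amazon Linux AMIs
        key_filename="my-key.pem",  # placeholder path to the PEM key
    )

    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()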





AWS CREATING KEYPAIR

ONCE YOU ARE LOGGED IN, SELECT YOUR REGION FROM THE DROP-DOWN MENU AT THE TOP RIGHT, AS SHOWN BELOW:

SELECT REGION :



THEN CLICK ON THE EC2 MENU:



CLICK ON LAUNCH INSTANCES TO CREATE AN INSTANCE



  • ENTER THE NAME OF YOUR INSTANCE
  • SELECT AN AMI (AMAZON MACHINE IMAGE) THAT IS FREE TIER ELIGIBLE...




  • CREATE A NEW KEY PAIR
  • ENTER THE KEY PAIR NAME MANUALLY
  • SELECT (.ppk) TO GENERATE A .PPK FILE, WHICH WILL BE DOWNLOADED AUTOMATICALLY
  • CLICK ON LAUNCH INSTANCE
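
Key pairs can also be created outside the console; here is a minimal boto3 sketch (the key name is a placeholder). Note that the API returns the private key as PEM text; the console's .ppk option is a PuTTY-format download, and a .pem key can be converted to .ppk with PuTTYgen.

    import boto3

    ec2 = boto3.client("ec2")

    # Create the key pair and save the returned PEM private key locally.
    resp = ec2.create_key_pair(KeyName="my-key")  # placeholder key name
    with open("my-key.pem", "w") as f:
        f.write(resp["KeyMaterial"])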




AWS SIGN UP PAGE

AWS SIGN UP STEP BY STEP PROCESS:


OPEN THE URL BELOW AND SIGN UP OR LOG IN TO YOUR AWS ACCOUNT

https://aws.amazon.com/console/





CLICK ON CREATE A NEW AWS ACCOUNT





VERIFY THE EMAIL ADDRESS








    
  • AFTER VERIFYING THE EMAIL ADDRESS 
  • ENTER THE VERIFICATION CODE MANUALLY





ENTER THE ROOT PASSWORD AND CLICK ON CREATE ACCOUNT...


YOU WILL BE PRESENTED WITH THE SCREEN BELOW...



ENTER THE PHONE NUMBER TO VERIFY YOUR AWS ACCOUNT






CLICK CONTINUE TO MOVE ON TO THE NEXT STEP ...




  • ENTER THE CARD DETAILS AND CLICK OK
  • NEXT, YOUR AWS ACCOUNT WILL BE CREATED AND YOU CAN LOG IN USING YOUR EMAIL ID AND PASSWORD
  • AFTER 24 HRS YOU CAN USE ALL THE FEATURES OF AWS...


Saturday, 26 March 2022

Google BigQuery vs BigTable

 

BigQuery:

  • BigQuery is Google Cloud’s fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real-time.
  • You can access BigQuery by using the Cloud Console, by using the bq command-line tool, or by making calls to the BigQuery REST API using a variety of client libraries such as Java, .NET, or Python.
  • A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views.
  • You specify a location for storing your BigQuery data when you create a dataset. After you create the dataset, the location cannot be changed, but you can copy the dataset to a different location, or manually move (recreate) the dataset in a different location.
  • You can control access to datasets in BigQuery at the table and view level, at the column level, or by using IAM.
  • There are several ways to ingest data into BigQuery:
        • Batch load a set of data records.
        • Stream individual records or batches of records.
        • Use queries to generate new data and append or overwrite the results to a table.
        • Use a third-party application or service.
  • Data loaded in BigQuery can be exported in several formats. BigQuery can export up to 1 GB of data to a single file. If you are exporting more than 1 GB of data, you must export your data to multiple files. When you export your data to multiple files, the size of the files will vary.
  • Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data.
  • An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source.
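
To make the BigQuery side concrete, here is a minimal sketch of running a query with the google-cloud-bigquery client library; it queries a public sample dataset and assumes your default credentials and project are already configured:

    from google.cloud import bigquery

    client = bigquery.Client()  # project taken from default credentials

    # Aggregate a public sample table; the query runs as a BigQuery job.
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(sql).result():
        print(row.name, row.total)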
BigTable:

  • A fully managed, scalable NoSQL database service for large analytical and operational workloads.
  • You can use cbt command-line tool or Google Cloud Console to interact with BigTable.
  • Cloud Bigtable is a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key.
  • Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations.
  • Cloud Bigtable stores data in massively scalable tables, each of which is a sorted key/value map. The table is composed of rows, each of which typically describes a single entity, and columns, which contain individual values for each row. Each row is indexed by a single row key, and columns that are related to one another are typically grouped together into a column family. Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family.
  • To use Cloud Bigtable, you create instances, which contain up to 4 clusters that your applications can connect to. Each cluster contains nodes, the compute units that manage your data and perform maintenance tasks.
  • A Cloud Bigtable instance is a container for your data. Instances have one or more clusters, located in different zones. Each cluster has at least 1 node.
  • Cloud Bigtable backups let you save a copy of a table’s schema and data, then restore from the backup to a new table at a later time.
  • Dataflow templates allow you to export data from Cloud Bigtable in a variety of formats and then import the data back into Cloud Bigtable.
  • Replication for Cloud Bigtable enables you to increase the availability and durability of your data by copying it across multiple regions or multiple zones within the same region. You can also isolate workloads by routing different types of requests to different clusters.
  • You can use Dataproc to create one or more Compute Engine instances that can connect to a Cloud Bigtable instance and run Hadoop jobs.
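
And for the BigTable side, here is a minimal sketch of writing and reading one row with the google-cloud-bigtable client library; the project, instance, table, and column family names are placeholders and must already exist:

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")  # placeholder project
    instance = client.instance("my-instance")  # placeholder instance
    table = instance.table("my-table")  # placeholder table

    # Write: each cell is addressed by row key + column family + qualifier.
    row = table.direct_row(b"user#1001")  # the row key is the single index
    row.set_cell("profile", "name", "Alice")  # family "profile" must exist
    row.commit()

    # Read the row back by its key.
    got = table.read_row(b"user#1001")
    print(got.cells["profile"][b"name"][0].value)  # b'Alice'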