Sunday, 20 March 2022

VPC Peering

 

  • A networking connection between two VPCs that enables you to route traffic between them privately using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.
  • You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region (also called Inter-Region VPC Peering).
  • A VPC peering connection is neither a gateway nor an AWS Site-to-Site VPN connection, and it does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.
  • VPC Peering does NOT support edge-to-edge routing. You can create multiple VPC peering connections for each VPC that you own, but transitive peering relationships are not supported.
    • In the following diagram, VPC B is not peered with VPC C.

[diagram: VPC peering connections — VPC B is not peered with VPC C]

  • Establishing A Peering Connection
    • The owner of the requester VPC sends a request to the owner of the accepter VPC to create the VPC peering connection. The accepter VPC cannot have a CIDR block that overlaps with the requester VPC’s CIDR block.
    • To enable the flow of traffic between the VPCs using private IP addresses, the owner of each VPC in the VPC peering connection must manually add a route to one or more of their VPC route tables that points to the IP address range of the other VPC (the peer VPC). A minimal boto3 sketch of these steps follows the Limitations list below.
    • Update the security group rules that are associated with your instances to ensure that traffic to and from the peer VPC is not restricted.
    • By default, if instances on either side of a VPC peering connection address each other using a public DNS hostname, the hostname resolves to the instance’s public IP address. To change this behavior, enable DNS hostname resolution for the VPC peering connection. This allows the DNS hostname to resolve to the instance’s private IP address.
  • Limitations
    • You cannot create a VPC peering connection between VPCs that have matching or overlapping IPv4 or IPv6 CIDR blocks.
    • You cannot have more than one VPC peering connection between the same two VPCs at the same time.
    • Unicast reverse path forwarding in VPC peering connections is not supported.
    • If the VPCs are in the same region, you can enable the resources on either side of a VPC peering connection to communicate with each other over IPv6.
    • For inter-region peering, you cannot create a security group rule that references a peer VPC security group, and communication over IPv6 is not supported.
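
A minimal boto3 sketch of the peering workflow above (request, accept, route, optional DNS resolution). All VPC, route table, and CIDR values are placeholders, and the accepter is assumed to be in a different Region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Requester sends the peering request (PeerRegion makes this inter-region peering).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa",          # requester VPC (placeholder)
    PeerVpcId="vpc-2222bbbb",      # accepter VPC (placeholder)
    PeerRegion="us-west-2",        # omit for same-region peering
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2. The owner of the accepter VPC accepts the request (run in the accepter's Region/account).
ec2_peer = boto3.client("ec2", region_name="us-west-2")
ec2_peer.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Each side adds a route to the peer VPC's CIDR that targets the peering connection.
ec2.create_route(
    RouteTableId="rtb-aaaa1111",           # requester route table (placeholder)
    DestinationCidrBlock="10.20.0.0/16",   # accepter VPC CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)

# Optional: make public DNS hostnames resolve to private IPs over the peering connection.
ec2.modify_vpc_peering_connection_options(
    VpcPeeringConnectionId=pcx_id,
    RequesterPeeringConnectionOptions={"AllowDnsResolutionFromRemoteVpc": True},
)
```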

AWS Transit Gateway

 

  • A networking service that uses a hub and spoke model to enable customers to connect their on-premises data centers and their Amazon Virtual Private Clouds (VPCs) to a single gateway.
  • With this service, customers only have to create and manage a single connection from the central gateway into each on-premises data center, remote office, or VPC across your network.
  • When a new VPC is attached to the Transit Gateway, it automatically becomes available to every other network that is also connected to the Transit Gateway (a minimal boto3 sketch follows below).
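
A minimal boto3 sketch of the hub-and-spoke setup described above. The VPC and subnet IDs are placeholders; default route table association/propagation is what makes new attachments reachable from the other attached networks:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the central hub (the Transit Gateway).
tgw = ec2.create_transit_gateway(
    Description="central hub",
    Options={
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a spoke VPC (one subnet per Availability Zone the attachment should cover).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-1111aaaa",                                # placeholder
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],    # placeholders
)
```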

Features

  • Inter-region peering 
    • Transit Gateway leverages the AWS global network to allow customers to route traffic across AWS Regions. 
    • Inter-region peering provides an easy and cost-effective way to replicate data for geographic redundancy or to share resources between AWS Regions.

  • Multicast 
    • Enables customers to have fine-grain control on who can consume and produce multicast traffic. 
    • It allows you to easily create and manage multicast groups in the cloud instead of the time-consuming task of deploying and managing legacy hardware on-premises. 
    • This multicast solution is also scalable so the customers can simultaneously distribute a stream of content to multiple subscribers. 
  • Automated Provisioning 
    • Using AWS Transit Gateway, customers can automatically identify their Site-to-Site VPN connections and the on-premises resources associated with them. 
    • Using the Transit Gateway Network Manager, you can also manually define your on-premises network.

AWS Global Accelerator

 

  • A service that uses the AWS Global Network to improve the availability and performance of your applications to your local and global users. 
  • It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances.
  • AWS Global Accelerator continually monitors the health of your application endpoints and will detect an unhealthy endpoint and redirect traffic to healthy endpoints in less than 1 minute.
  • How It Works

[diagram: how AWS Global Accelerator works]

  • Concepts
    • An accelerator is the resource you create to direct traffic to optimal endpoints over the AWS global network.
    • Network zones are isolated units with their own set of physical infrastructure and service IP addresses from a unique IP subnet.
    • AWS Global Accelerator provides you with a set of two static IP addresses that are anycast from the AWS edge network. It also assigns a default Domain Name System (DNS) name to your accelerator, similar to a1234567890abcdef.awsglobalaccelerator.com, that points to the static IP addresses.
    • A listener processes inbound connections from clients to Global Accelerator, based on the port (or port range) and protocol that you configure. Global Accelerator supports both TCP and UDP protocols.
    • Each endpoint group is associated with a specific AWS Region. Endpoint groups include one or more endpoints in the Region.
    • Endpoints can be Network Load Balancers, Application Load Balancers, EC2 instances, or Elastic IP addresses.
  • Benefits
    • Instant regional failover – AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route your users to the next available endpoint. Monitors the health of your application endpoints by using TCP, HTTP, and HTTPS health checks.
    • High availability – AWS Global Accelerator has a fault-isolating design that increases the availability of your application. When you create an accelerator, you are allocated two IPv4 static IP addresses that are serviced by independent network zones.
    • No variability around clients that cache IP addresses – You do not have to rely on the IP address caching settings of client devices. Change propagation of a configuration update only takes a matter of seconds.
    • Improved performance – AWS Global Accelerator ingresses traffic from the edge location that is closest to your end clients through anycast static IP addresses. Then traffic traverses the AWS global network, which optimizes the path to your application that is running in an AWS Region. AWS Global Accelerator chooses the optimal AWS Region based on the geography of end clients, which reduces first-byte latency and improves performance.
    • Easy manageability – The static IP addresses provided by AWS Global Accelerator are fixed and provide a single entry point to your applications. This lets you easily move your endpoints between Availability Zones or between AWS Regions, without having to update your DNS configuration or client-facing applications.
    • Fine-grained control – AWS Global Accelerator lets you set a traffic dial for your regional endpoint groups, to dial traffic up or down for a specific AWS Region when you conduct performance testing or application updates. In addition, if you have stateful applications, you can choose to direct all requests from a user to the same endpoint, regardless of the source port and protocol, to maintain client affinity. These features give you fine-grained control. By default, the traffic dial is set to 100% for all regional endpoint groups.
    • Also supports bring your own IP (BYOIP) so you can use your own IPs on the AWS edge network.

  • Steps in deploying an accelerator (a minimal boto3 sketch appears at the end of this Global Accelerator section)
    1. Create an accelerator
    2. Configure endpoint groups
    3. Register endpoints for endpoint groups
  • Global Accelerator vs CloudFront

[diagram: AWS Global Accelerator vs Amazon CloudFront]

Both services integrate with AWS Shield for DDoS protection.

  • Security
    • SOC, PCI, HIPAA, GDPR, and ISO compliant.
    • AWS Global Accelerator Flow Logs provide detailed records about traffic that flows through an accelerator to an endpoint.
    • As a managed service, AWS Global Accelerator is protected by the AWS global network security procedures.
  • Pricing
    • You are charged for each accelerator that is provisioned and the amount of traffic in the dominant direction that flows through the accelerator.
    • For every accelerator that is provisioned (both enabled and disabled), you are charged a fixed hourly fee and an incremental charge over your standard Data Transfer rates per GB, also called a Data Transfer-Premium fee.
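
The three deployment steps above map to three API calls. A minimal boto3 sketch with placeholder names, Regions, and ARNs (the Global Accelerator API itself is served from us-west-2, regardless of where the endpoints live):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")  # GA control plane lives in us-west-2

# 1. Create an accelerator (returns two static anycast IPs and a DNS name).
acc = ga.create_accelerator(Name="my-accelerator", IpAddressType="IPV4", Enabled=True)
acc_arn = acc["Accelerator"]["AcceleratorArn"]

# 2. Configure a listener for the ports and protocol that clients will use.
lst = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# 3. Register an endpoint group per Region; endpoints can be ALBs, NLBs, EC2 instances, or EIPs.
ga.create_endpoint_group(
    ListenerArn=lst["Listener"]["ListenerArn"],
    EndpointGroupRegion="ap-southeast-1",
    TrafficDialPercentage=100.0,   # the "traffic dial" described above (100% by default)
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:ap-southeast-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",  # placeholder ALB ARN
            "Weight": 128,
        },
    ],
)
```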

AWS Elastic Load Balancing (ELB)

 

  • Distributes incoming application or network traffic across multiple targets, such as EC2 instances, containers (ECS), Lambda functions, and IP addresses, in multiple Availability Zones.
  • When you create a load balancer, you must specify one public subnet from at least two Availability Zones. You can specify only one public subnet per Availability Zone.

General features

  • Accepts incoming traffic from clients and routes requests to its registered targets.
  • Monitors the health of its registered targets and routes traffic only to healthy targets.
  • Enable deletion protection to prevent your load balancer from being deleted accidentally. Disabled by default.
  • Deleting ELB won’t delete the instances registered to it.
  • Cross Zone Load Balancing – when enabled, each load balancer node distributes traffic across the registered targets in all enabled AZs.
  • Supports SSL offloading, a feature that lets the load balancer terminate SSL/TLS and decrypt incoming traffic, removing the encryption and decryption workload from the backend instances.

Types of Load Balancers

  • Application Load Balancer ( ALB )

    • Functions at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model.
    • Allows HTTP and HTTPS.
    • At least 2 subnets must be specified when creating this type of load balancer.
    • Components:
      • A load balancer serves as the single point of contact for clients.
      • A listener checks for connection requests from clients. Each listener has a default rule that specifies a default action (for example, forward to a target group); additional listener rules consist of a priority, actions, and conditions.
      • A target group routes requests to one or more registered targets. You can register a target with multiple target groups, and configure health checks on a per-target-group basis. (A minimal boto3 sketch of creating these components appears at the end of this ALB subsection.)


    • Benefits
      • Support for path-based and host-based routing.
      • Support for routing requests to multiple applications on a single EC2 instance.
      • Support for registering targets by IP address, including targets outside the VPC for the load balancer.
      • Support for containerized applications.
      • Support for monitoring the health of each service independently.
      • Support for redirecting requests from one URL to another.
      • Support for returning a custom HTTP response.
      • Support for the load balancer to authenticate users of your applications before routing requests using OIDC-compliant identity providers and/or Amazon Cognito user pools.
      • Support for registering Lambda functions as targets.
      • Supports load balancer-generated cookies for sticky sessions.
      • Supports Application-based cookie stickiness. This ensures that clients connect to the same load balancer target for the duration of their session using application cookies.
    • Cross-zone load balancing is always enabled.
    • If you specify targets using an instance ID, traffic is routed to instances using the primary private IP address specified in the primary network interface for the instance. If you specify targets using IP addresses, you can route traffic to an instance using any private IP address from one or more network interfaces.
    • You can also specify Lambda functions as targets to serve HTTP(S) requests.
    • HTTP/2 Support
    • WebSockets Support
    • Monitoring:
      • CloudWatch metrics – retrieve statistics about data points for your load balancers and targets as an ordered set of time-series data, known as metrics.
      • Access logs – capture detailed information about the requests made to your load balancer and store them as log files in S3.
      • Request tracing – track HTTP requests.
      • CloudTrail logs – capture detailed information about the calls made to the Elastic Load Balancing API and store them as log files in S3.
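
A minimal boto3 sketch of the ALB components above (load balancer, target group with a registered instance, listener with a default forward rule, and one path-based rule). Subnet, security group, VPC, and instance IDs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Load balancer: the single point of contact for clients (two subnets in different AZs).
alb = elbv2.create_load_balancer(
    Name="my-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group: routes requests to registered targets; health checks are per target group.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-1111aaaa",                             # placeholder
    TargetType="instance",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0123456789abcdef0"}])

# Listener with its default rule (default action: forward to the target group).
listener = elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Additional rule: path-based routing, with its own priority, condition, and action.
elbv2.create_rule(
    ListenerArn=listener["Listeners"][0]["ListenerArn"],
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```
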
  • Network Load Balancer ( NLB )

    • Functions at the fourth layer of the Open Systems Interconnection (OSI) model. Uses TCP and UDP connections.
    • At least 1 subnet must be specified when creating this type of load balancer, but the recommended number is 2.
    • Components:
      • A load balancer serves as the single point of contact for clients.
      • A listener checks for connection requests from clients.
      • A target group routes requests to one or more registered targets. You can register a target with multiple target groups. You can configure health checks on a per target group basis.
    • Benefits
      • Ability to handle volatile workloads and scale to millions of requests per second.
      • Support for static IP addresses for the load balancer; you can also assign one Elastic IP address per subnet enabled for the load balancer (see the boto3 sketch at the end of this NLB subsection).
      • Support for registering targets by IP address.
      • Support for routing requests to multiple applications on a single EC2 instance (register each instance or IP address with the same target group using multiple ports).
      • Support for containerized applications.
      • Support for monitoring the health of each service independently.
    • Cross-zone load balancing is disabled by default.
    • If you specify targets using an instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you specify targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.
    • Network Load Balancers support connections from clients over inter-region VPC peering, AWS managed VPN, and third-party VPN solutions.
    • You can deploy services that rely on the UDP protocol, such as authentication and authorization, logging, DNS, and IoT, behind a Network Load Balancer.
    • Offers multi-protocol listeners, allowing you to run applications such as DNS that rely on both TCP and UDP protocols on the same port behind a Network Load Balancer.
    • You CANNOT enable or disable Availability Zones for a Network Load Balancer after you create it.
    • Network Load Balancers use Proxy Protocol version 2 to send additional connection information such as the source and destination.
    • Preserves the client-side source IP allowing the back-end to see the IP address of the client. This can then be used by applications for further processing.
    • Automatically provides a static IP per Availability Zone (subnet) that can be used by applications as the front-end IP of the load balancer.
    • Zonal Isolation
    • In the event that your Network load balancer is unresponsive, integration with Route 53 will remove the unavailable load balancer IP address from service and direct traffic to an alternate Network Load Balancer in another region.
    • Supports TLS termination on Network Load Balancers. Additionally, Network Load Balancers preserve the source IP of the clients to the back-end applications, while terminating TLS on the load balancer.
    • Monitoring:
      • CloudWatch metrics – retrieve statistics about data points for your load balancers and targets as an ordered set of time-series data, known as metrics.
      • VPC Flow Logs – capture detailed information about the traffic going to and from your Network Load Balancer.
      • CloudTrail logs – capture detailed information about the calls made to the Elastic Load Balancing API and store them as log files in Amazon S3.
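
A minimal boto3 sketch of an internet-facing NLB with one Elastic IP per enabled subnet and a TLS listener that terminates TLS at the load balancer. All IDs and the ACM certificate ARN are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing NLB with a static Elastic IP per enabled subnet (one per AZ).
nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0aaa1111"},  # placeholders
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-0bbb2222"},
    ],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP target group; with instance-ID targets the client source IP is preserved.
tg = elbv2.create_target_group(
    Name="tcp-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-1111aaaa",          # placeholder
    TargetType="instance",
)

# TLS listener: TLS is terminated on the NLB using an ACM certificate (placeholder ARN).
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```
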
  • Gateway Load Balancer ( GWLB )

    • Primarily used for deploying, scaling, and running third-party virtual appliances.
    • The virtual appliances can be your custom firewalls, deep packet inspection systems, or intrusion detection and prevention systems in AWS
    • Uses the Internet Protocol (IP) to pass the OSI Layer 3 traffic to its registered targets.
    • GWLB target groups use the Generic Network Virtualization Encapsulation (GENEVE) protocol on port 6081.
    • Runs within one Availability Zone (AZ)
    • You cannot specify publicly routable IP addresses as your target
    • You can provision a Gateway Load Balancer endpoint to create secure, low-latency connections to and from the virtual appliances (see the boto3 sketch at the end of this GWLB subsection).
    • Does not support SSL Offloading, Server Name Indication (SNI), Back-end Server Encryption, User Authentication, Custom Security Policy or Application-Layer Protocol Negotiation (ALPN)
    • Monitoring:
      • CloudWatch metrics –  retrieve statistics about data points for your Gateway Load Balancers and virtual appliances (targets).
      • VPC Flow Logs – capture the information about the incoming and outgoing traffic to and from your Gateway Load Balancers (GWLB).
      • CloudTrail logs –  determine which GWLB API calls were made, the source IP address where the call came from, who made the call and when the call was made.
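
A minimal boto3 sketch of exposing a Gateway Load Balancer as an endpoint service and creating a Gateway Load Balancer endpoint in a consumer VPC. The GWLB ARN and the VPC/subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Expose the GWLB (which fronts the virtual appliances) as a VPC endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/gwy/my-gwlb/0123456789abcdef"  # placeholder
    ],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Create the Gateway Load Balancer endpoint in the consumer VPC (one subnet, i.e. one AZ).
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=service_name,
    VpcId="vpc-2222bbbb",              # placeholder
    SubnetIds=["subnet-cccc3333"],     # placeholder
)
```
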
  • Classic Load Balancer ( CLB )

    • Distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones.
    • Intended for use with the EC2-Classic network. Register instances with the load balancer. AWS recommends using Application or Network Load Balancers instead.


    • To ensure that your registered instances are able to handle the request load in each AZ, keep approximately the same number of instances in each AZ registered with the load balancer.
    • Benefits
      • Support for EC2-Classic
      • Support for TCP and SSL listeners
      • Support for sticky sessions using application-generated cookies
    • An Internet-facing load balancer has a publicly resolvable DNS name, so it can route requests from clients over the Internet to the EC2 instances that are registered with the load balancer. Classic Load Balancers in EC2-Classic are always Internet-facing.
    • Monitoring:
      • CloudWatch metrics – retrieve statistics about ELB-published data points as an ordered set of time-series data, known as metrics.
      • Access logs – capture detailed information for requests made to your load balancer and stores them as log files in the S3 bucket that you specify.
      • CloudTrail logs – keep track of the calls made to the Elastic Load Balancing API by or on behalf of your AWS account
  • HTTP Headers

    • Application Load Balancers and Classic Load Balancers support X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers.

  • Choose whether to make an internal load balancer or an Internet-facing load balancer. Classic Load Balancer in EC2-Classic must be an Internet-facing load balancer.
    • The nodes of an Internet-facing load balancer have public IP addresses.
    • The nodes of an internal load balancer have only private IP addresses.
  • Public DNS name format for your load balancers
    • EC2-VPC : name-1234567890.region.elb.amazonaws.com (supports IPv4 addresses only)
    • EC2-Classic: (support both IPv4 and IPv6 addresses)
      • name-123456789.region.elb.amazonaws.com
      • ipv6.name-123456789.region.elb.amazonaws.com
      • dualstack.name-123456789.region.elb.amazonaws.com
  • Load Balancer States

    • Provisioning – The load balancer is being set up.
    • Active – The load balancer is fully set up and ready to route traffic.
    • Failed – The load balancer could not be set up.
  • By default, the ELB idle timeout is set to 60 seconds. If a target doesn’t send data at least every 60 seconds while the request is in flight, the load balancer can close the front-end connection. For back-end connections, enable the HTTP keep-alive option for your EC2 instances.
  • You can register each EC2 instance or IP address with the same target group multiple times using different ports, which enables the load balancer to route requests to microservices.
  • Listeners define the port and protocol to listen on.
  • Listener rules determine how the load balancer routes requests to the targets in one or more target groups. You can add rules that specify different target groups based on the content of the request. If no rule matches, the action of the default rule is performed. The parts of a rule are:
    • Rule priority
    • Rule action
    • Rule conditions
  • Slow Start Mode gives targets time to warm up before the load balancer sends them a full share of requests.
  • Sticky sessions route requests to the same target in a target group. You enable sticky sessions at the target group level. You can also set the duration for the stickiness of the load balancer-generated cookie, in seconds. Useful if you have stateful applications. (A minimal boto3 sketch of adjusting the idle timeout, slow start, and stickiness attributes follows the health-check status list below.)
  • Health checks verify the status of your targets. The statuses for a registered target are:

    • initial – The load balancer is in the process of registering the target or performing the initial health checks on the target.
    • healthy – The target is healthy.
    • unhealthy – The target did not respond to a health check or failed the health check.
    • unused – The target is not registered with a target group, the target group is not used in a listener rule for the load balancer, or the target is in an Availability Zone that is not enabled for the load balancer.
    • draining – The target is deregistering and connection draining is in process.
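
A minimal boto3 sketch of the attribute tweaks mentioned above (idle timeout, slow start, and load balancer-generated cookie stickiness). The ARNs and values are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Raise the idle timeout from the default 60 seconds (load balancer attribute).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",  # placeholder
    Attributes=[{"Key": "idle_timeout.timeout_seconds", "Value": "120"}],
)

# Enable slow start and duration-based (lb_cookie) stickiness on a target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-targets/0123456789abcdef",  # placeholder
    Attributes=[
        {"Key": "slow_start.duration_seconds", "Value": "30"},
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```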

Security, Authentication and Access Control

  • Use IAM Policies to grant permissions
  • Resource-level permissions
  • Security groups control the traffic allowed to and from your load balancer (a minimal boto3 sketch of the internet-facing rules follows this list).
    • Recommended rules for an internet-facing load balancer:
      • Inbound – Source: 0.0.0.0/0, Port range: listener port(s)
      • Outbound – Destination: instance security group, Port range: instance listener port(s)
      • Outbound – Destination: instance security group, Port range: health check port
    • Recommended rules for an internal load balancer:
      • Inbound – Source: VPC CIDR, Port range: listener port(s)
      • Outbound – Destination: instance security group, Port range: instance listener port(s)
      • Outbound – Destination: instance security group, Port range: health check port
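
A minimal boto3 sketch of the internet-facing recommendations above, assuming an HTTP listener and health check on port 80 and a separate backend (instance) security group; all IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

LB_SG = "sg-0aaa1111bbb22222c"        # load balancer security group (placeholder)
INSTANCE_SG = "sg-0ddd3333eee44444f"  # instance security group (placeholder)

# Inbound: allow clients anywhere on the Internet to reach the listener port.
ec2.authorize_security_group_ingress(
    GroupId=LB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Outbound: allow the load balancer to reach the instances on the listener /
# health check port (both 80 in this example), referencing the instance security group.
ec2.authorize_security_group_egress(
    GroupId=LB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": INSTANCE_SG}],
    }],
)
```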

Summary of Features 

[table: Application Load Balancer vs Network Load Balancer vs Gateway Load Balancer feature comparison]

Pricing

  • You are charged for each hour or partial hour that an Application Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used per hour.
  • You are charged for each hour or partial hour that a Network Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used by Network Load Balancer per hour.
  • You are charged for each hour or partial hour that a Gateway Load Balancer is running and the number of Gateway Load Balancer Capacity Units (GLCU) used by Gateway Load Balancer per hour.
  • You are charged for each hour or partial hour that a Classic Load Balancer is running and for each GB of data transferred through your load balancer.

AWS Direct Connect

 

  • Using Direct Connect, data can now be delivered through a private network connection between AWS and your datacenter or corporate network.
  • Direct Connect links your internal network to a Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to a Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services or to Amazon VPC.
  • 1 Gbps, 10 Gbps, and 100 Gbps connections are available.
  • Hosted connections with capacities of 1, 2, 5, and 10 Gbps are also supported, giving customers higher hosted-connection capacities that were previously only available via dedicated connections.
  • AWS Direct Connect also supports AWS Transit Gateway, in addition to Site-to-Site VPN connections. With this feature, customers can connect thousands of Amazon VPCs in multiple AWS Regions to their on-premises networks using 1/2/5/10 Gbps AWS Direct Connect connections.


Beneficial Use Cases

    • When transferring large data sets.
    • When developing and using applications that use real-time data feeds.
    • When building hybrid environments that satisfy regulatory requirements requiring the use of private connectivity.

Setting Up Methods

    • 1 Gbps or higher – Connect directly to an AWS device from your router at an AWS Direct Connect location.
    • 1 Gbps or higher – Work with a partner in the AWS Partner Network (APN) or a network provider to connect a router from your data center, office, or colocation environment to an AWS Direct Connect location. The network provider does not have to be a member of the APN to connect you.
    • Less than 1 Gbps – Work with a partner in the AWS Partner Network who can create a hosted connection for you. Sign up for AWS and then follow the instructions to accept your hosted connection.

Components

    • Connections – Create a connection in an AWS Direct Connect location to establish a network connection from your premises to an AWS Region. From Direct Connect you can connect to all AZs within the region.
    • Virtual interfaces – Create a virtual interface to enable access to AWS services. A public virtual interface enables access to public services, such as S3. A private virtual interface enables access to your VPC.
  • To access public resources in a remote Region, you must set up a public virtual interface and establish a Border Gateway Protocol session.
  • You can create a Direct Connect gateway in any public Region. Use it to connect your Direct Connect connection over a private virtual interface to VPCs in your account that are located in different Regions.
  • To provide for failover, request and configure two dedicated connections to AWS. These connections can terminate on one or two routers in your network. There are different configuration choices available:
    • Active/Active (BGP multipath) – This is the default configuration, where both connections are active. If one connection becomes unavailable, all traffic is routed through the other connection.
    • Active/Passive (failover) – One connection is handling traffic, and the other is on standby. If the active connection becomes unavailable, all traffic is routed through the passive connection.
  • Autonomous System numbers (ASN) are used to identify networks that present a clearly defined external routing policy to the Internet.

Cross Connects

    • After you have downloaded your Letter of Authorization and Connecting Facility Assignment (LOA-CFA), you must complete your cross-network connection, also known as a cross connect.
    • If you already have equipment located in a Direct Connect location, contact the appropriate provider to complete the cross connect.
    • If you do not already have equipment located in a Direct Connect location, you can work with one of the partners in the AWS Partner Network to help you to connect to an AWS Direct Connect location.

Virtual Interfaces

    • You must create a virtual interface to begin using your Direct Connect connection (a minimal boto3 sketch follows this list).
    • You can configure multiple virtual interfaces on a single AWS Direct Connect connection.
      • For private virtual interfaces, you need one private virtual interface for each VPC to connect to from the AWS Direct Connect connection, or you can use an AWS Direct Connect gateway.
    • Prerequisites
      • Connection: The Direct Connect connection or link aggregation group for which you are creating the virtual interface.
      • Virtual interface name: A name for the virtual interface.
      • Virtual interface owner
      • (Private virtual interface only) Connection to: the virtual private gateway or Direct Connect gateway that the virtual interface connects to.
      • VLAN: A unique virtual local area network tag that’s not already in use on your connection.
      • Address family: Whether the BGP peering session will be over IPv4 or IPv6.
      • Peer IP addresses: A virtual interface can support a BGP peering session for IPv4, IPv6, or one of each (dual-stack). You cannot create multiple BGP sessions for the same IP addressing family on the same virtual interface.
      • BGP information: A public or private Border Gateway Protocol Autonomous System Number for your side of the BGP session, and an MD5 BGP authentication key.
      • (Public virtual interface only) Prefixes you want to advertise: Public IPv4 routes or IPv6 routes to advertise over BGP. You must advertise at least one prefix using BGP.
    • The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The MTU of a virtual private interface can be either 1500 or 9001 (jumbo frames). The MTU of a transit virtual interface for VPC Transit Gateways associated with Direct Connect gateways can be either 1500 or 8500 (jumbo frames). A public virtual interface doesn’t support jumbo frames.
    • Jumbo frames are supported on virtual private interfaces attached to a virtual private gateway or a Direct Connect gateway. Jumbo frames apply only to propagated routes from Direct Connect.
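
A minimal boto3 sketch of creating a private virtual interface using the prerequisites listed above. The connection ID, VLAN, ASN, authentication key, and Direct Connect gateway ID are placeholder values:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Private virtual interface attached to a Direct Connect gateway (placeholder IDs).
dx.create_private_virtual_interface(
    connectionId="dxcon-ffabc123",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "my-private-vif",
        "vlan": 101,                    # unique VLAN tag on this connection
        "asn": 65000,                   # your side's BGP ASN
        "authKey": "example-md5-key",   # optional MD5 BGP authentication key
        "addressFamily": "ipv4",
        "mtu": 9001,                    # jumbo frames are supported on private VIFs
        "directConnectGatewayId": "0123abcd-example-dxgw",  # placeholder
    },
)
```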

Link Aggregation Groups (LAG)

    • A logical interface that uses the Link Aggregation Control Protocol (LACP) to aggregate multiple connections at a single Direct Connect endpoint, allowing you to treat them as a single, managed connection (a minimal boto3 sketch follows this list).
    • All connections in the LAG must use the same bandwidth.
    • You can have a maximum of four connections in a LAG. Each connection in the LAG counts towards your overall connection limit for the Region.
    • All connections in the LAG must terminate at the same Direct Connect endpoint.
    • Can aggregate up to 4 Direct Connect ports into a single connection using LAG.
    • All connections in a LAG operate in Active/Active mode.
    • LAGs are only available for dedicated 1 Gbps and 10 Gbps connections.
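
A minimal boto3 sketch of requesting a LAG of two new dedicated 10 Gbps connections; the location code is taken from describe_locations and everything else is a placeholder:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# List available Direct Connect locations, then request a LAG of two 10 Gbps ports there.
locations = dx.describe_locations()["locations"]

dx.create_lag(
    lagName="my-lag",
    location=locations[0]["locationCode"],   # e.g. a colocation facility code
    numberOfConnections=2,                   # up to 4 connections per LAG
    connectionsBandwidth="10Gbps",           # all connections in the LAG use the same bandwidth
)
```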

Direct Connect Gateways

    • Use a Direct Connect gateway to connect your Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different Regions (a minimal boto3 sketch follows this list).
    • It is a globally available resource.
    • Direct Connect gateway also enables you to connect between your on-premises networks and Amazon Virtual Private Cloud (Amazon VPC) in any commercial AWS Region except in China regions.
    • Prior to multi-account support, you could only associate Amazon VPCs with a Direct Connect gateway in the same AWS account. With the launch of multi-account support for Direct Connect gateway, you can associate up to 10 Amazon VPCs from multiple accounts with a Direct Connect gateway. The VPCs must be owned by AWS Accounts that belong to the same AWS payer account ID.
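
A minimal boto3 sketch of creating a Direct Connect gateway and associating it with a VPC's virtual private gateway; the ASN and gateway ID are placeholders:

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# A Direct Connect gateway is a global resource; the client Region only selects the API endpoint.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="my-dx-gateway",
    amazonSideAsn=64512,
)
dxgw_id = gw["directConnectGateway"]["directConnectGatewayId"]

# Associate the gateway with a VPC's virtual private gateway (placeholder ID); the VPC can
# be in a different Region, and with multi-account support, in another account under the
# same payer account.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw_id,
    virtualGatewayId="vgw-0123456789abcdef0",
)
```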

Security

    • Use IAM for controlling access.

Monitoring

    • You can optionally assign tags to your Direct Connect resources to categorize or manage them. A tag consists of a key and an optional value, both of which you define.
    • CloudTrail captures all API calls for AWS Direct Connect as events.
    • Set up CloudWatch alarms to monitor metrics.

Pricing

    • You pay only for the network ports you use and the data you transfer over the connection.
    • Pricing is per port-hour consumed for each port type. Data transfer out over AWS Direct Connect is charged per GB. Data transfer IN is $0.00 per GB in all locations.