- A secured and managed Kubernetes service with auto-scaling and multi-cluster support
Features
- Can be configured to automatically scale node pools and clusters across multiple node pools based on changing workload requirements.
- Auto-repair can be enabled to perform periodic health checks on nodes and repair unhealthy ones.
- Choose clusters tailored to your requirements based on:
- Availability
- Version Stability
- Isolation
- Pod Traffic requirements
- Enable Cloud Logging and Cloud Monitoring via simple checkbox configurations.
- The Kubernetes version can be set to auto-upgrade to the latest release patch.
- Supports Docker container format.
- Integrates with Google Container Registry so you can easily access your private Docker images.
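As a sketch, the auto-scaling, auto-repair, and auto-upgrade features above map onto flags of `gcloud container clusters create` (the cluster name, zone, and node counts below are illustrative):

```shell
# Create a GKE cluster with autoscaling, auto-repair, and
# auto-upgrade enabled ("demo-cluster" is a hypothetical name).
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --enable-autoscaling --min-nodes 1 --max-nodes 5 \
    --enable-autorepair \
    --enable-autoupgrade
```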
Kubernetes Cluster Architecture
- kubectl
- Is the main CLI tool for running commands and managing Kubernetes clusters.
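A few everyday kubectl commands, as a sketch (file and Pod names are illustrative):

```shell
kubectl get nodes                  # list worker nodes in the cluster
kubectl get pods --all-namespaces  # list Pods across all namespaces
kubectl apply -f deployment.yaml   # create/update objects from a manifest
kubectl logs my-pod                # print logs from a Pod
kubectl delete -f deployment.yaml  # remove the objects again
```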
- Cluster
- All of the Kubernetes objects that represent your containerized applications run on top of a cluster.
- Node
- Nodes are the worker machines that run your containerized applications and other workloads.
- A cluster typically has one or more nodes.
- Kubernetes runs your workload by placing containers into Pods to run on Nodes.
- Node Pool
- A node pool is a set of nodes within a cluster that have similar configurations.
- Cluster Autoscaler
- Cluster Autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads.
- Horizontal Pod Autoscaling
- HPA automatically scales the number of pods in response to:
- your workload’s CPU or memory consumption
- custom metrics reported from within Kubernetes
- custom metrics reported from sources external to the cluster.
- Cannot be used for workloads that cannot be scaled, such as DaemonSets.
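A minimal HPA manifest sketch, scaling a hypothetical Deployment named `web` on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU utilization (%)
```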
- Pods
- Are the smallest deployable units of computing that you can create and manage in Kubernetes.
- Every pod has its own IP address.
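A minimal Pod manifest, as a sketch (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # any container image
      ports:
        - containerPort: 80
```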
- Deployment
- You describe the desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.
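For example, the desired state below is "three replicas of this Pod template"; the Deployment controller creates or removes Pods until the actual state matches (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # illustrative name
spec:
  replicas: 3            # desired state: three Pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web         # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25
```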
- Service
- Acts as a load balancer, distributing traffic across a set of Pods.
- You can specify which type of Service to use:
- ClusterIP: Exposes the Service on a cluster-internal IP.
- NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).
- LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer.
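A Service manifest sketch exposing the hypothetical `app: web` Pods; swapping the `type` field selects one of the three variants above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # illustrative name
spec:
  type: LoadBalancer     # or ClusterIP / NodePort
  selector:
    app: web             # traffic is balanced across Pods with this label
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # port the Pods listen on
```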
- Daemon Set
- A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
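A DaemonSet sketch for a per-node agent such as a log collector (names and image are illustrative); Kubernetes schedules exactly one copy of the Pod template onto each eligible node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent       # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16   # e.g. a per-node log collector
```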
- ConfigMaps
- ConfigMaps enable you to separate your configurations from your Pods and components, which helps keep your workloads portable.
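A sketch of the separation: configuration lives in a ConfigMap, and a Pod consumes it as environment variables without baking the values into the image (all names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config       # illustrative name
data:
  LOG_LEVEL: "info"      # plain key/value configuration
  DB_HOST: "db.internal"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: app-config   # injects the keys above as env vars
```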
GKE Sandbox
- Provides a second layer of security between containerized workloads and the host kernel on GKE nodes.
- GKE Sandbox uses gVisor, a userspace kernel that intercepts and handles system calls on behalf of sandboxed workloads.
- You cannot enable GKE Sandbox on a default node pool.
- When using Sandbox, you must have at least 2 node pools.
- It is not possible to use accelerators such as GPUs or TPUs in sandboxed node pools.
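Assuming a Sandbox-enabled node pool already exists (it cannot be the default pool, per the notes above), a Pod opts into gVisor through its RuntimeClass, as a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app        # illustrative name
spec:
  runtimeClassName: gvisor   # schedules the Pod onto GKE Sandbox nodes
  containers:
    - name: app
      image: nginx:1.25
```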
Pricing
Pricing for Cluster Management
- One zonal cluster (single-zone or multi-zonal) per billing account is free.
- The fee is flat, irrespective of cluster size and topology—whether it is a single-zone cluster, multi-zonal cluster or regional cluster, all accrue the same flat fee per cluster.
- Billing is computed on a per-second basis for each cluster. The total amount is rounded to the nearest cent at the end of each month.
- The fee does not apply to Anthos GKE clusters.
Pricing for Worker Nodes
- GKE uses Compute Engine instances for worker nodes in the cluster. You are billed for each of those instances according to Compute Engine’s pricing, until the nodes are deleted. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost.