Sunday, August 25, 2019

Azure Kubernetes Service (AKS)

Kubernetes is an open-source orchestration software for deploying, managing and scaling containers.

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complex. To manage this complexity, Kubernetes provides an open-source API that controls how and where those containers will run.

It also provides the ability to orchestrate a cluster of virtual machines and schedule containers to run on those virtual machines based on their available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit for Kubernetes, which can be scaled to your desired state.

Kubernetes helps manage service discovery, incorporate load balancing, track resource allocation, scale based on compute utilisation, check the health of individual resources and enable apps to self-heal by automatically restarting or replicating containers.
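
For example, once an application is deployed (say, as a Deployment), you can scale it from the command line and let Kubernetes replace any pods that fail. The commands below are a minimal sketch; the Deployment name my-app is a placeholder and the replica counts are arbitrary.

# Scale the (hypothetical) my-app Deployment out to 5 replicas.
kubectl scale deployment my-app --replicas=5

# Or let Kubernetes adjust the replica count based on CPU utilisation.
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

# If a pod crashes, the Deployment controller recreates it automatically;
# you can watch that happen with:
kubectl get pods --watch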

Key Concepts 

Pods

Think of a pea pod: it can hold one pea or many. If each pea is a container, then a pod is the wrapper that holds one application container (or, in some cases, several).
More formally, a pod is an encapsulation of an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of tightly coupled containers that share resources. A more detailed explanation is available at https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/.
One key point to remember is that pods are ephemeral: they are created, and at times they die. Any application that accesses pods directly will eventually fail when a pod dies. Instead, you should always interact with Services when trying to reach containers deployed within Kubernetes.
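
As a rough illustration, the commands below create and inspect a single-container pod directly; the name hello-pod and the nginx image are arbitrary examples, and in practice you would normally let a Deployment manage pods for you rather than creating them by hand.

# Create a pod running a single nginx container.
kubectl run hello-pod --image=nginx --restart=Never

# List pods, then inspect the pod's IP, containers and events.
kubectl get pods
kubectl describe pod hello-pod

# Delete it; because nothing manages this pod, it is not recreated.
kubectl delete pod hello-pod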

Services


Due to the ephemeral nature of pods, any application that accesses a pod directly will eventually suffer downtime (when the pod dies and another is created to replace it). To get around this, Kubernetes provides Services.
Think of a Service as an application load balancer: it provides a front end for your container and routes traffic to a pod running that container. Because your applications always connect to the Service (whose properties remain unchanged during its lifetime), they are shielded from pod deaths. For more information about Services, refer to https://kubernetes.io/docs/concepts/services-networking/service/.
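
A minimal sketch of this pattern, assuming nothing beyond a working cluster (the name hello-web, the nginx image and the port are placeholders): create a Deployment to manage the pods, expose it behind a Service, and connect to the Service rather than to any individual pod.

# Create a Deployment that manages the pods for us.
kubectl create deployment hello-web --image=nginx

# Put a Service in front of it; on AKS, type LoadBalancer provisions
# an Azure load balancer with a public IP.
kubectl expose deployment hello-web --port=80 --type=LoadBalancer

# The Service's address stays stable even as the pods behind it come and go.
kubectl get service hello-web --watch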

Namespaces

Namespaces provide a logical way of dividing up your Kubernetes cluster, which allows you to give different sets of users access to different resources. Namespaces also provide a scope for names: names must be unique within a namespace, but they do not need to be unique across namespaces. A more in-depth description can be found at https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/.
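
For instance (the namespace name team-a is just an example):

# Create a namespace and list the existing ones.
kubectl create namespace team-a
kubectl get namespaces

# Resources live in, and are queried from, a specific namespace.
kubectl get pods --namespace=team-a

# Optionally make team-a the default namespace for the current context.
kubectl config set-context --current --namespace=team-a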


Kubernetes Control Plane (master)

The Kubernetes master (a collection of processes) ensures the Kubernetes cluster is working as expected by maintaining the cluster's desired state.
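
In AKS the control plane is managed for you by Azure, so you never administer the master directly, but you can still see where it lives from kubectl:

# Show the address of the Kubernetes master and core cluster services.
kubectl cluster-info

# Show the kubectl client version and the control plane (server) version.
kubectl version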


Kubernetes Nodes

The nodes are where your containers and workloads actually run. Nodes can be virtual machines, physical machines, etc. The Kubernetes master controls each node.
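
Once the cluster created below is up, you can inspect its nodes; <NODE-NAME> is a placeholder for one of the names returned by the first command.

# List the nodes with their IP addresses, OS image and kubelet version.
kubectl get nodes -o wide

# Show a node's capacity, allocated resources and the pods running on it.
kubectl describe node <NODE-NAME>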


Using the Azure CLI to create a cluster

1. Log in:

az login

2. Select the subscription to use if you have more than one (optional step):
az account list --refresh --output table
az account set -s <YOUR-CHOSEN-SUBSCRIPTION-NAME>

3. Create a resource group:
az group create \
              --name=<RESOURCE-GROUP-NAME> \
              --location=centralus \
              --output table

4. Choose a cluster name and create a working directory for it:

mkdir <CLUSTER-NAME>
cd <CLUSTER-NAME>

5. Create an SSH key to secure the cluster:
ssh-keygen -f ssh-key-<CLUSTER-NAME>

6. Create the cluster:
az aks create --name <CLUSTER-NAME> \
              --resource-group <RESOURCE-GROUP-NAME> \
              --ssh-key-value ssh-key-<CLUSTER-NAME>.pub \
              --node-count 3 \
              --node-vm-size Standard_D2s_v3 \
              --output table
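
Cluster creation can take several minutes. If you want to check the provisioning state afterwards, something like the following should work (same placeholders as above):

az aks show \
              --name <CLUSTER-NAME> \
              --resource-group <RESOURCE-GROUP-NAME> \
              --output table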

7. If you're using the Azure CLI locally, install kubectl, a tool for accessing the Kubernetes API from the command line:

az aks install-cli

Note: kubectl is already installed in Azure Cloud Shell.

8. Get credentials from Azure so that kubectl can access the cluster:

az aks get-credentials \
             --name <CLUSTER-NAME> \
             --resource-group <RESOURCE-GROUP-NAME> \
             --output table
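
This merges the cluster's credentials into your local kubeconfig. As an optional sanity check, you can confirm that kubectl is now pointing at the new cluster:

kubectl config current-context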

9. Check that your cluster is fully functional:

kubectl get node

The response should list three running nodes and their Kubernetes versions, and each node should have a status of Ready.

Note: If you create the cluster using the Azure Portal, you must enable RBAC yourself; RBAC is enabled by default when using the command-line tools.
