
Lab 1: Create AKS Cluster

Prerequisites

  1. Azure Account

Instructions

  1. Log in to the Azure Portal at https://portal.azure.com.

  2. Open the Azure Cloud Shell

    Azure Cloud Shell

  3. The first time you start Cloud Shell, you will be prompted to create a storage account.

  4. Once your Cloud Shell has started, clone the workshop repo into the Cloud Shell environment

    git clone https://github.com/Azure/kubernetes-hackfest
    
    cd kubernetes-hackfest/labs/create-aks-cluster

    Note: In the cloud shell, you are automatically logged into your Azure subscription.

  5. Create a unique identifier suffix for resources to be created in this lab.

     export UNIQUE_SUFFIX=$USER$RANDOM

    *** Write the value to ~/.bashrc so it persists through the lab

    echo export UNIQUE_SUFFIX=$UNIQUE_SUFFIX >> ~/.bashrc

    *** Note this value; it will be used in the next couple of labs. The variable may reset if your shell times out, so PLEASE WRITE IT DOWN. ***
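For clarity, here is a sketch of how the suffix is composed; the user name shown is hypothetical (Cloud Shell sets $USER for you):

```shell
# Sketch: the suffix is just your user name plus a random number.
# "student1" is a hypothetical user name for illustration only.
USER=student1
UNIQUE_SUFFIX=$USER$RANDOM
echo "$UNIQUE_SUFFIX"
# After a Cloud Shell timeout, restore it in the new session with:
#   source ~/.bashrc
```

Because $RANDOM changes on every expansion, run this once and record the result rather than regenerating it later.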

  6. Create an Azure Resource Group in East US.

    export RGNAME=kubernetes-hackfest
    export LOCATION=eastus
    az group create -n $RGNAME -l $LOCATION 
  7. Create your AKS cluster in the resource group created above, with 3 nodes, targeting Kubernetes version 1.10.3, with Container Insights (monitoring) and HTTP application routing enabled.

     • Use a unique cluster name
    export CLUSTERNAME=aks-$UNIQUE_SUFFIX

    The below command can take 10-20 minutes to run as it is creating the AKS cluster. Please be PATIENT and grab a coffee...

    az aks create -n $CLUSTERNAME -g $RGNAME -k 1.10.3 \
    --generate-ssh-keys -l $LOCATION \
    --node-count 3 \
    --enable-addons http_application_routing,monitoring
  8. Verify your cluster status. The ProvisioningState should be Succeeded. (The output below is an example; your names and FQDN will differ.)

    az aks list -o table
    Name                 Location    ResourceGroup         KubernetesVersion    ProvisioningState    Fqdn
    -------------------  ----------  --------------------  -------------------  -------------------  -------------------------------------------------------------------
    ODLaks-v2-gbb-16502  centralus   ODL_aks-v2-gbb-16502  1.8.6                Succeeded            odlaks-v2--odlaks-v2-gbb-16-b23acc-17863579.hcp.centralus.azmk8s.io
  9. Get the Kubernetes config files for your new AKS cluster

    az aks get-credentials -n $CLUSTERNAME -g $RGNAME
  10. Verify you have API access to your new AKS cluster

    Note: It can take 5 minutes for your nodes to appear and be in READY state. You can run watch kubectl get nodes to monitor status.

    kubectl get nodes
     NAME                       STATUS    ROLES     AGE       VERSION
     aks-nodepool1-26522970-0   Ready     agent     33m       v1.10.3

    To see more details about your cluster:

    kubectl cluster-info
    Kubernetes master is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443
    
    addon-http-application-routing-default-http-backend is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/addon-http-application-routing-default-http-backend/proxy
    
    addon-http-application-routing-nginx-ingress is running at http://168.62.191.18:80 http://168.62.191.18:443
    
    Heapster is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
    
    KubeDNS is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    kubernetes-dashboard is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy 

    You should now have a Kubernetes cluster running with 3 nodes. You do not see the master servers for the cluster because these are managed by Microsoft. The control plane services that manage the Kubernetes cluster, such as scheduling, API access, the configuration data store, and object controllers, are all provided as services to the nodes.

Troubleshooting / Debugging

To further debug and diagnose cluster problems, use the kubectl cluster-info dump command.

cluster-info dump writes out cluster information suitable for debugging and diagnosing cluster problems. By default, it dumps everything to stdout. You can optionally specify a directory with --output-directory; if you do, kubectl will build a set of files in that directory. By default it dumps only the kube-system namespace, but you can switch to a different namespace with the --namespaces flag, or specify --all-namespaces to dump all namespaces.

The command also dumps the logs of all of the pods in the cluster; these logs are written into different directories based on namespace and pod name.

kubectl cluster-info dump
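For example, to capture a dump of the kube-system namespace to disk (the directory name here is arbitrary; the kubectl call only succeeds when a cluster is reachable):

```shell
# Dump the kube-system namespace to a local directory for offline inspection.
# The fallback message covers runs without a reachable cluster.
DUMP_DIR=./cluster-dump
mkdir -p "$DUMP_DIR"
kubectl cluster-info dump --namespaces kube-system --output-directory="$DUMP_DIR" \
  2>/dev/null || echo "no cluster reachable"
```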

Docs / References

Troubleshoot Kubernetes Clusters

Lab 1a: Create AKS Cluster Namespaces

This lab creates namespaces that reflect a representative example of an organization's environments, in this case DEV, UAT, and PROD. We will also apply the appropriate permissions, limits, and resource quotas to each of the namespaces.

Prerequisites

  1. Build AKS Cluster (from above)

Instructions

  1. Create Three Namespaces

    # Create namespaces
    kubectl apply -f create-namespaces.yaml
    
    # Look at namespaces
    kubectl get ns
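The repo supplies create-namespaces.yaml for you; a minimal sketch of what such a manifest looks like (the repo's actual file may differ):

```yaml
# Hypothetical sketch of a namespace manifest; the repo's actual
# create-namespaces.yaml may differ.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: uat
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```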
  2. Assign CPU, Memory and Storage Limits to Namespaces

    # Create namespace limits
    kubectl apply -f namespace-limitranges.yaml
    
    # Get list of namespaces and drill into one
    kubectl get ns
    kubectl describe ns uat
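A LimitRange constrains and defaults per-container resources within a namespace. A sketch with illustrative values (not the repo's actual namespace-limitranges.yaml) that would explain the behavior seen in step 4:

```yaml
# Hypothetical LimitRange for the dev namespace; values are illustrative.
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    min:
      cpu: 250m        # a 100m CPU request would be rejected as below this minimum
      memory: 128Mi
    defaultRequest:
      cpu: 250m        # applied when a container specifies no requests
      memory: 128Mi
    default:
      cpu: 500m        # applied when a container specifies no limits
      memory: 256Mi
```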
  3. Assign CPU, Memory and Storage Quotas to Namespaces

    # Create namespace quotas
    kubectl apply -f namespace-quotas.yaml
    
    # Get list of namespaces and drill into one
    kubectl get ns
    kubectl describe ns dev
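A ResourceQuota caps aggregate resource consumption across a namespace. A sketch with illustrative values (not the repo's actual namespace-quotas.yaml) consistent with the quota tests in step 4:

```yaml
# Hypothetical ResourceQuota for the dev namespace; values are illustrative.
# With ~128Mi already requested by a running pod, a new 1Gi request would
# exceed requests.memory, while a 512Mi request would still fit.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```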
  4. Test out Limits and Quotas in dev Namespace

    # Test Limits - forbidden because the CPU request is below the namespace minimum
    kubectl run nginx-limittest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=100m,memory=256Mi' -n dev
    # Test Limits - passes because defaults within the limit range are applied automatically
    kubectl run nginx-limittest --image=nginx --restart=Never --replicas=1 --port=80 -n dev
    # Check running pod and dev Namespace Allocations
    kubectl get po -n dev
    kubectl describe ns dev
    # Test Quotas - Forbidden due to memory quota exceeded
    kubectl run nginx-quotatest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=500m,memory=1Gi' -n dev
    # Test Quotas - Pass due to memory within quota
    kubectl run nginx-quotatest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=500m,memory=512Mi' -n dev
    # Check running pod and dev Namespace Allocations
    kubectl get po -n dev
    kubectl describe ns dev
  5. Clean up limits and quotas

    kubectl delete -f namespace-limitranges.yaml
    kubectl delete -f namespace-quotas.yaml
    
    kubectl describe ns dev
    kubectl describe ns uat
    kubectl describe ns prod
    

Troubleshooting / Debugging

  • The limits and quotas of a namespace can be found with the kubectl describe ns <namespace> command, which also shows the current allocations.
  • If pods are not deploying then check to make sure that CPU, Memory and Storage amounts are within the limits and do not exceed the overall quota of the namespace.
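Quota and limit rejections also surface in the namespace's events; a guarded sketch (prints a fallback note when no cluster is reachable):

```shell
# Look for quota/limit rejections among recent events in the dev namespace.
EVENTS=$(kubectl get events -n dev 2>/dev/null | grep -iE 'exceeded|quota|limit'; true)
echo "${EVENTS:-no quota/limit events found (or no cluster reachable)}"
```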

Docs / References