Kubernetes-based Setup on EKS
  • 11 Sep 2023


About this Article
This article provides environment setup steps for deploying VSP on a Kubernetes engine on AWS EKS.


Pre-requisites

The pre-requisites for VSP installation are:

  1. EC2 machine with:
    1. kubectl
    2. helm
    3. awscli
    4. docker

The pre-requisites for Kubernetes engine deployment on AWS EKS are:

  1. AWS IAM User account with the following policies attached:
    1. AmazonEC2FullAccess
    2. AmazonEKSFullAccess


EKS Setup Architecture


EC2 Machine Creation

  1. Access the AWS Dashboard: https://console.aws.amazon.com/ec2 using valid credentials
  2. Navigate to EC2 > Instances > Launch an instance. Provide an appropriate Name
  3. Under the Quick Start tab, select Ubuntu
  4. Select Instance type as required for CMS. Example: t2.medium
  5. Select the Number of instances as 1
  6. Click Create new key pair if a new key pair is required for SSH authentication
  7. Configure the Storage information as required and click Launch Instance
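
The client machine can also be launched with the AWS CLI from any machine that already has credentials configured. The commands below are a minimal sketch of the console steps above; the AMI ID, key pair name, subnet ID, and security group ID are placeholders and must be replaced with values from your own account.

  # Launch a single Ubuntu instance to act as the client machine (placeholder IDs)
  aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.medium \
    --count 1 \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=vsp-client}]'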


Installation

  1. Log in to the client machine created, using valid credentials
  2. Execute the commands below to install kubectl, awscli and docker
    1. kubectl Installation
      sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      sudo apt-get update
      sudo apt-get install -y kubectl
      
    2. awscli Installation
      sudo apt-get update
      sudo apt-get install awscli -y
    3. docker Installation
      sudo apt-get update
      sudo apt-get install docker.io -y
      docker -v
      
  3. To procure the Access Key and Secret Access Key
    1. On the AWS console, navigate to My Security Credentials
    2. Click Create access key
    3. Log in to the client machine and configure the AWS credentials using the command below. Provide the access key, secret access key, and region when prompted
      aws configure
  4. To create the EKS cluster on the AWS Console, first create an IAM role for the EKS control plane
    1. Navigate to IAM > Create role. Select the required service
    2. Navigate to AWS Service > EKS
    3. Navigate to EKS Cluster > Permissions
    4. Select AmazonEKSClusterPolicy and click Tags
    5. On the Review page, provide a role name. Click Create role
  5. Create a new role for worker node group
    1. Navigate to IAM > Create role. Select the required service
    2. Navigate to the required AWS Service > EC2. Click Permissions
    3. Attach the policies - AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly.
    4. Click Tags. On the Review page, provide a Role name. Click Create role
    5. Click Cluster > Create Cluster
    6. Provide the Name, Kubernetes Version and Cluster Service role. Click Next
    7. Select the VPC, Subnets, Security Groups and Public Access settings. Complete the remaining steps to create the Cluster with the default values
    8. Once the EKS cluster becomes Active, click on cluster name
    9. Click on Configuration tab. Click Compute > Add node group
    10. Provide Name and IAM Role. Click Next
    11. Select the required instance type. Click Create
    12. Select the required Subnets and SSH key pairs. Create the Node group (an illustrative AWS CLI equivalent of the cluster and node group creation is sketched after this list)
  6. Once the Node Group is created, log in to the client machine
  7. Fetch the kubeconfig file from the Control plane using the commands below. Replace the region and cluster name (gitlab in this example) as required:
    aws eks --region us-east-1 update-kubeconfig --name gitlab
    kubectl get nodes  # Should list 2 machines in Ready state
    
  8. Install Helm 2 or Helm 3
    1. Helm 2 Installation:
      wget https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz 
      tar -zxvf  helm-v2.16.12-linux-amd64.tar.gz
      cd linux-amd64/
      sudo mv helm /usr/local/bin
      helm init --stable-repo-url https://charts.helm.sh/stable   # Initialises Helm and creates the Tiller Pod
      helm version
      #To Create Service Account and assign permission to the Tiller pod, so that it deploys workload on Kubernetes
      kubectl create sa tiller --namespace kube-system
      kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
      kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
      kubectl get all --all-namespaces | grep tiller #Verification
      
    2. Helm 3 Installation: Tiller Pod is not required for Helm 3
      wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
      tar -zxvf  helm-v3.5.2-linux-amd64.tar.gz
      cd linux-amd64/
      sudo mv helm /usr/local/bin
      helm version
      
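For reference, once aws configure has been run on the client machine, the cluster and node group created through the console in steps 4 and 5 can also be created with the AWS CLI. The commands below are an illustrative sketch only: the account ID, role names, subnet IDs, security group ID, key pair name and Kubernetes version are placeholders, and the cluster name gitlab matches the example used elsewhere in this article.

  # Confirm that the configured credentials are valid
  aws sts get-caller-identity

  # Create the EKS control plane using the cluster role from step 4 (placeholder ARN, IDs and version)
  aws eks create-cluster \
    --name gitlab \
    --kubernetes-version 1.21 \
    --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
    --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

  # Wait for the control plane to become Active, then create the managed node group with the worker role from step 5
  aws eks wait cluster-active --name gitlab
  aws eks create-nodegroup \
    --cluster-name gitlab \
    --nodegroup-name gitlab-workers \
    --node-role arn:aws:iam::111122223333:role/eks-worker-role \
    --subnets subnet-aaaa subnet-bbbb \
    --instance-types t3.medium \
    --scaling-config minSize=2,maxSize=2,desiredSize=2 \
    --remote-access ec2SshKey=my-key-pair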

Delete Node Group or Fargate Profile

  1. Navigate to EKS > Clusters. Click the cluster name (gitlab in this example)
  2. Navigate to Configuration > Compute. Delete the node group or Fargate profile before deleting the cluster; the EKS cluster cannot be deleted until the node group and Fargate profile are removed (an illustrative AWS CLI equivalent is sketched below)
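
The same cleanup can be done from the client machine with the AWS CLI. This is a minimal sketch using the example cluster name gitlab; the node group name is a placeholder.

  # Delete the managed node group first; the cluster cannot be deleted while it still exists
  aws eks delete-nodegroup --cluster-name gitlab --nodegroup-name gitlab-workers
  aws eks wait nodegroup-deleted --cluster-name gitlab --nodegroup-name gitlab-workers

  # Then delete the EKS cluster itself
  aws eks delete-cluster --name gitlab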


