INSTALLATION
-
Log in to the client machine using valid credentials
-
Execute the commands below to install kubectl, awscli, docker and helm
-
sudo apt update
-
kubectl Installation:
-
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl
-
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
-
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
-
sudo apt-get update
-
sudo apt-get install -y kubectl
-
-
awscli Installation:
-
sudo apt-get install awscli -y
-
-
docker Installation:
-
sudo apt update
-
sudo apt install docker.io -y
-
docker -v
-
-
-
Procure Access Key and Secret Access Key
-
On the AWS console, navigate to My Security Credentials
-
Click Create access key
-
Log in to the client machine and configure the AWS access key and secret access key using the command below:
-
aws configure
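aws configure prompts for the access key, secret access key, default region and output format, and saves them under the home directory. A sketch of the resulting files, with illustrative values only:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = us-east-1
output = json
```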
-
-
-
On the AWS console, create an EKS cluster
-
Create an IAM role for EKS control plane
-
Navigate to IAM > Create role. Select the required service
-
Navigate to AWS Service > EKS
-
Navigate to EKS Cluster > Permissions
-
Verify that AmazonEKSClusterPolicy is attached, then click Tags
-
On the Review page, provide a role name. Click Create role
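For reference, the role created above trusts the EKS service. A sketch of the trust policy the console generates behind the scenes (worker-node roles trust ec2.amazonaws.com instead):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```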
-
-
Create a new role for worker node group
-
Navigate to IAM > Create role. Select the required service
-
Navigate to the required AWS Service > EC2 (worker nodes run as EC2 instances). Click Permissions
-
Attach the policies - AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly.
-
Click Tags. On the Review page, provide a Role name. Click Create role
-
Click Cluster > Create Cluster
-
Provide the Name, Kubernetes Version and Cluster Service role. Click Next
-
Select the VPC, Subnets, Security Groups and Public Access. Complete the remaining steps with the default values to create the Cluster
-
Once the EKS cluster becomes Active, click on cluster name
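Cluster status can also be checked from the client machine instead of the console. A minimal sketch with the aws call stubbed out so it is self-contained; on the real client, delete the stub and the genuine AWS CLI answers:

```shell
#!/bin/sh
# Stub standing in for the real AWS CLI, for illustration only.
aws() { echo "ACTIVE"; }

# Query the cluster status (these aws eks flags are the real ones).
cluster_status() {
  aws eks describe-cluster --name "$1" --query cluster.status --output text
}

cluster_status gitlab
```

Once the command prints ACTIVE, proceed to add the node group.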
-
Click on Configuration tab. Click Compute > Add node group
-
Provide Name and IAM Role. Click Next
-
Select the required instance type. Click Next
-
Select the required Subnets and SSH keypair. Click Next, then Create to create the Node group
-
-
Once the Node Group is created, log in to the client machine
-
Fetch the kubeconfig file from the Control plane using the commands below:
-
aws eks --region us-east-1 update-kubeconfig --name gitlab
-
kubectl get nodes
-
The above command lists the worker nodes (two in this setup) in Ready state
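The fetch command is the same shape for any cluster. A hypothetical helper (the name eks_kubeconfig_cmd is not from this guide) that builds it for a given region and cluster name:

```shell
#!/bin/sh
# Build the kubeconfig-fetch command for a region/cluster pair.
# The guide uses region us-east-1 and a cluster named gitlab.
eks_kubeconfig_cmd() {
  printf 'aws eks --region %s update-kubeconfig --name %s\n' "$1" "$2"
}

eks_kubeconfig_cmd us-east-1 gitlab
```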
-
-
-
Install either Helm 2 OR Helm 3
-
Helm 2 Installation:
-
-
wget https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz
-
tar -zxvf helm-v2.16.12-linux-amd64.tar.gz
-
cd linux-amd64/
-
sudo mv helm /usr/local/bin
-
Create the Tiller Pod (the explicit repository URL is needed because the old default stable repository has been retired):
-
helm init --stable-repo-url https://charts.helm.sh/stable
-
helm version
-
Create a Service Account and grant it permissions so that the Tiller pod can deploy workloads on Kubernetes. Execute the commands below:
-
kubectl create sa tiller --namespace kube-system
-
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
-
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
-
Verification:
kubectl get all --all-namespaces | grep tiller
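The patch applied above embeds the service account name in a JSON snippet. A hypothetical helper that renders the same patch for any account name:

```shell
#!/bin/sh
# Render the serviceAccount patch used in the kubectl patch command above.
tiller_sa_patch() {
  printf '{"spec":{"template":{"spec":{"serviceAccount":"%s"}}}}' "$1"
}

tiller_sa_patch tiller
```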
-
-
-
Helm 3 Installation: Tiller Pod is not required for Helm 3
-
-
wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
-
tar -zxvf helm-v3.5.2-linux-amd64.tar.gz
-
cd linux-amd64/
-
sudo mv helm /usr/local/bin
-
helm version
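Both Helm tarball names above follow the standard get.helm.sh naming scheme, so the download URL can be derived from a version string with a hypothetical helper:

```shell
#!/bin/sh
# Build the official Helm release tarball URL for a given version.
helm_tarball_url() {
  printf 'https://get.helm.sh/helm-v%s-linux-amd64.tar.gz\n' "$1"
}

helm_tarball_url 3.5.2
```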
-