AWS EKS Chapter-1

ABHISHEK KUMAR
Jun 21, 2022


Create and Delete an EKS Cluster & Node Groups

Step-01: Create EKS Cluster using eksctl

  • It will take 15 to 20 minutes to create the Cluster Control Plane
# Create Cluster
eksctl create cluster --name=eksdemo1 \
--region=us-east-1 \
--zones=us-east-1a,us-east-1b \
--without-nodegroup
# Get List of clusters
eksctl get cluster
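
While the cluster is being created, the control plane status can also be polled from the AWS CLI (a quick sketch, assuming the AWS CLI is installed and configured for the same account and region):
# Check control plane status (shows CREATING, then ACTIVE once ready)
aws eks describe-cluster --name eksdemo1 --region us-east-1 --query cluster.status --output text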

Step-02: Create & Associate IAM OIDC Provider for our EKS Cluster

  • To enable AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create and associate an IAM OIDC identity provider with it.
  • To do so with eksctl, we can use the command below.
  • Use the latest eksctl version (at the time of writing, the latest version was 0.21.0).
# Template
eksctl utils associate-iam-oidc-provider \
--region region-code \
--cluster <cluster-name> \
--approve
# Replace with region & cluster name
eksctl utils associate-iam-oidc-provider \
--region us-east-1 \
--cluster eksdemo1 \
--approve
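
To confirm the association succeeded, we can list the OIDC providers in the account; one entry should end with our cluster's OIDC issuer ID (this uses the AWS CLI rather than eksctl):
# List IAM OIDC providers in the account
aws iam list-open-id-connect-providers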

Step-03: Create EC2 Keypair

  • Create a new EC2 Keypair with the name kube-demo (via the EC2 console, or from the CLI as sketched below)
  • We will use this keypair when creating the EKS Node Group.
  • It will allow us to log in to the EKS worker nodes from a terminal over SSH.
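The keypair can be created in the EC2 console (Services -> EC2 -> Key Pairs). As an optional sketch, the same thing from the AWS CLI looks like this (assumes the AWS CLI is configured for us-east-1; the kube-demo.pem file name is just an example):
# Create the kube-demo keypair and save the private key locally
aws ec2 create-key-pair --key-name kube-demo --query 'KeyMaterial' --output text > kube-demo.pem
chmod 400 kube-demo.pem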

Step-04: Create Node Group with additional Add-Ons in Public Subnets

  • These add-on flags will automatically attach the respective IAM policies to our Node Group IAM role.
# Create Public Node Group   
eksctl create nodegroup --cluster=eksdemo1 \
--region=us-east-1 \
--name=eksdemo1-ng-public1 \
--node-type=t3.medium \
--nodes=2 \
--nodes-min=2 \
--nodes-max=4 \
--node-volume-size=20 \
--ssh-access \
--ssh-public-key=kube-demo \
--managed \
--asg-access \
--external-dns-access \
--full-ecr-access \
--appmesh-access \
--alb-ingress-access

Step-05: Verify Cluster & Nodes

Verify NodeGroup subnets to confirm EC2 instances are in public subnets

  • Verify the node group subnets to ensure they were created in the public subnets (a CLI alternative is sketched after this list)
  • Go to Services -> EKS -> eksdemo1 -> eksdemo1-ng-public1
  • Click on the Associated subnet in the Details tab
  • Click on the Route Table tab
  • We should see the internet route via the Internet Gateway (0.0.0.0/0 -> igw-xxxxxxxx)
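The same check can be done from the command line (a sketch using the AWS CLI; <subnet-id> is a placeholder for one of the subnet IDs returned by the first command):
# List the subnets attached to the node group
aws eks describe-nodegroup --cluster-name eksdemo1 --nodegroup-name eksdemo1-ng-public1 \
--query 'nodegroup.subnets' --output text
# Inspect the route table associated with one of those subnets; look for 0.0.0.0/0 -> igw-xxxxxxxx
# (if nothing is returned, the subnet is using the VPC's main route table)
aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=<subnet-id>" \
--query 'RouteTables[].Routes' --output table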

Verify Cluster, NodeGroup in EKS Management Console

  • Go to Services -> Elastic Kubernetes Service -> eksdemo1

List Worker Nodes

# List EKS clusters
eksctl get cluster
# List NodeGroups in a cluster
eksctl get nodegroup --cluster=<clusterName>
# List nodes in the current Kubernetes cluster
kubectl get nodes -o wide
# Our kubectl context should have been switched to the new cluster automatically
kubectl config view --minify

Verify Worker Node IAM Role and list of Policies

  • Go to Services -> EC2 -> Worker Nodes
  • Click on the IAM Role associated with the EC2 worker nodes to view its attached policies (these can also be listed from the CLI, as sketched below)
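As a CLI alternative, the attached policies can be listed directly (<nodegroup-role-name> is a placeholder for the role name shown in the console, typically starting with eksctl-eksdemo1-nodegroup-):
# List managed policies attached to the node group IAM role
aws iam list-attached-role-policies --role-name <nodegroup-role-name>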

Verify Security Group Associated to Worker Nodes

  • Go to Services -> EC2 -> Worker Nodes
  • Click on the Security Group associated with the EC2 instance that contains remote in its name (it can also be listed from the CLI, as sketched below).
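The same information can be pulled from the CLI via the tags that EKS managed node groups apply to their instances (a sketch; assumes the default eks:cluster-name tag is present):
# List security groups attached to the worker node instances
aws ec2 describe-instances \
--filters "Name=tag:eks:cluster-name,Values=eksdemo1" \
--query 'Reservations[].Instances[].SecurityGroups' --output table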

Verify CloudFormation Stacks

  • Verify Control Plane Stack & Events
  • Verify NodeGroup Stack & Events (both stacks can also be described with eksctl, as shown below)
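Both stacks can also be printed without opening the console, using a standard eksctl utility command:
# Describe the CloudFormation stacks eksctl created for this cluster
eksctl utils describe-stacks --region=us-east-1 --cluster=eksdemo1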

Delete EKS Cluster & Node Groups

Step-06: Delete Node Group

  • We can delete a node group separately using the eksctl delete nodegroup command shown below.
# List EKS Clusters
eksctl get clusters
# Capture Node Group name
eksctl get nodegroup --cluster=<clusterName>
eksctl get nodegroup --cluster=eksdemo1
# Delete Node Group
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
eksctl delete nodegroup --cluster=eksdemo1 --name=eksdemo1-ng-public1

Step-07: Delete Cluster

  • We can delete the cluster using the eksctl delete cluster command shown below.
# Delete Cluster
eksctl delete cluster <clusterName>
eksctl delete cluster eksdemo1
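
Once the delete finishes (it can take several minutes), a quick check confirms nothing is left behind:
# Should report no clusters for the region
eksctl get cluster --region=us-east-1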
