Real-time End-To-End DevOps project: Deploying an EKS Cluster with Terraform and Jenkins

Efficient EKS Deployment: Terraform and Jenkins in Action

Sayantan Samanta
20 min read · Nov 4, 2023

Today’s plan

We will write the Terraform code and push it to an SCM tool, in my case GitHub. That push will trigger a Jenkins pipeline, which will deploy the changes to the AWS cloud platform and create the EKS cluster with the help of Terraform.

Prerequisites

  1. Understanding of Terraform
  2. Understanding of Kubernetes
  3. AWS account
  4. Terraform Installation
  5. Access Keys

The Workflow

  1. Create an EC2 instance + deploy Jenkins using Terraform
  2. Write Terraform code for the EKS cluster
  3. Push the code to GitHub
  4. Create a Jenkins Pipeline → Deploy EKS cluster
  5. Deploy the changes to AWS.
  6. Apply a deployment file → using kubectl → Nginx server → accessible to the public.

Configure AWS CLI

Check my blog for creating the access key and secret key:

https://sayantansamanta098.medium.com/configure-jenkins-server-using-terraform-f78aac234187

After retrieving both keys, run the `aws configure` command and provide the keys and the region.
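For reference, the interactive prompt looks like this (the region shown is the one used throughout this post; the keys are your own):

```
$ aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: ap-south-1
Default output format [None]: json
```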

GitHub

Configure S3 as Remote Backend and Terraform state Locking using DynamoDB

You must have an S3 bucket from the start.

This Terraform configuration file sets up an AWS provider in the “ap-south-1” region and creates two AWS resources: an S3 bucket and a DynamoDB table.

1. Provider Configuration: I have configured the AWS provider to operate in the “ap-south-1” region. This sets the default region for the AWS resources you create in subsequent Terraform files.

2. S3 Bucket Creation: I am creating an S3 bucket with the name “sayantan-cicd-tf-eks”.

3. DynamoDB Table Creation: I am creating a DynamoDB table named “terraform-lock.”

This configuration file is typically meant to be run before other `.tf` files that create AWS resources and use the “ap-south-1” region as a default setting for the AWS provider. The S3 bucket and DynamoDB table can be used for various purposes, such as storing Terraform state files and managing locking mechanisms for concurrent Terraform executions.
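A minimal sketch of such a file, assuming the bucket and table names mentioned above; the resource labels and the PAY_PER_REQUEST billing mode are my assumptions, and the LockID hash key is what Terraform expects for state locking:

```hcl
provider "aws" {
  region = "ap-south-1"
}

# S3 bucket that will hold the remote Terraform state
resource "aws_s3_bucket" "tf_state" {
  bucket = "sayantan-cicd-tf-eks"
}

# DynamoDB table used by Terraform for state locking
resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # Terraform requires exactly this attribute name

  attribute {
    name = "LockID"
    type = "S"
  }
}
```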

Configure the Remote Backend

The provided Terraform configuration block is used to configure a backend for my Terraform project. In this case, it’s configuring the S3 backend.

On a special note:

Don’t use variables here; give the values directly (Terraform does not allow variables inside the backend block).

I have already created the S3 bucket and DynamoDB table, so just pass the necessary attributes for them.

Backend Configuration: It specifies the use of the S3 backend for storing and managing my Terraform state.
The `bucket` attribute specifies the S3 bucket where the Terraform state file will be stored. In this case, it’s set to “sayantan-cicd-tf-eks”, which should match the bucket name where my Terraform state is stored.
The `key` attribute specifies the path and name of the Terraform state file within the S3 bucket. It’s set to “jenkins/terraform.tfstate”.
The `region` attribute sets the AWS region where the S3 bucket is located. It’s set to “ap-south-1”
The `encrypt` attribute is set to “true”, which means that the Terraform state stored in S3 will be encrypted.
The `dynamodb_table` attribute specifies the DynamoDB table to use for state locking and consistency. It’s set to “terraform-lock”, which should match the DynamoDB table name defined in my previous Terraform configuration.

This backend configuration is crucial for managing the Terraform state in a collaborative or production environment. It ensures that the state is stored securely, and it can also provide locking to prevent concurrent state modifications.
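Putting the attributes described above together, the backend block looks roughly like this:

```hcl
terraform {
  backend "s3" {
    bucket         = "sayantan-cicd-tf-eks"      # bucket created earlier
    key            = "jenkins/terraform.tfstate" # path of the state file inside the bucket
    region         = "ap-south-1"
    encrypt        = true
    dynamodb_table = "terraform-lock"            # table used for state locking
  }
}
```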

Go to the workspace for creating the S3 bucket and DynamoDB table, and run the apply.

Two resources are going to be created.

Check the AWS web console.

All good till now.

Create an EC2 instance

Configure the data source.

data.tf

One data source is for the AMI; the AMI already exists in AWS, and we just need to fetch it.

The other is for the availability zones.
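A sketch of the two data sources; the AMI filter below assumes an Ubuntu 22.04 image published by Canonical, so substitute whatever image you actually use:

```hcl
# Look up an existing AMI to launch the Jenkins instance from
# (assumption: Ubuntu 22.04 from Canonical, owner 099720109477)
data "aws_ami" "ami" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Availability zones in the configured region
data "aws_availability_zones" "azs" {}
```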

Create a VPC for my EC2 instance.

Official docs for VPC modules

I will make some changes to it.

main.tf

In this Terraform configuration module “vpc,” a Virtual Private Cloud (VPC) is being created in AWS for hosting a Jenkins server. Here’s a summary of what’s happening in this module:

Source: The module uses the “terraform-aws-modules/vpc/aws” module to define the VPC.

VPC Name and CIDR: It provides a name for the VPC and its CIDR (Classless Inter-Domain Routing) block, specifying the IP address range for the VPC.

Availability Zones (AZs): The module uses data from AWS to dynamically fetch the names of available availability zones (AZs) in the region.

Public Subnets: It specifies the public subnets to be created within the VPC, indicating that instances launched in these subnets should have public IP addresses and be accessible from the internet.

Enable DNS Hostnames: DNS hostnames are enabled for instances within the VPC.

Tags: The VPC and public subnets are tagged for better organization and identification. Tags include “Name,” “Terraform,” and “Environment” labels.

Public Subnet Tags: The public subnets are further tagged with a “Name” label for clear identification.

In summary, this module creates a VPC in AWS with specified public subnets across available availability zones, allowing instances launched in these subnets to have public IP addresses and enabling DNS hostnames. It also applies appropriate tags for organization and identification.
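A sketch of the module block described above; the VPC name, tag values, and variable names (`var.vpc_cidr`, `var.public_subnets`) are assumptions that should match your variables.tf and terraform.tfvars:

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "jenkins-vpc"
  cidr = var.vpc_cidr

  azs            = data.aws_availability_zones.azs.names
  public_subnets = var.public_subnets

  map_public_ip_on_launch = true # instances in these subnets get public IPs
  enable_dns_hostnames    = true

  tags = {
    Name        = "jenkins-vpc"
    Terraform   = "true"
    Environment = "dev"
  }

  public_subnet_tags = {
    Name = "jenkins-subnet"
  }
}
```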

variables.tf

terraform.tfvars

This much is enough to create a VPC.

Let's check which resources will get created.

Now apply the code.

Now we will create the security group.

Security Group

Official doc

I will add it and make changes in main.tf.

main.tf

The Terraform configuration module “sg” is being used to create a security group for a Jenkins server in an AWS VPC. Here’s a summary of what’s happening in this module:

Source: The module uses the “terraform-aws-modules/security-group/aws” module to define the security group.

Name and Description: It provides a name and description for the security group, making it identifiable and describing its purpose.

VPC Association: The security group is associated with a VPC (Virtual Private Cloud) using the VPC ID obtained from another module called “vpc.”

Ingress Rules: It defines ingress (inbound) rules for port 8080 (HTTP) and port 22 (SSH) to allow traffic from any source (0.0.0.0/0). This enables incoming web and SSH access to the Jenkins server.

Egress Rules: It specifies egress (outbound) rules allowing all outbound traffic to any destination.

Tags: The security group is tagged with a “Name” to make it identifiable and for organizational purposes.

In summary, this module sets up a security group with specific ingress and egress rules to control traffic to and from a Jenkins server within an AWS VPC. The security group allows HTTP and SSH access from any source and permits all outbound traffic.
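A sketch of the security group module following the rules described above (the description strings and rule ordering are mine):

```hcl
module "sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "jenkins-sg"
  description = "Security group for the Jenkins server"
  vpc_id      = module.vpc.vpc_id

  # Inbound: Jenkins UI on 8080 and SSH on 22, open to the world
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8080
      protocol    = "tcp"
      description = "Jenkins web UI"
      cidr_blocks = "0.0.0.0/0"
    },
    {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      description = "SSH"
      cidr_blocks = "0.0.0.0/0"
    },
  ]

  # Outbound: allow everything
  egress_with_cidr_blocks = [
    {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      description = "All outbound traffic"
      cidr_blocks = "0.0.0.0/0"
    },
  ]

  tags = {
    Name = "jenkins-sg"
  }
}
```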

We have used a new module, so run `terraform init` again to download it.

Check the plan and apply.

Now check in the AWS console.

We can see the security group named “jenkins-sg” is created, and it has the desired inbound and outbound rules as well.

All good till now.

Now create the EC2 instance

First, create a key pair

Official doc

I will make changes to it.

variables.tf

terraform.tfvars
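The exact key-pair code isn't shown here, so below is a minimal sketch using a plain `aws_key_pair` resource; the key name matches what the EC2 module references later, while the public-key path is a placeholder for a key you generate locally (for example with `ssh-keygen`):

```hcl
resource "aws_key_pair" "jenkins_key" {
  key_name   = "jenkins-server-key"           # referenced by the EC2 module below
  public_key = file("~/.ssh/jenkins-key.pub") # assumed path to your local public key
}
```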

A Big Correction (updated)

The reason behind it: while using the pipeline script to validate the EKS code with Terraform (and even while running terraform plan), the following error occurred in my case.

I later switched to t2.small, and every job build ran smoothly.

I will come back to this part later, but don't forget to use t2.small from the start.

main.tf

In this Terraform configuration module “ec2_instance,” an EC2 instance for a Jenkins server is being created in AWS. Here’s a summary of what’s happening in this module:

  • Source: The module uses the “terraform-aws-modules/ec2-instance/aws” module to define the EC2 instance.
  • Instance Name: It specifies a name for the EC2 instance.
  • Instance Type: The instance type (e.g., t2.small here) is defined to specify the compute and memory capacity.
  • Key Pair: It associates the EC2 instance with a key pair named “jenkins-server-key,” allowing SSH access.
  • Monitoring: It enables instance monitoring for detailed performance data.
  • Security Group: The security group for the EC2 instance is defined, which is obtained from a module named “sg” created earlier.
  • Subnet: The instance is launched in the first public subnet of the VPC, which was created in a previous module.
  • Public IP: The instance is configured to have a public IP address to enable external access.
  • User Data: The EC2 instance is configured with user data by executing a script named “jenkins-install.sh” during instance launch. This script is used for Jenkins server setup. We have to create it in our current workspace.
  • Availability Zone: The instance is placed in the first available availability zone.

data.aws_availability_zones.azs.names returns a list, and we want to launch our instance in the first AZ:

data.aws_availability_zones.azs.names[0]

  • Tags: Tags are applied for better organization and identification, including “Name,” “Terraform,” and “Environment” labels.

In summary, this module sets up an EC2 instance for a Jenkins server, specifying its configuration, security, and networking settings, as well as executing custom user data during launch. Appropriate tags are applied for organization and identification.
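Assembled from the points above, a sketch of the module block (the instance name, tag values, and the `var.instance_type` variable are assumptions):

```hcl
module "ec2_instance" {
  source = "terraform-aws-modules/ec2-instance/aws"

  name = "jenkins-server"

  instance_type               = var.instance_type # t2.small, as per the correction above
  key_name                    = "jenkins-server-key"
  monitoring                  = true
  vpc_security_group_ids      = [module.sg.security_group_id]
  subnet_id                   = module.vpc.public_subnets[0]
  associate_public_ip_address = true
  availability_zone           = data.aws_availability_zones.azs.names[0]

  # Bootstrap Jenkins on first boot
  user_data = file("jenkins-install.sh")

  tags = {
    Name        = "jenkins-server"
    Terraform   = "true"
    Environment = "dev"
  }
}
```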

Create the script to be used as the user data.

jenkins-install.sh
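The exact script isn't reproduced here, so below is one possible version, assuming the Ubuntu AMI from the data source earlier. It installs Java and Jenkins plus Terraform, kubectl, and the AWS CLI, which the pipeline stages later in this post rely on:

```bash
#!/bin/bash
# Jenkins server bootstrap (assumes an Ubuntu AMI)
sudo apt-get update -y
sudo apt-get install -y openjdk-17-jre unzip gnupg lsb-release

# Jenkins from the official Debian repository
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | \
  sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
  sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update -y
sudo apt-get install -y jenkins
sudo systemctl enable --now jenkins

# Terraform from the HashiCorp apt repository
curl -fsSL https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list > /dev/null
sudo apt-get update -y
sudo apt-get install -y terraform

# kubectl and AWS CLI v2
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
curl -s "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip -q awscliv2.zip && sudo ./aws/install
```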

Now apply.

Check the AWS web console.

All good till now.

Browse to http://<Public-IPv4-address>:8080

Connect to the instance

Read the password from /var/lib/jenkins/secrets/initialAdminPassword (for example with sudo cat).

Then paste the administrator password.

Install the suggested plugins.

These are the files that should not be pushed to GitHub; add them to a .gitignore file.

Go to the s3_bucket directory and create a .gitignore file there as well.

I am also adding the state files to the s3_bucket/.gitignore file.
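A minimal .gitignore for both directories might look like this; the exact entries used in the post aren't shown, so these are the usual Terraform suspects (the real state lives in S3 anyway):

```gitignore
# Local Terraform working files
.terraform/
*.tfstate
*.tfstate.*
crash.log

# Private key material for the Jenkins key pair, if you keep it here
*.pem
```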

Now let’s properly organize the directories to manage the codes easily.

In the main workspace, jenkins-terraform-eks-nginx, I made two subdirectories to manage the code.

Now I am going to create a separate workspace for the EKS cluster,

and I will use a remote backend there too, reusing the same S3 bucket we created earlier; we just need one more DynamoDB table.

One more DynamoDB table, eks-terraform-lock, is created.

Create one more workspace for EKS.

Write terraform code for EKS Cluster

The remote backend for EKS is configured.

Create a new VPC

data.tf

main.tf

variables.tf

terraform.tfvars

The code is a configuration for creating an Amazon Virtual Private Cloud (VPC) using Terraform, specifically using the `terraform-aws-modules/vpc/aws` module. Let’s summarize the key points and workflow of this Terraform module:

Module Inclusion: The code is using a Terraform module named “vpc,” which is sourced from the “terraform-aws-modules/vpc/aws” module.

VPC Configuration: The VPC is named “eks-vpc.” The VPC’s Classless Inter-Domain Routing (CIDR) block is defined using the variable `var.vpc_cidr`.

Availability Zones (AZs): The module utilizes data from AWS to automatically determine the available AZs in your selected region using `data.aws_availability_zones.azs.names`.

Subnet Configuration: There are both public and private subnets configured for this VPC. The public subnets are specified using the variable `var.public_subnets`. The private subnets are specified using the variable `var.private_subnets`.

DNS Configuration: DNS hostnames are enabled for the VPC using `enable_dns_hostnames = true`.

NAT Gateway Configuration: NAT gateways are enabled, indicating that private subnets will use NAT gateways for outbound traffic, using `enable_nat_gateway = true`. A single NAT gateway is configured for all private subnets with `single_nat_gateway = true`.

Tagging: The VPC and its subnets are tagged with specific key-value pairs. For example, they are tagged with “kubernetes.io/cluster/my-eks-cluster” set to “shared” to associate these resources with an EKS (Amazon Elastic Kubernetes Service) cluster.

Role-Based Tagging: Public and private subnets are tagged with role-specific key-value pairs. For instance, “kubernetes.io/role/elb” is set to 1 for public subnets, and “kubernetes.io/role/internal-elb” is set to 1 for private subnets.

When I apply this Terraform configuration, it will create the specified VPC with the defined CIDR block, subnets, DNS settings, NAT gateways, and tags in your AWS account.

The module automatically determines the available AZs, which is useful for distributing resources and ensuring high availability.
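A sketch of the VPC module block assembled from the description above (the subnet CIDRs come from variables.tf and terraform.tfvars, which are not repeated here):

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-vpc"
  cidr = var.vpc_cidr

  azs             = data.aws_availability_zones.azs.names
  public_subnets  = var.public_subnets
  private_subnets = var.private_subnets

  enable_dns_hostnames = true
  enable_nat_gateway   = true
  single_nat_gateway   = true

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/elb"               = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb"      = 1
  }
}
```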

Use the module for EKS cluster

Official doc

main.tf

The code is a Terraform configuration for creating an Amazon Elastic Kubernetes Service (EKS) cluster using the `terraform-aws-modules/eks/aws` module.

Module Inclusion: The code is using a Terraform module named “eks,” which is sourced from the “terraform-aws-modules/eks/aws” module.

EKS Cluster Configuration: The EKS cluster is named “my-eks-cluster.” The cluster version is set to “1.24.” The cluster endpoint is configured to have public access with `cluster_endpoint_public_access = true`.

VPC and Subnet Configuration: The VPC for the EKS cluster is specified using `vpc_id = module.vpc.vpc_id`, indicating that this EKS cluster will be created within the VPC created by the previous configuration, and the subnets for the worker nodes come from that same VPC module.

Node Group Configuration: A managed node group called “nodes” is defined for the EKS cluster. It specifies the minimum, maximum, and desired number of worker nodes in the group using `min_size`, `max_size`, and `desired_size`. The instance type for worker nodes is set to “t2.small.”

Tags: The EKS cluster is tagged with key-value pairs, including “Environment” set to “dev” and “Terraform” set to “true.”

When I apply this Terraform configuration, it will create an Amazon EKS cluster with the specified name and version in my AWS account. The EKS cluster is associated with the VPC created in the previous module, and its worker nodes will be launched in the private subnets defined in that VPC. The cluster will have public access to its API server endpoint. The specified worker node group will be created and managed by EKS.
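A sketch of the EKS module block described above; the node-group sizes are example values, since the exact numbers are not given here:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.24"

  cluster_endpoint_public_access = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets # worker nodes land in the private subnets

  eks_managed_node_groups = {
    nodes = {
      min_size     = 1 # example sizes; tune to your needs
      max_size     = 3
      desired_size = 2

      instance_types = ["t2.small"]
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```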

Add .gitignore

Don’t apply from here; we will create a Jenkins pipeline to create the EKS cluster.

First, let’s push the code to GitHub.

Create an empty GitHub repo

Then you will get instructions with the details from GitHub itself.

The code is pushed to GitHub.

Now go to the Jenkins web console.

Create a Jenkins Pipeline

Jenkins will make some changes on AWS, so it needs credentials to authenticate to AWS.

Add the credentials to Jenkins

Let’s now configure the Pipeline

Use Poll SCM in this pipeline

Now create the pipeline script

We will keep adding stages and testing them.

pipeline syntax:

We are checking out SCM (the GitHub repo).

Now save the pipeline and run it to test.

The next stage will be initializing Terraform to download the necessary plugins.

Specify the directory where you want to initialize.
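At this point the pipeline script looks roughly like the sketch below. The credential IDs, repository URL, and the `EKS` directory name are assumptions; use the IDs you chose when adding the AWS keys to Jenkins and your own repo details:

```groovy
pipeline {
    agent any

    // AWS credentials stored in Jenkins (the IDs here are assumptions)
    environment {
        AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
        AWS_DEFAULT_REGION    = 'ap-south-1'
    }

    stages {
        stage('Checkout SCM') {
            steps {
                // Pull the Terraform code from the GitHub repo
                git branch: 'main', url: 'https://github.com/<your-user>/<your-repo>.git'
            }
        }
        stage('Terraform Init') {
            steps {
                // Initialize inside the directory that holds the EKS code
                dir('EKS') {
                    sh 'terraform init'
                }
            }
        }
    }
}
```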

Save it and build

Check the Console Output.

Again, the build is successful.

Configure the pipeline once again and add these.

It worked really well.

Again, a reminder → use t2.small for the jenkins-server; t2.micro may crash while doing this part.

I have made some changes.

The input step will ask you for approval to proceed or not.

Next comes the creation of the EKS cluster.

We could run `terraform apply --auto-approve` directly, but it is good practice to have options for both apply and destroy.

Parameterize this part so you don't have to create a separate pipeline just to destroy the infrastructure.

You have to tell Jenkins that you are using a parameter in the pipeline.
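A sketch of the additions: a choice parameter at the top level of the pipeline block, plus plan and apply/destroy stages (the stage names and the `EKS` directory are assumptions):

```groovy
// Added at the top level of the pipeline { } block:
parameters {
    choice(name: 'ACTION', choices: ['apply', 'destroy'],
           description: 'Create or tear down the EKS infrastructure')
}

// Added inside stages { }:
stage('Terraform Plan') {
    steps {
        dir('EKS') {
            sh 'terraform plan'
        }
    }
}
stage('Terraform Apply/Destroy') {
    steps {
        dir('EKS') {
            // Manual approval gate before anything changes in AWS
            input message: "Run 'terraform ${params.ACTION}' on the EKS workspace?"
            sh "terraform ${params.ACTION} --auto-approve"
        }
    }
}
```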

Let's build now and see the console output.

This comes up; choose one.

It will ask for approval.

Now see, the EKS cluster is created.

Check the AWS web console.

The next stage is to create a Deployment and a Service on the newly created EKS cluster.

On my local system, I am creating a directory ConfigurationFiles inside EKS/, where I keep my deployment.yaml and service.yaml files.

EKS/ConfigurationFiles/deployment.yaml

This YAML file is a Kubernetes manifest for a Deployment resource.

  1. apiVersion: It specifies the version of the Kubernetes API being used. In this case, it’s “apps/v1,” which is the API version for Deployments.
  2. kind: It defines the type of Kubernetes resource being created, which is a “Deployment” in this case.
  3. metadata: This section provides metadata for the Deployment. Specifically, it sets the name of the Deployment to “nginx.”
  4. spec: This section contains the desired state of the Deployment:
  • selector: It defines how the Deployment selects which Pods it manages. In this case, it selects Pods with the label “app: nginx.”
  • replicas: It specifies the desired number of replicas (Pods) to maintain. The value is set to 1, which means one replica of the Pod will be running.
  • template: It defines the template for creating new Pods managed by this Deployment:
  • metadata: This section specifies the labels for the Pods created by the Deployment. It sets the “app” label to “nginx.”
  • spec: This is the pod specification, which includes details about the containers running in the Pod:
  • containers: It’s an array of containers running in the Pod. In this case, there is one container named “nginx.”
  • name: The name of the container is “nginx.”
  • image: It specifies the Docker image to use for this container, which is “nginx.” This means the container will run the NGINX web server.
  • ports: It defines the ports to expose on the container:
  • containerPort: It specifies that the container should listen on port 80.

In summary, this YAML file defines a Kubernetes Deployment named “nginx” that manages one replica of a Pod running an NGINX web server. The Deployment ensures that the desired state is maintained, and if the Pod were to fail, it would be automatically replaced to ensure one replica is always running. This is a basic example of how to deploy a single NGINX web server using Kubernetes.
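Based on the description above, the manifest looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```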

EKS/ConfigurationFiles/service.yaml

This YAML file is a Kubernetes manifest for a Service resource. Let’s summarize what this file is doing:

  1. apiVersion: It specifies the version of the Kubernetes API being used. In this case, it’s “v1,” which is the API version for core Kubernetes resources.
  2. kind: It defines the type of Kubernetes resource being created, which is a “Service” in this case.
  3. metadata: This section provides metadata for the Service. Specifically, it sets the name of the Service to “nginx” and assigns the label “app: nginx” to it.
  4. spec: This section contains the desired configuration for the Service:
  • ports: It defines the ports that the Service will listen on:
  • name: The name of the port is “http.”
  • port: The port number exposed by the Service is 80.
  • protocol: The protocol used for the port is TCP.
  • targetPort: The targetPort is also set to 80, indicating that incoming traffic on port 80 will be forwarded to port 80 on the Pods selected by the Service.
  • selector: It specifies how the Service selects which Pods to forward traffic to. In this case, it selects Pods with the label “app: nginx,” which corresponds to the NGINX Pods created by the Deployment mentioned in the previous YAML manifest.
  • type: It specifies the type of Service. The type is set to “LoadBalancer.” This means that the Service will be exposed externally using a cloud provider’s load balancer, making it accessible from outside the Kubernetes cluster.

In summary, this YAML file defines a Kubernetes Service named “nginx” that serves as a load balancer for the NGINX Pods. It listens on port 80 and forwards incoming traffic to the NGINX Pods with the “app: nginx” label. This setup allows external access to the NGINX web server running in the cluster, making it available to clients over the network.
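And based on that description, the Service manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```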

Next, just apply the files with kubectl.
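A sketch of that final stage; whether the original pipeline updates the kubeconfig the same way isn't shown, so the `aws eks update-kubeconfig` line is my assumption for pointing kubectl at the new cluster:

```groovy
// Added inside stages { }:
stage('Deploy Nginx') {
    steps {
        dir('EKS/ConfigurationFiles') {
            // Point kubectl at the freshly created cluster, then apply both manifests
            sh 'aws eks update-kubeconfig --name my-eks-cluster --region ap-south-1'
            sh 'kubectl apply -f deployment.yaml'
            sh 'kubectl apply -f service.yaml'
        }
    }
}
```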

Build the pipeline now, select your choice and approve when asked.

My deployment and service got created

It's working wonderfully!!

My nodes are running !!

My Load Balancer is created and up too

Copy the DNS name, paste it into the browser, and check.

It is also better practice not to use an inline pipeline script, but to keep the script in a file and push it to GitHub.

Go to the Jenkins pipeline and configure it.

The rest of the configuration will be the same as before.

Build now

Thank you so much for reading till now!!

Github

My LinkedIn Post:

https://www.linkedin.com/posts/sayantan-samanta_real-time-end-to-end-devops-project-deploying-activity-7126790039384043520-SGSR?utm_source=share&utm_medium=member_desktop

My Contact Info:

📩Email:- sayantansamanta098@gmail.com
LinkedIn:-
https://www.linkedin.com/in/sayantan-samanta/

See you soon with a new blog.
