📌CI/CD Pipeline for EKS using CodeCommit, CodeBuild, CodePipeline, and Elastic Container Registry(ECR)📌

Prashant Lakhera
8 min read · Nov 28, 2022


👷‍♂️ Over the long weekend, I decided to build a simple CI/CD pipeline so that all the changes I push to my test EKS cluster are applied in an automated way rather than me pushing them all manually.

🔨 AWS Services used

1️⃣ CodeCommit: A simple way to think of CodeCommit is as the AWS equivalent of GitHub, a place to host your private Git repositories.

2️⃣ CodeBuild: CodeBuild is your build server; it compiles your source code and produces artifacts. In this case, I use CodeBuild to create a Docker image, push it to AWS Elastic Container Registry (ECR), and then deploy the image to the Kubernetes cluster. For deployment tasks AWS has another service, CodeDeploy, but it currently doesn't support EKS.

3️⃣ Elastic Container Registry (ECR): AWS ECR is the equivalent of Docker Hub, where you can store your Docker images.

4️⃣ CodePipeline: CodePipeline is the AWS equivalent of Jenkins, where you build a pipeline consisting of various stages.

🚨 While testing, I sometimes hit the Docker Hub rate limit (as I am not logged in to Docker Hub), so I switched to the AWS ECR public registry https://docs.aws.amazon.com/AmazonECR/latest/public/docker-pull-ecr-image.html

✅ My workflow is pretty simple: push the changes to CodeCommit, which triggers CodeBuild. CodeBuild builds the Docker image, pushes it to ECR, and applies the updated manifest to the EKS cluster; the kubelet then pulls the new image from ECR and runs it.

1. Create an ECR repository to store the Docker image

aws ecr create-repository --repository-name my-demo-repo --image-tag-mutability IMMUTABLE --image-scanning-configuration scanOnPush=true
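
The later build steps need the repository URI. You can fetch it with `aws ecr describe-repositories`, or simply derive it from your account ID and region, since ECR URIs follow a fixed pattern. A quick sketch (the account ID and region below are placeholders, not real values):

```shell
# Placeholders -- substitute your own account ID and region
ACCOUNT_ID=123456789012
AWS_REGION=us-west-2

# ECR repository URIs follow the pattern <account>.dkr.ecr.<region>.amazonaws.com/<repo>
REPOSITORY_URI="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/my-demo-repo"
echo "$REPOSITORY_URI"
```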

2. Create a CodeCommit repository

aws codecommit create-repository --repository-name  mydemorepo

3. Create Git credentials for your IAM user

aws iam create-service-specific-credential --user-name plakhera --service-name codecommit.amazonaws.com

Please make a note of these credentials. If you want to do this via the UI, check the following doc: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html
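
Alternatively, instead of service-specific Git credentials, you can configure the AWS CLI itself as a Git credential helper, so CodeCommit authentication reuses your existing IAM credentials:

```shell
# Use the AWS CLI as a Git credential helper for CodeCommit
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
```

With this in place, `git clone` of the HTTPS CodeCommit URL works without the separate username/password pair.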

4. Once the credentials are created, clone the Git repo created in step 2

 git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/mydemorepo
Cloning into 'mydemorepo'...
Username for 'https://git-codecommit.us-west-2.amazonaws.com': plakhera-at-892515485494
Password for 'https://plakhera-at-892515485494@git-codecommit.us-west-2.amazonaws.com':
warning: You appear to have cloned an empty repository

NOTE: The username for CodeCommit is different from the IAM username. Please pay special attention to that.

5. Copy the application files and Kubernetes manifests into this directory.

> tree
.
├── Dockerfile
├── app
│   └── index.html
└── manifests
    └── deployment.yaml

Dockerfile

FROM nginx
COPY app /usr/share/nginx/html/app

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-eks-pipeline-deployment
  labels:
    app: my-eks-pipeline-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-eks-pipeline-deployment
  template:
    metadata:
      labels:
        app: my-eks-pipeline-deployment
    spec:
      containers:
      - name: my-eks-pipeline-deployment
        image: CONTAINER_IMAGE
        ports:
        - containerPort: 80
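
CONTAINER_IMAGE is a placeholder that the buildspec (step 8) replaces with the real image URI at build time via sed. You can dry-run that substitution locally; the image URI and tag below are made-up values standing in for what CodeBuild computes:

```shell
# Minimal stand-in for manifests/deployment.yaml (just the line that matters)
mkdir -p /tmp/manifests
printf 'image: CONTAINER_IMAGE\n' > /tmp/manifests/deployment.yaml

# Hypothetical values; in CodeBuild these come from the project environment
REPOSITORY_URI=123456789012.dkr.ecr.us-west-2.amazonaws.com/my-demo-repo
TAG=2022-11-28.10.30.00.4f2a9c1d

# Same sed expression the buildspec uses to swap in the real image reference
sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' /tmp/manifests/deployment.yaml
cat /tmp/manifests/deployment.yaml
```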

index.html

<!DOCTYPE html>
<html>
<body>
<h1>Welcome to Pipeline for EKS using CodeCommit, CodeBuild and CodePipeline </h1>
<h3> This is demo pipeline for EKS - v1</h3>
</body>
</html>

6. Create an STS assume role so that CodeBuild has permission to interact with AWS EKS. We will create an IAM role, CodeBuildEKSRole, and add an inline policy with eks:Describe permissions that CodeBuild will use to interact with the EKS cluster via kubectl.

# Export your AWS Account(To get your aws account id run the following command aws sts get-caller-identity --query Account --output text)
export ACCOUNT_ID=<aws account id>

# Set the Trust Policy
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"

# Create IAM Role for CodeBuild to Interact with EKS
aws iam create-role --role-name CodeBuildEKSRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'

# Create an Inline Policy with eks:Describe permission and redirect the output to eksdescribe.json
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/eksdescribe.json

# Add this Inline Policy to the IAM Role CodeBuildEKSRole
aws iam put-role-policy --role-name CodeBuildEKSRole --policy-name eks-describe-policy --policy-document file:///tmp/eksdescribe.json
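
The escaped quotes in the TRUST string are easy to get wrong, and IAM rejects malformed policy documents with an unhelpful error. Before calling create-role, you can sanity-check that the string is valid JSON (python3 is used here purely as a JSON validator; the account ID is a placeholder):

```shell
ACCOUNT_ID=123456789012  # placeholder; use your real account id

# Same trust policy string as above
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"

# If the quoting is broken, json.tool exits non-zero
echo "$TRUST" | python3 -m json.tool > /dev/null && echo "trust policy OK"
```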

7. The next step is to add the newly created IAM role (CodeBuildEKSRole) to the aws-auth ConfigMap of the EKS cluster.

# Check the aws-auth configmap 
kubectl get configmap aws-auth -o yaml -n kube-system

# Export your AWS Account(To get your aws account id run the following command aws sts get-caller-identity --query Account --output text)
export ACCOUNT_ID=<aws account id>

# Set the ROLE value
ROLE="    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/CodeBuildEKSRole\n      username: build\n      groups:\n        - system:masters"

# Get the current aws-auth configMap data and add new role to it
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/auth-patch.yml

# Patch the aws-auth configmap with new role
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/auth-patch.yml)"
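
The awk one-liner splices the new role in right after the `mapRoles: |` line. If you want to see what it produces before touching a live cluster, you can run it against a minimal sample of the ConfigMap dump (the file names, account ID, and node role ARN below are made up for illustration):

```shell
# Minimal stand-in for the `kubectl get configmap/aws-auth -o yaml` output
cat > /tmp/aws-auth-sample.yaml <<'EOF'
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/MyNodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
EOF

ACCOUNT_ID=123456789012
ROLE="    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/CodeBuildEKSRole\n      username: build\n      groups:\n        - system:masters"

# awk expands the \n escapes, so the role lands as properly indented YAML
awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" /tmp/aws-auth-sample.yaml > /tmp/auth-patch-sample.yml
cat /tmp/auth-patch-sample.yml
```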

8. The next step is to create the buildspec.yml for CodeBuild. There are a lot of examples available on the internet; I referred to some of these and modified them based on my requirements: https://github.com/aquasecurity/amazon-eks-devsecops/blob/master/buildspec.yml

version: 0.2
phases:
  install:
    commands:
      - echo "Install Phase - if you need additional packages, add them in this stage"
  pre_build:
    commands:
      # This Docker image tag will have the date, time and CodeCommit version
      - TAG="$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      # Update the Docker image tag in your Kubernetes deployment manifest
      - echo "Update Image tag in kubernetes manifest"
      - sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' manifests/deployment.yaml
      # Check AWS CLI version
      - echo "Checking AWS CLI Version..."
      - aws --version
      # Log in to the ECR registry (`aws ecr get-login` was removed in AWS CLI v2)
      - echo "Login in to Amazon ECR Registry"
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $REPOSITORY_URI
      # Update kube config home directory
      - export KUBECONFIG=$HOME/.kube/config
  build:
    commands:
      # Build the Docker image
      - echo "Docker build started on `date`"
      - echo "Building the Docker image..."
      - docker build --tag $REPOSITORY_URI:$TAG .
  post_build:
    commands:
      # Push the Docker image to the ECR repository
      - echo "Docker build completed on `date`"
      - echo "Pushing the Docker image to ECR Repository"
      - docker push $REPOSITORY_URI:$TAG
      - echo "Docker Push to ECR Repository Completed - $REPOSITORY_URI:$TAG"
      # Get AWS credentials using STS assume-role for kubectl
      - echo "Setting Environment Variables related to AWS CLI for Kube Config Setup"
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_ROLE_ARN --role-session-name eks-codebuild --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken')"
      - export AWS_EXPIRATION=$(echo ${CREDENTIALS} | jq -r '.Credentials.Expiration')
      # Point kubectl at your EKS cluster
      - echo "Update Kube Config configuration"
      - aws eks update-kubeconfig --name $EKS_CLUSTERNAME
      # Showtime: apply the manifest changes using kubectl
      - echo "Apply changes to kube manifests"
      - kubectl apply -f manifests/
      - echo "All done!!!! Kubernetes changes applied"
      # Create artifacts we can use if we want to continue our pipeline with other stages
      - printf '[{"name":"deployment.yaml","imageUri":"%s"}]' $REPOSITORY_URI:$TAG > build.json
artifacts:
  files:
    - build.json
    - manifests/*

In order for this buildspec to work, you need to add a few environment variables to the CodeBuild project:

EKS_CLUSTERNAME=<your EKS cluster name>
EKS_ROLE_ARN=<ARN of the IAM role created in step 6>
REPOSITORY_URI=<URI of the ECR repository created in step 1>
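
You can check the tag logic from the buildspec locally. CODEBUILD_RESOLVED_SOURCE_VERSION is injected by CodeBuild at run time, so here it is faked with a git-SHA-like string:

```shell
# CodeBuild sets this automatically; fake it for a local check
CODEBUILD_RESOLVED_SOURCE_VERSION=4f2a9c1d7e8b3a6f0c5d2e1b9a8f7c6d

# Same expression as the buildspec: timestamp plus the first 8 chars of the commit
TAG="$(date +%Y-%m-%d.%H.%M.%S).$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
echo "$TAG"
```

Because the tag embeds both the timestamp and the commit, every pipeline run produces a unique tag, which is what lets the ECR repository keep IMMUTABLE tags (step 1) without push failures.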

9. Go to the CodePipeline console and click on Create pipeline

  • Give your pipeline a name (my-eks-pipeline), and leave the service role field as is; it should be auto-populated. Leave the default settings in the rest of the fields and click Next.
  • Under Source provider, choose AWS CodeCommit and choose the repository we created in Step 2. Leave the rest of the settings as default.
  • Under the build stage, choose AWS CodeBuild as the build provider, and under the Project name, click on Create project.
  • Give your project a name and, under Managed image, select Amazon Linux 2.
  • Choose Standard as the runtime, pick the latest image from the drop-down, and keep the rest of the settings as default.
  • If your buildspec.yml exists at the root of the Git repository, you don’t need to specify it here. Specify the details if you want to send logs to CloudWatch or S3. Click on Continue to CodePipeline at the bottom of the screen.
  • Before moving to the next step, you need to add a few environment variables, as mentioned above.
  • Under the deploy stage, click on Skip deploy stage. As CodeDeploy doesn’t support EKS, we already handled deployment in the build stage.
  • Review your pipeline configuration and click on Create pipeline.
  • The first run of the pipeline will fail, as CodeBuild doesn’t have permission to update the EKS cluster.
  • To fix this error, go to the IAM console, Policies, and click on Create policy.
  • In the policy, allow sts:AssumeRole on the IAM role we created in Step 6:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::XXXXXXX:role/CodeBuildEKSRole"
    }
  ]
}
  • Give your policy a name and click on Create policy.
  • Go back to the IAM role for which the assume-role call is failing (the CodeBuild service role), click on Attach policies, and attach the policy we created in the previous step.
  • Also, this role doesn’t have permission to push the newly created Docker images to the ECR repo, so attach one more policy for ECR access.
  • After attaching all these policies, either commit a new change or go to the pipeline and click on Release change at the top.
  • This time, everything should be good to go.
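
If you prefer the CLI over the console for the policy steps above, a hedged sketch follows. The policy name is made up, XXXXXXX stays a placeholder for your account ID, and the commented-out commands assume your default credentials are allowed to modify IAM:

```shell
# Same policy as above, written to a file so the CLI can reference it
cat > /tmp/assume-codebuild-eks.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::XXXXXXX:role/CodeBuildEKSRole"
    }
  ]
}
EOF

# Validate the document before handing it to IAM
python3 -m json.tool /tmp/assume-codebuild-eks.json > /dev/null && echo "policy JSON OK"

# Then (requires IAM permissions; the policy name is hypothetical):
# aws iam create-policy --policy-name AssumeCodeBuildEKSRole --policy-document file:///tmp/assume-codebuild-eks.json
# aws iam attach-role-policy --role-name <your CodeBuild service role> --policy-arn <ARN returned above>
```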

GitHub Code: https://github.com/100daysofdevops/100daysofdevops/tree/master/eks-codepipeline

Prashant Lakhera

AWS Community Builder, Ex-Redhat, Author, Blogger, YouTuber, RHCA, RHCDS, RHCE, Docker Certified, 4XAWS, CCNA, MCP, Certified Jenkins, Terraform Certified, 1XGCP