100 Days of AWS — Day 19: Backup Solution using S3, Glacier, and VPC Endpoint

Prashant Lakhera
Apr 23, 2022


To view the complete course, please enroll in it using the link below (it's free):

https://www.101daysofdevops.com/courses/100-days-of-aws/

Welcome to Day 19 of 100 Days of AWS. The topic for today is Backup Solution using S3, Glacier and VPC Endpoint.

This is the simple solution we are trying to build: your EC2 instance lives in a private subnet, and you want to push data to an S3 bucket and eventually to Glacier (after 30 or 60 days, depending upon your requirement).

These are the steps you need to follow:

  1. Create an IAM Role

https://us-east-1.console.aws.amazon.com/iamv2 → Roles → Create role

Select an AWS service and under Common use cases select EC2. Click on Next

  • Under Permissions policies search for AmazonS3FullAccess and click on Next
  • Give your role a meaningful name and click on Create role
  • On your EC2 instance, choose the IAM role you created in the previous step and click on Save (a CLI sketch of the same steps follows below)
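
If you prefer the AWS CLI over the console, a rough equivalent of this step looks like the sketch below. The role name, instance profile name, and instance ID are placeholders, not values from this demo.

# Trust policy that allows EC2 to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name s3-backup-role --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name s3-backup-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-instance-profile --instance-profile-name s3-backup-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-backup-profile --role-name s3-backup-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=s3-backup-profile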

2. Create a VPC endpoint

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

  • Give your endpoint a name, search for S3 under Services, and select your VPC. Keep all the other settings at their defaults and click on Create endpoint (a CLI equivalent is shown below).
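
For reference, the same S3 gateway endpoint can be created from the CLI. The VPC ID, route table ID, and Region below are placeholders; use the values from your own account.

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0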

3. S3 bucket lifecycle rule: Using an S3 Lifecycle configuration, you can define rules for an Amazon S3 bucket to transition objects to another Amazon S3 storage class.

  • Give the lifecycle rule a name. Click on Move current versions of objects between storage classes, and under Choose storage class transitions select Standard-IA with Days after object creation set to 30, then Glacier Deep Archive set to 90. This rule will move objects from the S3 Standard class to Standard-IA after 30 days and to Glacier Deep Archive after 90 days (see the CLI equivalent below).
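
The same rule can also be applied from the CLI with put-bucket-lifecycle-configuration. This is a sketch: the rule ID is arbitrary, and the bucket name matches the one used later in this post.

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket plakhera-test-sts-bucket --lifecycle-configuration file://lifecycle.json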

4. Now log in to the EC2 instance and try to copy a file to the S3 bucket. As we have already set up the IAM role, we don't need to hardcode AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

# aws s3 cp /var/log/messages s3://plakhera-test-sts-bucket
upload: ../var/log/messages to s3://plakhera-test-sts-bucket/messages

NOTE: In case the AWS CLI is not present, install it using pip install awscli
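
To double-check that the instance is picking up credentials from the role rather than from stored keys, you can inspect the caller identity. The ARN in the output should point to an assumed-role session for whatever role name you chose in step 1.

# aws sts get-caller-identity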

  • Now I am going to write a simple script that syncs data from your local folder to the S3 bucket every minute
# cat /usr/bin/awss3sync.sh
#!/bin/bash
aws s3 sync /var/log/. s3://plakhera-test-sts-bucket
  • Put that script in crontab so that it will execute every minute
[root@ip-172-31-31-68 bin]# crontab -l
*/1 * * * * /usr/bin/awss3sync.sh
  • Don't forget to change the permission of the script
# chmod +x /usr/bin/awss3sync.sh
  • Your simple backup solution is ready. It's not a perfect solution, but it's easy to implement and will perform the given task. A quick check to verify that everything is wired up is shown below.
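
Once the cron job has run at least once, you can verify that the backup and the lifecycle rule are in place by listing the bucket contents and printing the lifecycle configuration (bucket name as used above):

# aws s3 ls s3://plakhera-test-sts-bucket --recursive
# aws s3api get-bucket-lifecycle-configuration --bucket plakhera-test-sts-bucket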
