100 Days of AWS — Day 19: Backup Solution using S3, Glacier and VPC Endpoint
To view the complete course, please enroll in it using the link below (it’s free):
https://www.101daysofdevops.com/courses/100-days-of-aws/
Welcome to Day 19 of 100 Days of AWS. The topic for today is Backup Solution using S3, Glacier and VPC Endpoint.
This is the simple solution we are trying to build: your EC2 instance lives in a private subnet, and you want to push data to an S3 bucket, and eventually to Glacier (after 30 or 60 days, depending upon your requirement), without sending the traffic over the public internet.
These are the steps you need to follow:
1. Create an IAM Role
https://us-east-1.console.aws.amazon.com/iamv2 → Roles → Create role
Select an AWS service and under Common use cases select EC2. Click on Next
- Under Permissions policies search for AmazonS3FullAccess and click on Next
- Give your role a meaningful name and click on Create role
- Go to the EC2 console https://us-west-2.console.aws.amazon.com/ec2 , select your instance, and under Actions choose Security → Modify IAM role
- Choose the IAM Role you created in the previous step and click on Save.
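If you prefer the CLI over the console, the same role setup can be sketched roughly like this. It is only a sketch: the role name ec2-s3-backup-role, the trust-policy file name, and the instance ID placeholder are my assumptions, not values from the walkthrough, so substitute your own. Note that the console creates the instance profile for you behind the scenes, while on the CLI you create and attach it yourself.
# cat trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
# aws iam create-role --role-name ec2-s3-backup-role --assume-role-policy-document file://trust-policy.json
# aws iam attach-role-policy --role-name ec2-s3-backup-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# aws iam create-instance-profile --instance-profile-name ec2-s3-backup-role
# aws iam add-role-to-instance-profile --instance-profile-name ec2-s3-backup-role --role-name ec2-s3-backup-role
# aws ec2 associate-iam-instance-profile --instance-id <your-instance-id> --iam-instance-profile Name=ec2-s3-backup-role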
2. Create a VPC endpoint
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
- Go to the VPC console https://us-west-2.console.aws.amazon.com/vpc . Click on Endpoints and Create endpoint
- Give your endpoint a name, under Services search for S3, and select your VPC. Keep all the other settings at their defaults and click on Create endpoint.
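The console wizard above is enough, but the endpoint can also be scripted. A minimal sketch, assuming a gateway-type S3 endpoint in us-west-2 (adjust the region to yours) and placeholder VPC and route-table IDs:
# aws ec2 create-vpc-endpoint --vpc-id <your-vpc-id> --vpc-endpoint-type Gateway --service-name com.amazonaws.us-west-2.s3 --route-table-ids <your-route-table-id>
# aws ec2 describe-vpc-endpoints --filters Name=vpc-id,Values=<your-vpc-id>
The gateway endpoint works by adding an S3 prefix-list route to the route table you pass in, which is what lets the private-subnet instance reach S3 without an internet or NAT gateway.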
3. S3 bucket lifecycle rule: Using an S3 Lifecycle configuration you can define rules for an Amazon S3 bucket to transition objects to another Amazon S3 storage class.
- Go to the S3 console https://s3.console.aws.amazon.com/s3/ , select your bucket, click on Management and then Create lifecycle rule
- Give the lifecycle rule a name. Select Move current versions of objects between storage classes, and under Choose storage class transitions add Standard-IA with Days after object creation set to 30 and Glacier Deep Archive with Days after object creation set to 90. This rule will move objects from the S3 Standard class to Standard-IA after 30 days and to Glacier Deep Archive after 90 days.
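The same transitions can also be applied from the CLI. A minimal sketch, assuming the rule should cover the whole bucket and reusing the bucket name from the examples below:
# cat lifecycle.json
{
  "Rules": [
    {
      "ID": "archive-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
# aws s3api put-bucket-lifecycle-configuration --bucket plakhera-test-sts-bucket --lifecycle-configuration file://lifecycle.json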
4. Now log in to the EC2 instance and try to copy a file to the S3 bucket. As we have already set up the IAM role, we don’t need to hardcode the values of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (a quick way to verify this is shown after the note below).
# aws s3 cp /var/log/messages s3://plakhera-test-sts-bucket
upload: ../var/log/messages to s3://plakhera-test-sts-bucket/messages
NOTE: In case the aws cli is not present, install it using pip install awscli
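A quick way to confirm that the instance is really using the role’s temporary credentials (and not any hardcoded keys) is to ask STS who you are; the Arn in the output should reference the role you attached in step 1:
# aws sts get-caller-identity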
- Now I am going to write a simple script that syncs data from your local folder to the S3 bucket every minute
# cat /usr/bin/awss3sync.sh
#!/bin/bash
# Sync everything under /var/log to the backup bucket; aws s3 sync only uploads new or changed files
aws s3 sync /var/log/. s3://plakhera-test-sts-bucket
- Put that script in crontab so that it will execute every minute
[root@ip-172-31-31-68 bin]# crontab -l
*/1 * * * * /usr/bin/awss3sync.sh
- Don’t forget to change the permissions of the script so it is executable
# chmod +x /usr/bin/awss3sync.sh
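If you are automating the setup itself, the last two steps can also be done non-interactively. A small sketch (the 2>/dev/null just ignores the case where no crontab exists yet):
# chmod +x /usr/bin/awss3sync.sh
# (crontab -l 2>/dev/null; echo "*/1 * * * * /usr/bin/awss3sync.sh") | crontab -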
- Your simple backup solution is ready. It’s not a perfect solution, but it’s easy to implement and will perform the given task.