Automating AWS cloud infrastructure with Terraform

* What we will be doing:-


1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it into /var/www/html.

5. The developer has uploaded the code into a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

* Prerequisites:-

  1. AWS services: EC2, CloudFront, EBS, S3.
  2. Git
  3. GitHub

* Before moving to the solution, let me walk you through the main terminology of the Amazon AWS services involved:-

  • What is AWS?

Amazon Web Services (AWS) is a platform that offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions.

AWS is a comprehensive, easy-to-use computing platform offered by Amazon. The platform combines infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings.

  • 7 Best Benefits of AWS (Amazon Web Services)
  1. Comprehensive
  2. Cost-Effective
  3. Adaptable
  4. Security
  5. Innovation
  6. Global leader
  7. Improved Productivity
  • Its services:-

EC2 and EBS

Amazon Elastic Compute Cloud (EC2) is a web service that provides secure, resizable compute capacity in the cloud. Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with EC2 for both throughput-intensive and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows, are widely deployed on Amazon EBS.

CloudFront

CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS, both the physical locations that are directly connected to the AWS global infrastructure and other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers' users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don't pay for any data transferred between these services and CloudFront.

S3

Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.

Key Pairs and Security Group

A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. Amazon EC2 stores the public key, and you store the private key. You use the private key, instead of a password, to securely access your instances.

A security group acts as a virtual firewall for your EC2 instances, controlling incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from it. If you don't specify a security group, Amazon EC2 uses the default security group.

  • What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

The key features of Terraform are:-

  1. Execution Plans
  2. Infrastructure as Code
  3. Change Automation
  4. Resource Graph
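
To get a feel for this before we dive in, here is a deliberately tiny configuration (my own illustration, unrelated to this task) that manages a single local file; Terraform plans and applies it exactly as it would cloud resources:

resource "local_file" "hello" {
  # a trivial piece of "infrastructure": a text file on disk
  content  = "hello from terraform"
  filename = "hello.txt"
}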

* Solution:-

  • First of all, we are going to configure an AWS CLI profile.
$ aws configure --profile user
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

Here, provide any profile name you like in place of user. Then provide your Access Key ID, Secret Access Key, region name, and output format.

If you do not provide an output format, it defaults to json.

  • Secondly, set up Terraform locally on your system. I am using Windows as my base OS. You can download Terraform from the official HashiCorp website.

After downloading, extract the archive, copy it to a location of your choice, add that location to your PATH environment variable, and then run the command below to check that Terraform is successfully installed on your system.

terraform -version
  • Now, moving on to the main part of the task.
  1. Declare the provider for Terraform to work with. As we are working on AWS, we specify aws as our provider.
provider "aws" {
  region  = "ap-south-1"
  profile = "sky"
}

Here, I provide "sky" as the profile I created earlier. You should provide the profile you created.
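
If you prefer not to hardcode the profile name, one option (a sketch; the variable name is my own choice) is to pass it in through a variable:

variable "profile" {
  type    = string
  default = "sky" # replace with your own profile name
}

provider "aws" {
  region  = "ap-south-1"
  profile = var.profile
}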

2. Now, moving to the first part of our task, i.e., create the key and the security group that allows port 80.

resource "tls_private_key" "key1" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "key2" {
  content         = "${tls_private_key.key1.private_key_pem}"
  filename        = "task1_key.pem"
  # file_permission must be a quoted string
  file_permission = "0400"
}

resource "aws_key_pair" "key3" {
  key_name   = "task1_key"
  public_key = "${tls_private_key.key1.public_key_openssh}"
}

Here, we save the private key to a .pem file, because ssh expects the key in .pem format for remote login.

The resource keyword in Terraform requires a local name when creating any AWS service. Here, key1, key2, and key3 are just local names; they matter when other resources need to depend on them, and you can choose your own. You can also use any key_name of your choice.

Keep these points in mind while creating further resources.
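
These local names are also what you reference when one resource must wait for another. For example (an illustrative sketch, not part of the task):

resource "null_resource" "after_key" {
  # runs only after the key pair above has been created
  depends_on = [aws_key_pair.key3]
}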

Now, creating a security group.

resource "aws_security_group" "sg" {
  name        = "task1-sg"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-ebf8e583"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "task1-sg"
  }
}

Since we are creating a web server, we add inbound rules to make it reachable: the HTTP port for serving pages and the SSH port for remote login.
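
If you later serve traffic over TLS, you could allow HTTPS too by adding one more ingress block inside the same aws_security_group resource (a sketch; not required for this task):

ingress {
  description = "HTTPS"
  from_port   = 443
  to_port     = 443
  protocol    = "tcp"
  cidr_blocks = ["0.0.0.0/0"]
}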

3. Launch an EC2 instance, using the key and security group we created above.

resource "aws_instance" "web_server" {
  ami               = "ami-0447a12f28fddb066"
  instance_type     = "t2.micro"
  subnet_id         = "subnet-adead0c5"
  availability_zone = "ap-south-1a"

  root_block_device {
    volume_type           = "gp2"
    delete_on_termination = true
  }

  key_name = "${aws_key_pair.key3.key_name}"

  # with subnet_id set, security groups must be passed as IDs
  vpc_security_group_ids = [aws_security_group.sg.id]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.key1.private_key_pem}"
    # self avoids a cycle from referencing the resource inside itself
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task1_os"
  }
}

All the arguments here mirror what we would fill in while launching an instance from the AWS web console, so they need no separate explanation.

Finally, we use the remote-exec provisioner to remotely and automatically install the required software (httpd and git) and start its service, connecting over ssh with our user name and private key.

4. Launch one EBS volume and mount it into /var/www/html. The developer has uploaded the code into a GitHub repo, which also contains some images. Copy the GitHub repo code into /var/www/html.
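
The attachment block below refers to aws_ebs_volume.task1_ebs, so that volume has to be declared as well; a minimal sketch (the 1 GiB size and the tag are my assumptions):

resource "aws_ebs_volume" "task1_ebs" {
  # must be in the same AZ as the instance so it can be attached
  availability_zone = "ap-south-1a"
  size              = 1

  tags = {
    Name = "task1_ebs"
  }
}

With the volume declared, we attach it to the instance and mount it: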

resource "aws_volume_attachment" "task1_ebs_mount" {
  device_name  = "/dev/xvds"
  volume_id    = "${aws_ebs_volume.task1_ebs.id}"
  instance_id  = "${aws_instance.web_server.id}"
  force_detach = true

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.key1.private_key_pem}"
    host        = "${aws_instance.web_server.public_ip}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvds",
      "sudo mount /dev/xvds /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Akashdeep-47/cloud_task1.git /var/www/html/"
    ]
  }
}

After the new volume is successfully created, we format it and mount it to the default document root of the httpd server, i.e., /var/www/html. You can use almost any device_name except "/dev/xvda", since that is usually the root device attached when the instance launches; any other letter in place of "a" works. (This is from my own experimentation, as I faced a lot of problems here.)

5. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change the permission to public readable.

resource "aws_s3_bucket" "mybucket" {
  bucket = "sky25"
  acl    = "public-read"

  provisioner "local-exec" {
    command = "git clone https://github.com/Akashdeep-47/cloud_task1.git"
  }

  provisioner "local-exec" {
    when = destroy
    # Windows shell command; use "rm -rf cloud_task1" on Linux/macOS
    command = "echo y | rmdir /s cloud_task1"
  }
}

resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [
    aws_s3_bucket.mybucket,
  ]

  bucket = "${aws_s3_bucket.mybucket.bucket}"
  key    = "my_pic.jpg"
  source = "cloud_task1/pic.jpg"
  acl    = "public-read"
}

While creating the S3 bucket, and again when uploading the file, make sure to grant public access, as we are going to use these files in the CloudFront distribution.

Since the developer has pushed the code to GitHub, I clone the repo locally with the local-exec provisioner and upload its image from there.
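
If you want to verify the uploaded object after apply, you could also expose its URL as an output (a sketch; the output name is mine, and the URL format assumes the global S3 endpoint):

output "s3_image_url" {
  # public URL of the uploaded image
  value = "https://${aws_s3_bucket.mybucket.bucket}.s3.amazonaws.com/${aws_s3_bucket_object.file_upload.key}"
}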

6. Create a CloudFront distribution using the S3 bucket (which contains the images).

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    aws_volume_attachment.task1_ebs_mount,
    aws_s3_bucket_object.file_upload,
  ]

  origin {
    domain_name = "${aws_s3_bucket.mybucket.bucket}.s3.amazonaws.com"
    origin_id   = "ak"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  default_cache_behavior {
    allowed_methods = ["HEAD", "GET"]
    cached_methods  = ["HEAD", "GET"]

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    default_ttl            = 3600
    max_ttl                = 86400
    min_ttl                = 0
    target_origin_id       = "ak"
    viewer_protocol_policy = "allow-all"
  }

  price_class = "PriceClass_All"

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

Here, we use the depends_on keyword because we want this CloudFront distribution to be created only after the EBS volume has been mounted and the images have been uploaded to the S3 bucket.
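
The distribution's domain name is exposed as an attribute, so if you want Terraform to print it after apply, you could add an output like this (a sketch; the output name is my own):

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}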

7. Use the CloudFront URL to update the code in /var/www/html.

resource "null_resource" "nullremote3" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${tls_private_key.key1.private_key_pem}"
    host        = "${aws_instance.web_server.public_ip}"
  }

  # replace the "twerk" placeholder in index.html with the CloudFront image URL
  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's@twerk@http://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.file_upload.key}@g' /var/www/html/index.html",
      "sudo systemctl restart httpd"
    ]
  }
}

Here, we use the null_resource resource because a provisioner must run inside some resource; when there is no natural resource to attach it to, null_resource fills that role.
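
In its simplest form it looks like this (an illustrative sketch, separate from the task):

resource "null_resource" "example" {
  # null_resource is a no-op host for the provisioner
  provisioner "local-exec" {
    command = "echo provisioners need a resource to run under"
  }
}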

8. Lastly, we add code to open our web server automatically as soon as all the services have run successfully.

resource "null_resource" "nulllocal1" {
  depends_on = [
    null_resource.nullremote3,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.web_server.public_ip}"
  }
}

Here, we use Chrome as our browser and the instance's public IP to open our web server.

Please note that I used start chrome instead of chrome, since plain chrome did not work for me even with the path set. Choose whatever works on your system.
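
If launching a browser from Terraform is unreliable on your OS, a more portable option is to print the IP and open it yourself (a sketch; the output name is my own):

output "web_server_ip" {
  value = aws_instance.web_server.public_ip
}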

* Some Terraform commands required to run the whole configuration:-

  • terraform init ( to initialize a working directory containing Terraform configuration files )

This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times.

  • terraform validate ( to check the configuration for errors )
  • terraform plan ( to create an execution plan )
  • terraform apply -auto-approve ( to apply the changes required to reach the desired state of the configuration, i.e., the set of actions generated by terraform plan )
  • terraform destroy -auto-approve ( to destroy the Terraform-managed infrastructure )

* Final output of the configuration that we created:-

  • Key pairs
  • Security Group
  • EC2 instance
  • EBS volume

As you can clearly see, the created EBS volume is in use, which means it was successfully mounted and the cloning was done.

  • S3 bucket
  • CloudFront
  • Final outcome: the web server is up.

Finally, we have achieved what we set out to create.

Thanks for reading this article. I have tried to explain as much as I can. Feel free to provide suggestions.

GitHub repository with the code for reference:-
