Deployment of Webapp over AWS-EKS cluster
AWS-EKS
Amazon EKS (Elastic Kubernetes Service) — Allows you to run Kubernetes on AWS without installing and managing your own Kubernetes control plane.
EKS itself manages the master node privately inside its own infrastructure. We only have to worry about our worker (slave) environment, such as the instance type.
Here, we can launch groups of workers of various instance types, also known as nodeGroups, and inside these nodeGroups we can launch as many nodes as we need.
Benefits of using AWS-EKS
- Easy to use.
- Flexible.
- Cost-effective.
- Reliable.
- Scalable and high-performance.
- Secure.
Aim:-
To create an EKS cluster, launch nodes of different instance types inside it, and then launch two pods on those nodes: one running MySQL for the database and the other running WordPress (the web app). To balance traffic to the app, we will connect the app pod to a Load Balancer provided by the ELB service of AWS, and for the storage of both pods we will use the EFS storage of AWS.
So, let's start creating our whole environment….
Creating an IAM Role in AWS.
1. Open the IAM console.
2. Choose Roles > Create Role.
3. Select EKS from the list of services, and under EKS choose the use case "Allows EKS to manage clusters on your behalf".
4. Add tags if needed, then review and create the role.
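For reference, the trust policy that this console use case attaches to the role (allowing the EKS service to assume it) looks like the following — shown here as a sketch in case you prefer to create the role from the CLI:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```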
Creating Amazon EKS cluster
We can create an EKS cluster in AWS in two ways:-
- Using the aws eks create-cluster command: this command does not provide much customization for creating the cluster (worker nodes) with different instance types.
- Using the eksctl command: we are going to use this command for creating our cluster, as it provides more customization while creating the worker environment.
Secondly, we will need client software that talks to the master (the Kubernetes control plane).
So, download the eksctl and kubectl binaries (available from their official release pages), save them in one folder, and add that folder to your PATH environment variable.
To create the cluster, we write a YAML file giving the cluster name, the region where we want to launch the cluster, and the nodeGroups with their different instance types and capacities; in each node group we also provide an AWS key pair for later login.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ak-cluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: key1
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: key1
Command for creating cluster →
eksctl create cluster -f cluster.yml
Here, cluster.yml is the file name of the code I created above.
We can also check the cluster and nodeGroups that were created with the following commands:-
eksctl get cluster
eksctl get nodegroup --cluster ak-cluster
ak-cluster is the cluster name that you have mentioned in the code.
A view from the AWS web console of the created cluster and nodeGroup.
Now, to connect kubectl to this cluster, we have to update our kubeconfig file →
aws eks update-kubeconfig --name ak-cluster
Creating EFS for storing data
After the cluster is created, a VPC is automatically created for it, and inside that VPC we can create our EFS.
Also, use the same security group that was created by the cluster (and make sure it allows NFS traffic on port 2049).
For connecting our EFS storage to our pods, we have to install amazon-efs-utils on all the worker nodes manually.
yum install amazon-efs-utils -y
Note:-
It is recommended to use a separate namespace for launching each project, rather than running everything in the default namespace.
kubectl create namespace ak-eks
Setting it as the default namespace for the current context:
kubectl config set-context --current --namespace=ak-eks
Creating EFS Provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-3f55dfee
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: ak/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-3f55dfee.efs.ap-south-1.amazonaws.com
            path: /
Replace FILE_SYSTEM_ID (and the NFS server address) with the ID of the EFS you created. This provisioner will let our cluster dynamically provision storage from EFS.
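The NFS server address follows a fixed pattern built from the file-system ID and the region, so you can derive it rather than copy it by hand. A small sketch using the values from the manifest above:

```shell
#!/bin/sh
# Build the EFS mount-target DNS name from the file-system ID and region.
fs_id="fs-3f55dfee"    # your FILE_SYSTEM_ID
region="ap-south-1"    # your AWS_REGION
printf '%s.efs.%s.amazonaws.com\n' "$fs_id" "$region"
# prints: fs-3f55dfee.efs.ap-south-1.amazonaws.com
```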
Modifying RBAC
Modifying some permissions using Role-Based Access Control (RBAC).
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: ak-eks
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
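Binding the default service account to cluster-admin works, but it grants far more than the provisioner needs. A more narrowly scoped ClusterRole — a sketch based on the permissions this kind of external-storage provisioner typically requires — would look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
```

You would then bind this role (instead of cluster-admin) to the default service account in the ak-eks namespace.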
Creating Storage Class, and PVC
We will create a StorageClass that will back the PVCs required by our pods for storing data. A dynamic PV will be created for each claim, with the storage class providing the required storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-efs
provisioner: ak/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Creating a Secret
I have created one Secret for storing our database root password. I will not reveal the password itself, but I will show how to create a Secret for storing your password and other important credentials.
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  password: xxyyzz  # placeholder; the value under data: must be base64-encoded
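Note that values under data: in a Secret must be base64-encoded, not the raw password. For example, encoding the placeholder string used above:

```shell
#!/bin/sh
# Base64-encode a value for a Secret's data: field.
# 'xxyyzz' is just a placeholder here, not a real password.
printf '%s' 'xxyyzz' | base64
# prints: eHh5eXp6
```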
Deployment of Mysql (database pod)
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-efs
Deployment of WordPress(Webapp)
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wordpress-efs
- We use the AWS ELB service as the LoadBalancer service type so that our web app can connect to the outside world.
For running all the above code, you can use:-
kubectl create -f 'file_name'
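Alternatively, if the manifests are saved as separate files (the filenames below are just an assumption for illustration), they can all be applied together with a kustomization.yaml and `kubectl apply -k .` (requires kubectl 1.14 or later):

```yaml
# kustomization.yaml — filenames are hypothetical, adjust to your own
resources:
  - efs-provisioner.yml
  - rbac.yml
  - storage.yml
  - secret.yml
  - mysql.yml
  - wordpress.yml
```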
Now use the following command to check that all the resources have been created successfully:-
kubectl get all
- Using the AWS Load Balancer's DNS hostname, we can see our deployed webapp (WordPress).
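The hostname appears in the EXTERNAL-IP column of `kubectl get svc wordpress`. As a sketch (the service output below is hypothetical, for illustration only), it can be extracted like this:

```shell
#!/bin/sh
# Hypothetical sample of `kubectl get svc wordpress` output (illustrative only).
sample='NAME        TYPE           CLUSTER-IP      EXTERNAL-IP                             PORT(S)        AGE
wordpress   LoadBalancer   10.100.200.50   a1b2c3d4.ap-south-1.elb.amazonaws.com   80:31234/TCP   5m'
# Print the 4th column (EXTERNAL-IP) of the data row.
echo "$sample" | awk 'NR == 2 {print $4}'
# prints: a1b2c3d4.ap-south-1.elb.amazonaws.com
```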
- Finally, our webapp has been deployed on the AWS EKS cluster…
THANKS FOR READING !!!
A warm welcome to all suggestions and claps.
Github repository for further help in the code:-