WordPress Deployment on Kubernetes Using HELM Charts: A Step-by-Step Guide
Introduction:
In this article, we simplify the deployment process by introducing HELM charts: a powerful package manager for Kubernetes that automates the deployment and management of applications. You’ll learn how to deploy WordPress on a Kubernetes cluster using HELM, streamline your infrastructure, and leverage Kubernetes’ powerful orchestration capabilities. Whether you’re a DevOps engineer or a developer looking to improve your deployment workflows, this guide provides the tools and knowledge to deploy WordPress efficiently and with confidence.
Specifically, I will demonstrate the deployment of WordPress on an Amazon EKS (Elastic Kubernetes Service) cluster hosted on AWS Cloud. The deployment will follow these steps:
Step 1: We’ll use an AWS CloudFormation template to create essential infrastructure components such as VPCs, Subnets, Route Tables, and Security Groups.
Step 2: Next, we’ll set up the required IAM Roles for our EKS cluster.
Step 3: In this step, we’ll manually create the EKS cluster inside the VPC we provisioned with CloudFormation in Step 1.
Step 4: We’ll then create a Node Group within the EKS cluster to ensure we have compute resources available.
Step 5: On our local machine, we’ll install the necessary tools — AWS CLI, kubectl, and HELM.
Step 6: We’ll configure access to our EKS cluster from the local terminal using the tools installed in the previous step.
Step 7: In this step, we’ll explore how to add and install the Bitnami WordPress package from artifacthub.io to our local environment. We’ll also cover how to download the package if needed.
Step 8: We’ll install the AWS EBS CSI Driver to handle persistent storage for WordPress.
Step 9: We’ll create a Storage Class and use HELM charts to deploy WordPress onto the EKS cluster.
Step 10: Finally, we’ll access the deployed WordPress site.
By following these steps, you’ll learn how to efficiently deploy a highly available WordPress instance using Kubernetes and HELM on AWS, while leveraging cloud-native services and best practices.
Step01: Create the VPC with public and private subnets using the CloudFormation template
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Sample VPC - Private and Public subnets'

Parameters:
  VpcBlock:
    Type: String
    Default: 192.168.0.0/16
    Description: The CIDR range for the VPC. This should be a valid private (RFC 1918) CIDR range.
  PublicSubnet01Block:
    Type: String
    Default: 192.168.0.0/18
    Description: CidrBlock for public subnet 01 within the VPC
  PublicSubnet02Block:
    Type: String
    Default: 192.168.64.0/18
    Description: CidrBlock for public subnet 02 within the VPC
  PrivateSubnet01Block:
    Type: String
    Default: 192.168.128.0/18
    Description: CidrBlock for private subnet 01 within the VPC
  PrivateSubnet02Block:
    Type: String
    Default: 192.168.192.0/18
    Description: CidrBlock for private subnet 02 within the VPC

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: "Worker Network Configuration"
        Parameters:
          - VpcBlock
          - PublicSubnet01Block
          - PublicSubnet02Block
          - PrivateSubnet01Block
          - PrivateSubnet02Block

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcBlock
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-VPC'

  InternetGateway:
    Type: "AWS::EC2::InternetGateway"

  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Public Subnets
        - Key: Network
          Value: Public

  PrivateRouteTable01:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ1
        - Key: Network
          Value: Private01

  PrivateRouteTable02:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: Private Subnet AZ2
        - Key: Network
          Value: Private02

  PublicRoute:
    DependsOn: VPCGatewayAttachment
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PrivateRoute01:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway01
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable01
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway01

  PrivateRoute02:
    DependsOn:
      - VPCGatewayAttachment
      - NatGateway01
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable02
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway01

  NatGateway01:
    DependsOn:
      - NatGatewayEIP1
      - PublicSubnet01
      - VPCGatewayAttachment
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt 'NatGatewayEIP1.AllocationId'
      SubnetId: !Ref PublicSubnet01
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-NatGatewayAZ1'

  NatGatewayEIP1:
    DependsOn:
      - VPCGatewayAttachment
    Type: 'AWS::EC2::EIP'
    Properties:
      Domain: vpc

  PublicSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 01
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet01"
        - Key: kubernetes.io/role/elb
          Value: 1

  PublicSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 02
    Properties:
      MapPublicIpOnLaunch: true
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PublicSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PublicSubnet02"
        - Key: kubernetes.io/role/elb
          Value: 1

  PrivateSubnet01:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Subnet 03
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '0'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet01Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet01"
        - Key: kubernetes.io/role/internal-elb
          Value: 1

  PrivateSubnet02:
    Type: AWS::EC2::Subnet
    Metadata:
      Comment: Private Subnet 02
    Properties:
      AvailabilityZone:
        Fn::Select:
          - '1'
          - Fn::GetAZs:
              Ref: AWS::Region
      CidrBlock:
        Ref: PrivateSubnet02Block
      VpcId:
        Ref: VPC
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-PrivateSubnet02"
        - Key: kubernetes.io/role/internal-elb
          Value: 1

  PublicSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet01
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet02
      RouteTableId: !Ref PublicRouteTable

  PrivateSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet01
      RouteTableId: !Ref PrivateRouteTable01

  PrivateSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet02
      RouteTableId: !Ref PrivateRouteTable02

  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster communication with worker nodes
      VpcId: !Ref VPC

Outputs:
  SubnetIds:
    Description: Subnets IDs in the VPC
    Value: !Join [ ",", [ !Ref PublicSubnet01, !Ref PublicSubnet02, !Ref PrivateSubnet01, !Ref PrivateSubnet02 ] ]
  SecurityGroups:
    Description: Security group for the cluster control plane communication with worker nodes
    Value: !Join [ ",", [ !Ref ControlPlaneSecurityGroup ] ]
  VpcId:
    Description: The VPC Id
    Value: !Ref VPC
Step02: Now we will create the AWS IAM roles
- EKSserviceRole with the “AmazonEKSClusterPolicy” policy, assumed by the EKS control plane
- eksworkernoderole with the “AmazonEKSWorkerNodePolicy”, “AmazonEKS_CNI_Policy”, and “AmazonEC2ContainerRegistryReadOnly” policies, assumed by the worker nodes
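If you prefer the CLI over the console, the cluster service role can be sketched roughly as follows. This is a hedged sketch, not the article's exact console flow: the trust-policy file name is an assumption, and only the role names (EKSserviceRole) and the managed policy ARN come from the steps above.

```shell
# Trust policy letting the EKS control plane assume the role
# (eks-cluster-trust.json is an assumed file name).
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed cluster policy.
aws iam create-role \
  --role-name EKSserviceRole \
  --assume-role-policy-document file://eks-cluster-trust.json

aws iam attach-role-policy \
  --role-name EKSserviceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

The worker node role is created the same way, with `ec2.amazonaws.com` as the trusted principal and the three node policies attached instead.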
Step03: Now we will create the EKS cluster using the AWS Console in the newly created VPC (from Step01)
It takes approximately 10 to 15 minutes for the cluster to become active.
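The same cluster can also be created from the CLI. A rough sketch is below; the subnet, security-group, and account IDs are placeholders you would take from the Step01 stack outputs, while the region and cluster name match the ones used later in this article.

```shell
# Create the cluster in the Step01 VPC (placeholder IDs).
aws eks create-cluster \
  --region us-east-1 \
  --name dev-eks-cluster \
  --role-arn arn:aws:iam::<account-id>:role/EKSserviceRole \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

# Poll until the status changes from CREATING to ACTIVE (10-15 minutes).
aws eks describe-cluster --region us-east-1 --name dev-eks-cluster \
  --query 'cluster.status' --output text
```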
Step04: Creating the Node Group on the AWS EKS cluster
Click on “Review and Create”
Step05: Installation of “kubectl”, “AWS CLI” and “HELM” locally
- Kubectl installation: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
- AWS CLI installation: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- HELM installation: https://helm.sh/docs/intro/install/
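On a Linux machine, the three installations from the links above look roughly like this (a sketch; exact versions and steps may change, so check the linked pages for your platform):

```shell
# kubectl: latest stable release for linux/amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install

# HELM: official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify all three tools are on the PATH
kubectl version --client && aws --version && helm version
```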
Step06: Access the cluster locally using the terminal
- Log in to the AWS CLI with your access key ID and secret access key using the “aws configure” command
- Execute the command to access the cluster
aws eks --region us-east-1 update-kubeconfig --name dev-eks-cluster
- Now execute the command to check the number of nodes
kubectl get nodes
- There are 2 nodes available
Step07: Explore how to add and install the Bitnami WordPress package from artifacthub.io:
*I have downloaded the package locally and will then execute the following commands:
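The add/pull workflow for the Bitnami chart can be sketched as follows (repository URL and chart name as published on artifacthub.io):

```shell
# Register the Bitnami repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Either install straight from the repository...
# helm install wordpress bitnami/wordpress

# ...or download and unpack the chart locally, as done in this article
helm pull bitnami/wordpress --untar
cd wordpress
```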
Step08: Installation of “Amazon EBS CSI Driver”
On the AWS EKS cluster -> go to “Add-ons” -> search for “Amazon EBS CSI Driver” -> click on Install
Step09: Create the “storageclass” with provisioner “kubernetes.io/aws-ebs” and execute the “helm” install to deploy WordPress
Now we will create the StorageClass backed by EBS volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
  encrypted: "true"
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
Apply the storageclass.yaml file:
kubectl apply -f storageclass.yaml
Get the list of storage classes:
kubectl get storageclass
Now we have to mark this storage class as the default class:
kubectl patch storageclass ebs-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Now deploy the WordPress HELM chart from the unpacked chart directory:
$ helm install wordpress .
Once the “helm install wordpress” deployment completes, you should see output like the following:
Important commands to get the “WordPress URL” and “WordPress Admin URL”
Use the above commands to get the URLs and the password for the WordPress admin dashboard.
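For reference, the commands printed by the chart's release notes look roughly like this. This is a sketch that assumes the `default` namespace and the release name `wordpress` used in this article; your release notes are authoritative.

```shell
# WordPress URL: the LoadBalancer hostname of the wordpress Service
kubectl get svc --namespace default wordpress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# The admin URL is the same hostname with /wp-admin appended.

# Admin password: stored base64-encoded in the release Secret
kubectl get secret --namespace default wordpress \
  -o jsonpath='{.data.wordpress-password}' | base64 -d
```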
Here you can see (via kubectl get pvc) that both PVCs, “wordpress” and “mariadb”, have a status of Bound
In the AWS EBS console, we can see 2 new volumes:
Step10: Access the WordPress Admin Dashboard:
We have logged in successfully:
Created a new Post on the WordPress dashboard
Publish it and access it using the site URL
Now, if we delete both the “mariadb” and “wordpress” pods, they will come back automatically because of the Deployment and StatefulSet configurations.
Still, we can access our WordPress posts due to persistent volume claims.
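The self-healing behaviour described above can be exercised directly. The pod names below are illustrative (list yours with kubectl get pods first); the recreation is driven by the chart's Deployment and StatefulSet controllers.

```shell
# See the current pods (names will differ in your cluster)
kubectl get pods

# Delete the WordPress and MariaDB pods (illustrative names)...
kubectl delete pod wordpress-6b8f9c5d7-abcde wordpress-mariadb-0

# ...and watch the Deployment and StatefulSet recreate them
kubectl get pods --watch
```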
Clean up your cluster with the following commands:
- List the Helm releases: helm list
- Delete the release: helm uninstall wordpress
Conclusion
Deploying WordPress on a Kubernetes cluster using HELM charts on AWS EKS showcases the power and flexibility of containerized applications in a cloud-native environment. Through this step-by-step guide, we have successfully leveraged AWS services like CloudFormation, EKS, IAM, and EBS CSI drivers to build a scalable and resilient WordPress deployment.
By utilizing HELM charts, we’ve streamlined the deployment process, making it easier to manage complex configurations and ensuring smooth upgrades and rollbacks. This approach not only simplifies application deployment but also provides the reliability and scalability necessary for modern web applications.
Whether you’re managing a large-scale production environment or setting up a personal blog, this method allows you to take full advantage of Kubernetes and AWS, ensuring your WordPress site is both efficient and highly available. With the knowledge gained from this guide, you’re now equipped to explore more advanced Kubernetes features and further optimize your deployments.
I hope you enjoyed reading this article. Please feel free to contact me (Syedusmanahmad) if you have any questions.
Please feel free to write to engr.syedusmanahmad@gmail.com | LinkedIn: https://www.linkedin.com/in/engrusman-ahmad for any queries on AWS/DevOps, and stay tuned for the next write-up.