What Will You Do¶
For this exercise,
- You will use the Workload Wizard to configure and deploy a containerized application to a Kubernetes cluster
- You will use a container image from Docker Hub called "progrium/stress" that is specifically designed to automatically increase its resource demand, triggering "Horizontal Pod Autoscaling" thresholds.
- You will configure autoscaling thresholds for your container and watch additional pods being added once the thresholds are met.
Watch a 2-minute demo video for this exercise.
Assumptions¶
- You have already provisioned or imported one or more Kubernetes clusters using the Controller.
- Your clusters can connect to Docker Hub and pull images.
It is very common for applications to have variable loads throughout the day. A typical application will have periods of low and high utilization.
For example, a SaaS application is likely to have very high utilization from 9am to 5pm every day and very low utilization at night.
Kubernetes can be configured to automatically adjust the number of pods to handle increased demand.
In this exercise, you will configure the Kubernetes Horizontal Pod Autoscaler (HPA) to automatically scale the number of pods based on observed pod metrics.
Autoscaling is driven by the resource demand (for example, CPU utilization) configured for the container.
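Outside the wizard, the same behavior can be set up with kubectl's imperative autoscale command. A minimal sketch, assuming a hypothetical deployment named "my-app" and an illustrative 50% threshold (not the values used in this exercise):

```shell
# Create an HPA for a hypothetical deployment, scaling between 1 and 3
# replicas when average CPU utilization exceeds 50%
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=3

# Inspect the resulting HorizontalPodAutoscaler object
kubectl get hpa my-app
```

The Workload Wizard generates an equivalent HPA configuration for you behind the scenes.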
Step 1: Create Workload¶
- Log in to the web Console
- Click on "New Workload", provide a name and disable "Inbound Traffic"
- On the "Container" configuration tab, click on "New Container"
- Enter a "name" for the service
- Select "Public Docker Hub Registry"
- Enter "progrium/stress:latest" for the repository and tag
- Enter the arguments for the Container Image as in the screenshot below
- Select "micro" for the container size
- In Container Configuration, select "Autoscaling"
- Select "3" for Maximum Replicas
- Enter "30%" for the CPU Threshold
- Save Container Config and Go To Policies
With this configuration, the Kubernetes HPA will "add more pods" once observed CPU utilization crosses the configured threshold of "30%". We are also setting a "ceiling" on the maximum number of replicas to prevent uncontrolled, runaway autoscaling.
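For reference, these wizard settings correspond to a standard Kubernetes HorizontalPodAutoscaler resource. A hedged sketch using the autoscaling/v2 API, with illustrative resource names (the actual names are generated by the wizard):

```shell
# Illustrative only: an HPA targeting a hypothetical "stress-demo"
# deployment, scaling on average CPU utilization (autoscaling/v2 API)
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stress-demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stress-demo
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 30
EOF
```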
- Select a cluster location where you would like to deploy your workload
- Save and Go To Publish
Step 2: Publish Workload¶
Once the Controller completes validation of your workload configuration,
- Click on Publish.
- Once the process starts, click on Debug
In a few seconds, you should see one pod for "progrium/stress" start running.
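If you have kubectl access to the cluster, you can also confirm this from the command line (the namespace and pod name below are illustrative):

```shell
# List pods for the workload; expect one pod in Running state
kubectl get pods -n my-namespace

# Inspect details and recent events for the pod (name is illustrative)
kubectl describe pod stress-demo-xxxxx -n my-namespace
```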
Step 3: Monitor Autoscaling¶
The "Progrium/stress" pod will gradually increase its resource demand. Once the "30%" CPU utilization threshold is breached, additional pods will be automatically added.
As you can see from the screenshot below, two more pods have been added.
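With kubectl access, the same scale-out can be observed from the command line (the namespace is illustrative):

```shell
# Watch the HPA's observed CPU utilization and replica count change
kubectl get hpa -n my-namespace --watch

# In another terminal, watch additional pods appear as the threshold is breached
kubectl get pods -n my-namespace --watch
```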
Congratulations! You have successfully configured and tested horizontal pod autoscaling (HPA) on Kubernetes using the Workload Wizard.