Day-2
This is Part 2 of a multi-part, self-paced quick start exercise that focuses on day-two operations on your newly created cluster in your PaaS environment in Azure using Terraform.
What Will You Do
In Part 2, you will:
- Create a new nodegroup
- Upgrade the K8s version of your cluster's control plane and nodegroups
Step 1: Configure & Provision a Nodegroup
- Edit the `terraform.tfvars` file. The file location is `terraform/pas_terraform/aks/terraform.tfvars`. We will add a new nodegroup (`pool2`) to the `nodePools` section, which should look like the following once the new nodegroup has been added:
```hcl
nodePools = {
  "pool1" = {
    name       = "pool1"
    location   = "centralindia"
    count      = 1
    maxCount   = 3
    minCount   = 1
    mode       = "System"
    k8sVersion = "1.26.10"
    vmSize     = "Standard_DS2_v2"
  }
  "pool2" = {
    name       = "pool2"
    location   = "centralindia"
    count      = 1
    maxCount   = 3
    minCount   = 1
mode = "user"
k8sVersion = "1.26.10"
vmSize = "Standard_DS2_v2"
}
}
- Open the terminal or command line.
- Navigate to the `terraform/pas_terraform/aks` folder.
- Run `terraform apply`. Enter `yes` when prompted.
- The nodegroup will be added to the cluster and will be available within 10 minutes. A command line sketch of this workflow follows below.
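If you want to preview exactly what will change before answering `yes`, the following is a minimal sketch of the same workflow, assuming Terraform is installed and the credentials in `terraform.tfvars` are valid:

```bash
# Move into the AKS Terraform configuration
cd terraform/pas_terraform/aks

# Preview the change first; the plan should show only the new "pool2" node pool being added
terraform plan

# Apply the change; type "yes" at the confirmation prompt
terraform apply
```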
Step 2: Verify Nodegroup Provisioning
Once provisioning of the nodegroup is complete, you should have a new nodegroup with an additional node.
- Navigate to the node pools tab for the cluster. You should see a newly added node pool running in the cluster.
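You can also verify the new node pool from the command line. A minimal sketch, assuming the Azure CLI is logged in and `kubectl` is pointed at this cluster; the resource group and cluster name below are the example values from `terraform.tfvars`, so substitute your own:

```bash
# List the node pools attached to the cluster; pool1 and pool2 should both appear
az aks nodepool list \
  --resource-group dreta-private \
  --cluster-name cluster-name-changeme \
  -o table

# AKS labels each node with its pool name; the new pool's node should be Ready
kubectl get nodes -l agentpool=pool2
```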
Step 3: Perform Cluster Upgrade
We will now upgrade the K8s version of the control plane and nodes to a later release.
- Edit the `terraform.tfvars` file. The file location is `terraform/pas_terraform/aks/terraform.tfvars`. We will update the K8s version for the control plane and nodegroups to a later release. For this exercise, we started at 1.26 and are upgrading to 1.27:
```hcl
# Project name variable
project = "pas-gs-changeme-aks"

# Cloud Credentials specific variables
cloud_credentials_name = "rafay-cloud-credential"

# Specify the Azure subscription, tenant, and service principal info below for AKS.
subscription_id = ""
tenant_id       = ""
client_id       = ""
client_secret   = ""

# Cluster variables
cluster_name = "cluster-name-changeme"

# Cluster Location (centralindia)
cluster_location = "centralindia"

# Cluster Resource Group
cluster_resource_group = "dreta-private"

# K8S Version
k8s_version = "1.27.3"

# Nodepool specific variables
nodePools = {
  "pool1" = {
    name       = "pool1"
    location   = "centralindia"
    count      = 1
    maxCount   = 3
    minCount   = 1
    mode       = "System"
    k8sVersion = "1.27.3"
    vmSize     = "Standard_DS2_v2"
  },
  "pool2" = {
    name       = "pool2"
    location   = "centralindia"
    count      = 1
    maxCount   = 3
    minCount   = 1
    mode       = "User"
    k8sVersion = "1.27.3"
    vmSize     = "Standard_DS2_v2"
  }
}

# TAGS
cluster_tags = {
  "email"        = "email@rafay.co"
  "env"          = "dev"
  "orchestrator" = "rafay"
}
node_tags = {
  "env" = "dev"
}
node_labels = {
  "app"       = "infra"
  "dedicated" = "true"
}

# Blueprint/Addons specific variables
blueprint_name         = "custom-blueprint"
blueprint_version      = "v0"
base_blueprint         = "minimal"
base_blueprint_version = "2.2.0"
namespaces             = ["ingress-nginx", "cert-manager"]
infra_addons = {
  "addon1" = {
    name          = "cert-manager"
    namespace     = "cert-manager"
    addon_version = "v1.9.1"
    chart_name    = "cert-manager"
    chart_version = "v1.12.3"
    repository    = "cert-manager"
    file_path     = "file://../artifacts/cert-manager/custom_values.yaml"
    depends_on    = []
  }
  "addon2" = {
    name          = "ingress-nginx"
    namespace     = "ingress-nginx"
    addon_version = "v1.3.1"
    chart_name    = "ingress-nginx"
    chart_version = "4.2.5"
    repository    = "nginx-controller"
    file_path     = null
    depends_on    = ["cert-manager"]
  }
}

# Repository specific variables
public_repositories = {
  "nginx-controller" = {
    type     = "Helm"
    endpoint = "https://kubernetes.github.io/ingress-nginx"
  }
  "cert-manager" = {
    type     = "Helm"
    endpoint = "https://charts.jetstack.io"
  }
}

# Override config
overrides_config = {
  "ingress-nginx" = {
    override_addon_name = "ingress-nginx"
    override_values     = <<-EOT
      controller:
        service:
          annotations:
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    EOT
  }
}
```
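Before applying, it is worth confirming that the target release is actually offered in your cluster's region. A minimal sketch using the Azure CLI, assuming you are logged in:

```bash
# List the Kubernetes versions AKS currently offers in centralindia;
# 1.27.3 must appear in the output for the upgrade below to succeed
az aks get-versions --location centralindia -o table
```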
- Open the terminal or command line.
- Navigate to the `terraform/pas_terraform/aks` folder.
- Run `terraform apply`. Enter `yes` when prompted.
- The cluster should now show that it is upgrading.
- The upgrade process takes about 30-40 minutes to complete.
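While you wait, you can watch the upgrade progress from the command line. A minimal sketch, again assuming `kubectl` and the Azure CLI are configured for this cluster (substitute your actual cluster name):

```bash
# Watch nodes drain, upgrade, and rejoin; the VERSION column should move to v1.27.3
kubectl get nodes --watch

# Check the control plane version reported by Azure
az aks show \
  --resource-group dreta-private \
  --name cluster-name-changeme \
  --query kubernetesVersion -o tsv
```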
Step 4: Verify Cluster Upgrade
Once the cluster upgrade is complete, your control plane and nodes should be on a later release.
- Navigate to the cluster's Upgrade Jobs tab. You should see that the upgrade job has completed and that the cluster is now running the later K8s version.
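The same verification can be done with `kubectl`; every node, across both pools, should now report the new version:

```bash
# All nodes should show v1.27.3 in the VERSION column
kubectl get nodes -o wide
```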
Recap
Congratulations! At this point, you have
- Successfully configured and provisioned a new nodegroup for your AKS cluster
- Upgraded your cluster's control plane and nodegroups to a later K8s version