This is Part 1 of a multi-part, self-paced quick start exercise that will focus on provisioning an AKS cluster in Azure using the web console, RCTL CLI, or Terraform.
Cloud credentials provide the controller with privileges to programmatically interact with your Azure account so that it can manage the lifecycle of infrastructure associated with AKS clusters.
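If you do not yet have Azure credentials for the controller, a common approach is to create an Azure service principal with the Azure CLI. The sketch below is illustrative only; the service principal name and subscription ID are placeholders, and the appId, password, and tenant values in the command's output correspond to the client ID, client secret, and tenant ID you supply when creating the cloud credential.

    # Create a service principal with Contributor rights on the target subscription
    # (the name and subscription ID are placeholders)
    az ad sp create-for-rbac \
      --name aks-provisioning-sp \
      --role Contributor \
      --scopes /subscriptions/<subscription-id>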
Follow the step-by-step instructions to set up Azure and obtain the required credentials.
Follow the step-by-step instructions to create an Azure cloud credential on the controller.
Validate the newly created cloud credential to ensure it is configured correctly.
In this step, you will configure and customize your Azure AKS cluster using either the web console, the RCTL CLI with a YAML-based cluster specification, or Terraform with configuration files.
Select a method to provision and manage your AKS cluster from the tabs below.
Navigate to the previously created project in your Org
Select Infrastructure -> Clusters
Click "New Cluster"
Select "Create a New Cluster"
Click "Continue"
Select "Public Cloud"
Select "Azure"
Select "Azure AKS"
Enter a cluster name
Click "Continue"
Enter the "Resource Group" where the cluster will be created
Select the previously created "Cloud Credentials"
Select the Azure Region for the cluster
Select the K8S Version for the cluster
Select the "default-aks" blueprint
Click "Save Changes"
Click "Provision"
Provisioning will take approximately 10 minutes to complete. The final step in the process is the blueprint sync for the default blueprint. This can take a few minutes to complete because it requires downloading several container images and deploying monitoring and log aggregation components.
Save the below specification file to your computer as "aks-cluster-basic.yaml". Note that several values in the spec will need to be updated to match your environment, as described below.
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  # The name of the cluster
  name: aks-get-started-cluster
  # The name of the project the cluster will be created in
  project: defaultproject
spec:
  blueprintConfig:
    # The name of the blueprint the cluster will use
    name: default-aks
  # The name of the cloud credential that will be used to create the cluster
  cloudCredentials: azure-cc
  config:
    kind: aksClusterConfig
    metadata:
      # The name of the cluster
      name: aks-get-started-cluster
    spec:
      managedCluster:
        apiVersion: "2022-07-01"
        identity:
          # The identity type the AKS cluster will use to access Azure resources
          type: SystemAssigned
        # The Azure geo-location where the resources will reside
        location: centralindia
        properties:
          apiServerAccessProfile:
            # Make network traffic between the API server and node pools on a private network
            enablePrivateCluster: true
          # DNS name prefix of the Kubernetes API server FQDN
          dnsPrefix: aks-get-started-cluster-dns
          # The Kubernetes version that will be installed on the cluster
          kubernetesVersion: 1.29.4
          networkProfile:
            loadBalancerSku: standard
            # Network plugin used for building the Kubernetes network. Valid values are azure, kubenet, none
            networkPlugin: kubenet
        sku:
          # The name of a managed cluster SKU
          name: Basic
          # If not specified, the default is Free. See uptime SLA for more details. Valid values are Paid, Free
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
        - apiVersion: "2022-07-01"
          # The Azure geo-location where the node pools will reside
          location: centralindia
          # The name of the node pool
          name: primary
          properties:
            # The desired number of nodes that can run in the node pool
            count: 1
            # Whether to enable auto-scaler
            enableAutoScaling: true
            # The maximum number of nodes that can run in the node pool
            maxCount: 1
            # The maximum number of pods that can run on a node
            maxPods: 40
            # The minimum number of nodes that can run in the node pool
            minCount: 1
            mode: System
            # The kubernetes version that will run on the node pool
            orchestratorVersion: 1.29.4
            # The operating system type that the nodes in the node pool will run
            osType: Linux
            # Valid values are VirtualMachineScaleSets, AvailabilitySet
            type: VirtualMachineScaleSets
            # The size of the VMs that the nodes will run on
            vmSize: Standard_DS2_v2
          type: Microsoft.ContainerService/managedClusters/agentPools
      # The resource group where the cluster will be created
      resourceGroupName: Resource-Group
  proxyConfig: {}
  type: aks
Update the following sections of the specification file with details to match your environment
Update the name and project sections with the name of the cluster and the name of the project in your organization
Update the identity type to use either a System-assigned or User-assigned managed identity
Below is the configuration needed for a System-assigned managed identity, in which Azure automatically creates an identity for the AKS cluster and assigns it to the underlying Azure resources.
identity:
  # The identity type the AKS cluster will use to access Azure resources
  type: SystemAssigned
Below is a sample configuration needed for a User-assigned managed identity. The user identity will need to be updated to match an identity in your environment. With user-assigned managed identities, you have the flexibility to create and manage identities independently from the AKS cluster. These identities can then be associated with one or more AKS clusters, enabling seamless identity reuse across multiple AKS clusters or other Azure resources.
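A minimal sketch of the corresponding identity block is shown below, assuming the AKS "2022-07-01" API shape used in the spec above; the subscription ID, resource group, and identity name are placeholders that must be replaced with the resource ID of a user-assigned managed identity in your environment.

    identity:
      type: UserAssigned
      userAssignedIdentities:
        # Placeholder - replace with the full resource ID of your user-assigned managed identity
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>": {}

Once the specification file has been updated, apply it to the controller with the RCTL CLI. The command below is a sketch and assumes RCTL is already installed and initialized for your Org and project:

    rctl apply -f aks-cluster-basic.yaml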
Log in to the web console and view the cluster being provisioned
Provisioning will take approximately 10 minutes to complete. The final step in the process is the blueprint sync for the default blueprint. This can take a few minutes to complete because it requires downloading several container images and deploying monitoring and log aggregation components.
Once the cluster finishes provisioning, download the cluster configuration file and compare it to the specification file used to create the cluster. The two files should match.
Go to Infrastructure -> Clusters.
Click on the Settings Icon for the newly created cluster and select "Download Cluster Config"
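To compare the two files, a simple diff is sufficient; the downloaded file name below is illustrative and will vary based on your cluster name.

    # Compare the applied spec with the downloaded cluster config (file names are examples)
    diff aks-cluster-basic.yaml aks-get-started-cluster-config.yaml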
Make sure the following are installed or available.
The Kubernetes management operator automatically deployed on the cluster by the controller maintains a heartbeat with the controller and proactively monitors the status of the components on the worker nodes required for communication with the control plane and the controller.
Cluster reachability (the time since the last heartbeat) should be no more than 1 minute.
Your AKS Cluster's API Server is private and secure (i.e. cloaked and not directly reachable on the Internet). The controller provides a zero trust kubectl channel for authorized users.
Click the "Kubectl" button on the cluster card.
This will launch a web-based kubectl shell that lets you securely interact with the API server over a zero trust channel.
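For example, a few standard kubectl commands can be run in the shell to verify that the zero trust channel is working and the cluster is healthy:

    # List the worker nodes and all pods running on the cluster
    kubectl get nodes -o wide
    kubectl get pods -A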
The default cluster blueprint automatically deploys Prometheus and other components required to monitor the AKS cluster. This data is aggregated from the cluster by the controller and stored in a central time-series database, where it is made available to administrators in the form of detailed dashboards.
The console also includes an integrated Kubernetes dashboard. Click on "Resources" and you will be presented with all of the Kubernetes resources, organized using a number of filters.