
Steps

On public clouds such as AWS, the controller can automatically provision and configure the required infrastructure.


Step 1: Create Cloud Credentials

With the Auto Provisioning process, the controller programmatically provisions and configures the required infrastructure on AWS. To do this, the controller must be configured with credentials that allow it to create, configure and decommission infrastructure on the cloud provider.

The creation of a "cloud credential" is a one-time task; the credential can be reused to create clusters in the future. The credentials in the provider profile are stored encrypted in the Controller. Please review AWS Credentials for additional instructions on how to configure and download the service account credentials for AWS.
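If you prefer to prepare the programmatic credential from the command line, a minimal sketch with the AWS CLI looks like the following. The user name is a placeholder, and the exact IAM permissions the controller needs are listed in the AWS Credentials documentation:

```shell
# Create a dedicated IAM user for the controller (name is illustrative)
aws iam create-user --user-name controller-provisioner

# Attach a policy granting the EC2/VPC/IAM permissions the controller
# requires; the policy ARN below is a placeholder for a policy you
# create per the AWS Credentials documentation.
# aws iam attach-user-policy --user-name controller-provisioner \
#   --policy-arn arn:aws:iam::123456789012:policy/controller-policy

# Generate the access key / secret key pair to enter in the Controller
aws iam create-access-key --user-name controller-provisioner
```

The access key ID and secret access key returned by the last command are the values you supply when creating the cloud credential in the Controller.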


Step 2: Cluster Configuration

In this step, you will configure and create a cluster object in the Controller for auto provisioning an upstream k8s cluster in your AWS account using the EC2 Credential created in Step 1 above.

As an Org Admin or Infrastructure Admin for a Project

  • Log into the Web Console and go to Infrastructure > Clusters.
  • Click "New Cluster".
  • Select the "Create a New Cluster" option.
  • Click "Continue" to go to the next configuration page.

New EC2 Cluster

  • Select "Public Cloud" for Environment.
  • Select "AWS" option for Cloud provider
  • And select "Upstream Kubernetes" option for Kubernetes Distribution
  • Provide a name for your cluster (the use of underscore is not allowed in the name)
  • Provide an optional description for the cluster
  • Click "Continue" to go to the next configuration page

New EC2 Cluster

  • In General settings, select the cluster blueprint from the "Blueprint" drop-down.
  • Select the Kubernetes version from the "K8s Version" drop-down.
  • Select the EC2 credential created in Step 1 above from the "Cloud Credentials" drop-down.
  • Select the EC2 region from the "Region" drop-down.
  • Select the instance type from the "Instance Type" drop-down.

New EC2 Cluster

  • In the "Advanced" settings, enable "High Availability (Multi Master)" if you would like to provision a multi-master cluster.
  • Select "Install GPU Drivers" if the selected EC2 instance type has a GPU.
  • Click "Continue" to create the cluster.

New EC2 Cluster

NOTE: For auto provisioned clusters, the controller automatically programs the cluster with the "region" metadata based on information from the selected region.


Step 3: Cluster Provisioning

At this point, the controller has everything it needs to provision the cluster, test it and make it available for workloads.

Note that manual intervention is NOT REQUIRED unless there is an error or issue to deal with. The end-to-end process takes ~15 minutes before the cluster is ready for workloads.

New EC2 Cluster

Click "Provision" button to start the upstreamk8s cluster creation in your AWS account using the AWS credential you provided during the cluster configuration step. The end-to-end process comprises two distinct steps.


Infra Creation

In this step, the controller uses the provider profile to programmatically create the infrastructure in the selected AWS region with the provided specifications.

Depending on the selected region, the creation and configuration of infrastructure can take ~5 minutes. Behind the scenes, the controller automatically creates and configures the following:

  1. VPC
  2. Roles
  3. Elastic IPs
  4. Security Groups
  5. Internet Gateway
  6. NAT Gateway
  7. Subnet Routes
  8. SSH Keys
  9. Instances
  10. Volumes
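The resources above can be inspected after provisioning with the AWS CLI. A sketch, assuming the controller tags resources with a Name tag derived from the cluster name (the tag key and value pattern used here are assumptions):

```shell
CLUSTER=my-cluster   # placeholder cluster name

# List VPCs carrying the assumed Name tag
aws ec2 describe-vpcs --filters "Name=tag:Name,Values=${CLUSTER}*" \
  --query 'Vpcs[].VpcId'

# List instances carrying the assumed Name tag
aws ec2 describe-instances --filters "Name=tag:Name,Values=${CLUSTER}*" \
  --query 'Reservations[].Instances[].InstanceId'
```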

If issues are encountered during the infrastructure creation step, everything is rolled back and a suitable error message is presented to the user. Review the "Common Issues" section below for details.

New EC2 Cluster


Software Provisioning

Once the necessary infrastructure is successfully created and configured, the workflow automatically transitions to the next step: software provisioning.

Required software components are automatically downloaded, deployed and tested on the individual cluster nodes.

New EC2 Cluster

Once this step is complete, the automated cluster provisioning is finished: the cluster automatically transitions to a "READY" state and can accept workloads.
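Once the cluster reports READY, you can sanity-check it from any machine with a valid kubeconfig for the cluster, for example:

```shell
# All nodes should report Ready once provisioning completes
kubectl get nodes -o wide

# Core components should be Running
kubectl get pods -n kube-system
```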

Auto Provision Cluster


Storage

This is a critical step if your containerized application requires storage.

For upstream Kubernetes clusters on AWS EC2, the CSI driver for Amazon EBS is seamlessly configured and deployed. This CSI driver creates EBS volumes for PVCs and attaches them to nodes.

NOTE: AWS EBS volumes are bound to an Availability Zone (AZ), and so are the PVCs backed by them.
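As an illustration, a PVC such as the following will be served by the EBS CSI driver. The StorageClass name "gp2" is an assumption and should match a StorageClass present in your cluster; because EBS volumes are AZ-bound, a StorageClass with volumeBindingMode WaitForFirstConsumer helps ensure the volume is created in the same AZ as the pod that consumes it.

```shell
# Create a PVC backed by an EBS volume (StorageClass name is assumed)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2
  resources:
    requests:
      storage: 4Gi
EOF
```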

To support EBS volume creation, the AWS CSI driver expects every node to have an attached "IAM instance profile" that grants the node the permissions needed for EBS volume operations.

All the steps captured below are automatically performed for auto-provisioned clusters.

Step 1: Create a Trust Policy JSON file "ec2-trust-policy.json"

$ cat ec2-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
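Before using the trust policy, a quick local sanity check can confirm the file is valid JSON and allows EC2 to assume the role (this sketch re-creates the file shown above and validates it with Python):

```shell
# Write the trust policy shown above to a file
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF

# Validate the JSON and the principal/action before calling the AWS API
python3 - <<'PY'
import json
doc = json.load(open("ec2-trust-policy.json"))
assert doc["Statement"]["Principal"]["Service"] == "ec2.amazonaws.com"
assert doc["Statement"]["Action"] == "sts:AssumeRole"
print("trust policy OK")
PY
```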

Step 2: Create a Role Permissions JSON File "k8s-worker-role-permissions.json"

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteSnapshot",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}

Step 3: Create Role Named "k8s-worker-role" from "ec2-trust-policy.json"

$ aws iam create-role --role-name k8s-worker-role --assume-role-policy-document file://ec2-trust-policy.json

Step 4: Attach Role with Permissions in File "k8s-worker-role-permissions.json"

$ aws iam put-role-policy --role-name k8s-worker-role --policy-name permissions-policy-For-k8s-worker --policy-document file://k8s-worker-role-permissions.json

Step 5: Create instance profile named k8s-worker-profile

$ aws iam create-instance-profile --instance-profile-name k8s-worker-profile

Step 6: Add Role k8s-worker-role to instance profile k8s-worker-profile

$ aws iam add-role-to-instance-profile --role-name k8s-worker-role --instance-profile-name k8s-worker-profile

Step 7: Attach the instance profile to the EC2 instance. For example:

$ aws ec2 associate-iam-instance-profile --iam-instance-profile Name=k8s-worker-profile --instance-id i-04cfe32fb51a26fa9
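You can then verify that the association took effect, for example (using the instance ID from the step above):

```shell
# Confirm the instance profile is associated with the instance
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-04cfe32fb51a26fa9
```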

Step 4: Cluster De-Provisioning

If you wish to deprovision the auto-provisioned upstream Kubernetes cluster in your AWS account, follow the steps below to delete the cluster:

  • Click on the Options icon (i.e. gear) on the far right of the selected cluster.
  • Select Delete to remove the cluster object from the Controller.

The controller will automatically delete all resources created during the provisioning in your AWS account.

Delete Cluster in Controller


Common Issues

Exhaustion of Elastic IPs

Elastic IPs are a limited resource provided by AWS. If the AWS account has hit the upper limit for Elastic IPs, the automated infrastructure creation workflow will fail.

AWS requires customers to submit a request for additional Elastic IPs. See the AWS documentation on Elastic IP limits for more information.
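Before provisioning, you can check how many Elastic IPs are already allocated in the target region, for example (the region is a placeholder):

```shell
# Number of Elastic IPs currently allocated in the region
aws ec2 describe-addresses --region us-west-2 \
  --query 'length(Addresses)'
```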

Exhaustion of VPCs

AWS limits the default number of VPCs per region to 5 (five). The automated provisioning process will fail if this limit is encountered.

Customers can request additional VPCs by submitting a service quota increase request. See the AWS documentation on VPC limits for more information.
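Similarly, you can check how many VPCs already exist in the target region before provisioning, for example (the region is a placeholder):

```shell
# Number of VPCs in the region (the default limit is 5)
aws ec2 describe-vpcs --region us-west-2 --query 'length(Vpcs)'
```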

Instance Launch Issues

Instance launches can fail if AWS does not have sufficient on-demand instance capacity or if you have reached the limit on the number of instances you can launch in a region. See the AWS documentation on EC2 instance limits for more information.