
Use Velero

What Will You Do

In this exercise,

  • You will perform a common operation used during "disaster recovery" or "workload migration across clusters"
  • You will create a backup of the PVC for a workload deployed to a Rafay managed cluster that has a custom blueprint with a velero addon.
  • You will then restore the backup to another cluster
  • Finally, you will verify that the backup data is available in the new cluster.

Assumptions

  • You have already provisioned or imported a Kubernetes cluster using Rafay.
  • You have successfully published a velero addon based cluster blueprint to your cluster.
  • You have access to AWS S3 or MinIO (S3 Compatible Storage)
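
The Velero add-on in your cluster blueprint typically configures the object store for you. For reference, a backup storage location in Velero is just a BackupStorageLocation resource in the velero namespace. The following is a minimal sketch for an S3 compatible store such as MinIO, with an illustrative bucket name, region, and endpoint; note that the Backup in Step 6 below references a storage location named "aws".

apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: aws                 # referenced by storageLocation in the Backup spec
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups  # illustrative bucket name
  config:
    region: us-west-2       # illustrative region
    # For MinIO, additionally set:
    # s3ForcePathStyle: "true"
    # s3Url: http://minio.example.com:9000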

For this example, we will use a wordpress application.


Step 1: Download Helm chart

Use your Helm client to download the latest release of the WordPress Helm chart (wordpress-x.y.z.tgz) to your machine. In this recipe, we used version 9.0.3 of the wordpress chart.

  • Add stable repo to your Helm CLI
helm repo add stable https://kubernetes-charts.storage.googleapis.com
  • Now, fetch the latest Helm chart from this repo.
helm fetch stable/wordpress
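
If you want to pin the exact chart version used in this recipe instead of pulling the latest, helm fetch accepts a --version flag (repo name as added above):

helm repo update
helm fetch stable/wordpress --version 9.0.3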

Step 2: Customize Values

In this step, we will create a custom values file with overrides for our WordPress deployment. Copy the following YAML document into a file called "wordpress-custom-values.yaml".

wordpressUsername: user

## Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: https://github.com/bitnami/bitnami-docker-wordpress#environment-variables
##
wordpressPassword: "demo!23"

service:
  type: ClusterIP
  ## HTTP Port
  ##
  port: 80
  ## HTTPS Port
  ##
  httpsPort: 443
  ## HTTPS Target Port
  ## defaults to https unless overridden to the specified port.
  ## if you want the target port to be "http" or "80" you can specify that here.
  ##
  httpsTargetPort: https
  ## Metrics Port
  ##
  metricsPort: 9117

ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true
  hostname: wordpress.dev.rafay-edge.net
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-km-demo
  tls:
    - secretName: wordpress-tls
      hosts:
        - wordpress.dev.rafay-edge.net

Note

When running on an on-prem cluster, or wherever volume snapshots are not available, Velero backs up the volume contents themselves. For this, we need to add annotations to the pods indicating which volumes need to be backed up. For example, for WordPress, we need to add the following annotation.

podAnnotations:
  backup.velero.io/backup-volumes: wordpress-data

For MariaDB, we need to add the following annotation.

master:
  annotations:
    backup.velero.io/backup-volumes: data,config
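
Before publishing anything, you can optionally render the chart locally to confirm that your overrides (including any backup annotations) are applied. A quick check, assuming Helm 3 and the chart archive downloaded in Step 1 (the filename will vary with the chart version):

helm template wordpress ./wordpress-9.0.3.tgz -f wordpress-custom-values.yaml > rendered.yaml
grep -B2 -A1 "backup.velero.io/backup-volumes" rendered.yaml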


Step 3: Create Workload

  • Login into the Web Console and navigate to your Project as an Org Admin or Project Admin
  • Under Infrastructure (or Applications if accessed with Project Admin role), select "Namespaces" and create a new namespace called "wordpress"
  • Go to Applications > Workloads
  • Select "New Workload" to create a new workload called "wordpress"
  • Ensure that you select "Helm" for Package Type and select the namespace as "wordpress"
  • Click CONTINUE to next step
  • Upload the WordPress Helm chart wordpress-x.y.z.tgz under Helm > Choose File
  • Upload the wordpress-custom-values.yaml file created in the previous step under Values.yaml > Choose File
  • Save and Go to Placement
  • In the Placement step, select the cluster(s) to which you would like to deploy WordPress
  • Publish the Wordpress workload to the selected cluster(s)

Step 4: Verify Deployment

You can optionally verify whether the correct resources have been created on the cluster.

  • First, we will verify the WordPress and MariaDB pod status
kubectl get po -n wordpress
NAME                           READY   STATUS    RESTARTS   AGE
wordpress-59dfcd85cd-vlf9p     1/1     Running   0          3m19s
wordpress-mariadb-0            1/1     Running   0          3m19s
  • Then, we will verify the WordPress and MariaDB persistent volume claim status
kubectl get pvc -n wordpress
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-wordpress-mariadb-0   Bound    pvc-56e16e34-977f-4917-8a1f-8bb87185909e   8Gi        RWO            gp2            3m23s
wordpress                  Bound    pvc-bc4bc952-eb1b-492a-9949-1c4f4719436f   10Gi       RWO            gp2            3m23s
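
Since Step 6 depends on the Velero add-on, it can also help to confirm that Velero itself is healthy and that a backup storage location exists. This assumes the add-on installs Velero into the "velero" namespace, as in this recipe:

kubectl get pods -n velero
kubectl get backupstoragelocations -n velero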

Step 5: Use Workload

Access the WordPress application and publish a blog post. We will use this content later to verify that the restore was successful.

Publish Blog To Wordpress


Step 6: Perform Backup

In this step, we will back up only the wordpress namespace. Copy the following YAML document into a file called "wordpress-backup.yaml".

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: wordpress-backup
  namespace: velero # must match the velero server namespace
spec:
  includedNamespaces:
  - wordpress
  ttl: 24h0m0s # default 720h0m0s
  storageLocation: aws # backup storage location
  volumeSnapshotLocations:
  - aws
  • Create a workload (type: k8s yaml)
  • Upload the wordpress-backup.yaml file
  • Select your cluster and publish it

Once the workload is published, the "wordpress" namespace will be backed up automatically, including its volume snapshots.

kubectl get backup -n velero

NAME                                     AGE
wordpress-backup                         9s
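
The backup's status.phase field tells you whether it is still in progress or has completed; you can query it directly with a standard jsonpath expression:

kubectl get backup wordpress-backup -n velero -o jsonpath='{.status.phase}'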

For the full details, describe the backup and confirm that the phase is Completed and that the volume snapshots finished.

kubectl describe backup wordpress-backup -n velero
Name:         wordpress-backup
Namespace:    velero
Labels:       rep-organization=5m18rky
              rep-partner=rx28oml
              rep-project=z24wnmy
              rep-workload=wordpress-backup
              velero.io/storage-location=aws
Annotations:  rafay.dev/original:
                {"kind":"Backup","spec":{"ttl":"24h0m0s","storageLocation":"aws","includedNamespaces":["wordpress"],"volumeSnapshotLocations":["aws"]},"me...
              rafay.dev/ownerRef:
                {"apiVersion":"cluster.rafay.dev/v2","kind":"Tasklet","name":"wordpress-backup","uid":"f3683f1a-295f-4e29-b7d9-5738b1de78c1","co...
              velero.io/source-cluster-k8s-gitversion: v1.16.8-eks-e16311
              velero.io/source-cluster-k8s-major-version: 1
              velero.io/source-cluster-k8s-minor-version: 16+
API Version:  velero.io/v1
Kind:         Backup
Metadata:
  Creation Timestamp:  2020-08-20T05:41:31Z
  Generation:          3
  Resource Version:    9532147
  Self Link:           /apis/velero.io/v1/namespaces/velero/backups/wordpress-backup
  UID:                 7a44a6a1-b302-458d-82bb-67a5231acbe1
Spec:
  Included Namespaces:
    wordpress
  Storage Location:  aws
  Ttl:               24h0m0s
  Volume Snapshot Locations:
    aws
Status:
  Completion Timestamp:        2020-08-20T05:41:40Z
  Expiration:                  2020-08-21T05:41:31Z
  Phase:                       Completed
  Start Timestamp:             2020-08-20T05:41:31Z
  Version:                     1
  Volume Snapshots Attempted:  2
  Volume Snapshots Completed:  2
Events:                        <none>
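
The Backup above is a one-off. If you also want recurring backups, Velero provides a Schedule resource whose template section takes the same fields as the Backup spec; a minimal sketch (name, cron expression, and TTL are illustrative) that could be published as another k8s yaml workload:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: wordpress-daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"       # example: daily at 02:00
  template:
    includedNamespaces:
    - wordpress
    ttl: 72h0m0s
    storageLocation: aws
    volumeSnapshotLocations:
    - aws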

Step 7: Simulate Disaster

To simulate a disaster, delete the wordpress namespace and everything in it.

kubectl delete ns wordpress

At this point, the wordpress application is no longer accessible.
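
You can confirm that the namespace and its PVCs are really gone before restoring; both commands should report that the resources are not found:

kubectl get ns wordpress
kubectl get pvc -n wordpress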


Step 8: Restore from Backup

Now let us restore the application using the backup taken in Step 6. Copy the following YAML document into a file called "wordpress-restore.yaml".

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: wordpress-restore
  namespace: velero
spec:
  backupName: wordpress-backup
  • Create a workload (type: k8s yaml)
  • Upload the wordpress-restore.yaml file
  • Select a cluster and publish it.
  • Once the workload is published, you will notice that the "wordpress" namespace is restored, including the PVCs.
kubectl get restore -n velero

NAME                AGE
wordpress-restore   8s

Let's verify the status of our restore.

kubectl describe restore wordpress-restore -n velero
Name:         wordpress-restore
Namespace:    velero
Labels:       rep-organization=5m18rky
              rep-partner=rx28oml
              rep-project=z24wnmy
              rep-workload=wordpress-restore
Annotations:  rafay.dev/original:
                {"kind":"Restore","spec":{"backupName":"wordpress-backup"},"metadata":{"name":"wordpress-restore","labels":{"rep-partner":"rx28o...
              rafay.dev/ownerRef:
                {"apiVersion":"cluster.rafay.dev/v2","kind":"Tasklet","name":"wordpress-restore","uid":"aa3a393f-6eec-414c-aef0-b546f480d509","c...
API Version:  velero.io/v1
Kind:         Restore
Metadata:
  Creation Timestamp:  2020-08-20T06:06:48Z
  Generation:          3
  Resource Version:    9538319
  Self Link:           /apis/velero.io/v1/namespaces/velero/restores/wordpress-restore
  UID:                 26b1ada7-870c-4094-ba28-f60d01f0f2da
Spec:
  Backup Name:  wordpress-backup
  Excluded Resources:
    nodes
    events
    events.events.k8s.io
    backups.velero.io
    restores.velero.io
    resticrepositories.velero.io
Status:
  Phase:  Completed
Events:   <none>
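
As a side note, the Restore spec supports more than a straight restore. For example, if the backed up namespace needs to land under a different name on the target cluster, you can add a namespaceMapping; a hedged sketch with illustrative names:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: wordpress-restore-renamed
  namespace: velero
spec:
  backupName: wordpress-backup
  namespaceMapping:
    wordpress: wordpress-restored   # original namespace -> new namespace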

Now let us verify that our application was restored correctly.

kubectl get pvc -n wordpress

NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-wordpress-mariadb-0   Bound    pvc-56e16e34-977f-4917-8a1f-8bb87185909e   8Gi        RWO            gp2            25s
wordpress                  Bound    pvc-bc4bc952-eb1b-492a-9949-1c4f4719436f   10Gi       RWO            gp2            25s
kubectl get po -n wordpress

NAME                           READY   STATUS    RESTARTS   AGE
wordpress-59dfcd85cd-vlf9p     1/1     Running   0          1m34s
wordpress-mariadb-0            1/1     Running   0          1m34s

When you access the WordPress application now, you will notice that the content we published earlier is still there.

Published Blog after Disaster


Recap

Congratulations! You successfully backed up an application and restored it, just as you would during a disaster recovery.