Expand Storage

In this part, you will attach additional raw storage devices to the storage nodes to expand the usable storage capacity of the Rook Ceph cluster.


Step 1: Confirm Attached Storage

First, you will confirm the storage devices attached to your storage nodes.

  • Open a shell within each storage node in the cluster
  • Execute the following command to list all block devices

lsblk -a
In the output below, you can see one 1TB device named 'sdb' that is already in use by the Ceph cluster.

root@rook1-2:/home/ubuntu# lsblk -a
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0  55.7M  1 loop /snap/core18/2823
loop1     7:1    0  55.7M  1 loop /snap/core18/2829
loop2     7:2    0  63.9M  1 loop /snap/core20/2318
loop3     7:3    0    87M  1 loop /snap/lxd/28373
loop4     7:4    0  77.3M  1 loop /snap/oracle-cloud-agent/72
loop5     7:5    0  38.8M  1 loop /snap/snapd/21759
loop6     7:6    0     0B  0 loop
loop7     7:7    0     0B  0 loop
sda       8:0    0  46.6G  0 disk
├─sda1    8:1    0  46.5G  0 part /var/lib/kubelet/pods/dafb3962-637b-4503-b0b6-83dc386a30a4/volume-subpaths/tigera-ca-bundle/calico-node/1
│                                 /var/lib/kubelet/pods/53e89d38-ee87-46c6-b83f-95a258013a11/volume-subpaths/tigera-ca-bundle/calico-typha/1
│                                 /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi
sdb       8:16   0     1T  0 disk
├─sdb1    8:17   0    56G  0 part
└─sdb3    8:19   0 795.2G  0 part
nbd0     43:0    0     0B  0 disk
nbd1     43:32   0     0B  0 disk
nbd2     43:64   0     0B  0 disk
nbd3     43:96   0     0B  0 disk
nbd4     43:128  0     0B  0 disk
nbd5     43:160  0     0B  0 disk
nbd6     43:192  0     0B  0 disk
nbd7     43:224  0     0B  0 disk
nbd8     43:256  0     0B  0 disk
nbd9     43:288  0     0B  0 disk
nbd10    43:320  0     0B  0 disk
nbd11    43:352  0     0B  0 disk
nbd12    43:384  0     0B  0 disk
nbd13    43:416  0     0B  0 disk
nbd14    43:448  0     0B  0 disk
nbd15    43:480  0     0B  0 disk
  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click the cluster name on the cluster card
  • Click the "Resources" tab
  • Select "Pods" in the left hand pane
  • Select "rook-ceph" from the "Namespace" dropdown
  • Enter "rook-ceph-tools" into the search box
  • Click the "Actions" button
  • Select "Shell and Logs"
  • Click the "Exec" icon to open a shell into the container
  • Enter the following command in the shell to check the status of the Ceph cluster
ceph status

You will see that the storage capacity of the cluster is currently 3.0 TiB.
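
If you prefer to stay on the command line, you can reach the same toolbox shell with kubectl instead of the console. This is a minimal sketch, assuming the toolbox runs under its default Deployment name, rook-ceph-tools:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status

Running ceph df in the same shell breaks the raw capacity down by pool.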

Validate Initial Storage


Step 2: Add Storage Device

Next, you will add a raw storage device to the storage nodes.

  • Add a raw storage device to the storage nodes
  • Open a shell within each storage node in the cluster
  • Execute the following command to list all block devices

lsblk -a
In the output below, you can see a second 1TB device named 'sdc' that was just added.

root@rook1-2:/home/ubuntu# lsblk -a
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0  55.7M  1 loop /snap/core18/2823
loop1     7:1    0  55.7M  1 loop /snap/core18/2829
loop2     7:2    0  63.9M  1 loop /snap/core20/2318
loop3     7:3    0    87M  1 loop /snap/lxd/28373
loop4     7:4    0  77.3M  1 loop /snap/oracle-cloud-agent/72
loop5     7:5    0  38.8M  1 loop /snap/snapd/21759
loop6     7:6    0     0B  0 loop
loop7     7:7    0     0B  0 loop
sda       8:0    0  46.6G  0 disk
├─sda1    8:1    0  46.5G  0 part /var/lib/kubelet/pods/dafb3962-637b-4503-b0b6-83dc386a30a4/volume-subpaths/tigera-ca-bundle/calico-node/1
│                                 /var/lib/kubelet/pods/53e89d38-ee87-46c6-b83f-95a258013a11/volume-subpaths/tigera-ca-bundle/calico-typha/1
│                                 /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi
sdb       8:16   0     1T  0 disk
├─sdb1    8:17   0    56G  0 part
└─sdb3    8:19   0 795.2G  0 part
sdc       8:32   0     1T  0 disk
nbd0     43:0    0     0B  0 disk
nbd1     43:32   0     0B  0 disk
nbd2     43:64   0     0B  0 disk
nbd3     43:96   0     0B  0 disk
nbd4     43:128  0     0B  0 disk
nbd5     43:160  0     0B  0 disk
nbd6     43:192  0     0B  0 disk
nbd7     43:224  0     0B  0 disk
nbd8     43:256  0     0B  0 disk
nbd9     43:288  0     0B  0 disk
nbd10    43:320  0     0B  0 disk
nbd11    43:352  0     0B  0 disk
nbd12    43:384  0     0B  0 disk
nbd13    43:416  0     0B  0 disk
nbd14    43:448  0     0B  0 disk
nbd15    43:480  0     0B  0 disk
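
Rook will only provision OSDs on devices that are empty, that is, with no filesystem or partition-table signatures. Before triggering detection, you may want to confirm the new device is raw; a quick check, assuming the device showed up as 'sdc' as in the output above:

lsblk -f /dev/sdc

An empty FSTYPE column for the device indicates it is raw and eligible for Rook to consume.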

For Rook to detect the new device, you can manually restart the Rook operator pod, which triggers a scan for newly added volumes.

  • Execute the following command to restart the Operator pod
kubectl rollout restart -n rook-ceph deployment rook-ceph-operator
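
To confirm the restart completed, and to watch the operator prepare an OSD on the new device, you can follow the rollout and then tail the operator logs:

kubectl rollout status -n rook-ceph deployment rook-ceph-operator
kubectl logs -n rook-ceph deploy/rook-ceph-operator -f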

If you would like this detection to happen automatically, you can set the following values in the Helm chart values configuration.

# -- Enable discovery daemon
enableDiscoveryDaemon: true
# -- Set the discovery daemon device discovery interval (default to 60m)
discoveryDaemonInterval: 60m
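
These values take effect on the next release upgrade. A sketch of applying them with Helm, assuming the operator was installed as a release named rook-ceph from the rook-release chart repository:

helm upgrade -n rook-ceph rook-ceph rook-release/rook-ceph \
  --reuse-values \
  --set enableDiscoveryDaemon=true \
  --set discoveryDaemonInterval=60m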
  • Execute the following command to list all block devices again
lsblk -a

You will see that the added device now has partitions associated with it.

root@rook1-2:/home/ubuntu# lsblk -a
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0     7:0    0  55.7M  1 loop /snap/core18/2823
loop1     7:1    0  55.7M  1 loop /snap/core18/2829
loop2     7:2    0  63.9M  1 loop /snap/core20/2318
loop3     7:3    0    87M  1 loop /snap/lxd/28373
loop4     7:4    0  77.3M  1 loop /snap/oracle-cloud-agent/72
loop5     7:5    0  38.8M  1 loop /snap/snapd/21759
loop6     7:6    0     0B  0 loop
loop7     7:7    0     0B  0 loop
sda       8:0    0  46.6G  0 disk
├─sda1    8:1    0  46.5G  0 part /var/lib/kubelet/pods/dafb3962-637b-4503-b0b6-83dc386a30a4/volume-subpaths/tigera-ca-bundle/calico-node/1
│                                 /var/lib/kubelet/pods/53e89d38-ee87-46c6-b83f-95a258013a11/volume-subpaths/tigera-ca-bundle/calico-typha/1
│                                 /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi
sdb       8:16   0     1T  0 disk
├─sdb1    8:17   0    56G  0 part
└─sdb3    8:19   0 795.2G  0 part
sdc       8:32   0     1T  0 disk
├─sdc1    8:33   0    56G  0 part
└─sdc3    8:35   0 795.2G  0 part
nbd0     43:0    0     0B  0 disk
nbd1     43:32   0     0B  0 disk
nbd2     43:64   0     0B  0 disk
nbd3     43:96   0     0B  0 disk
nbd4     43:128  0     0B  0 disk
nbd5     43:160  0     0B  0 disk
nbd6     43:192  0     0B  0 disk
nbd7     43:224  0     0B  0 disk
nbd8     43:256  0     0B  0 disk
nbd9     43:288  0     0B  0 disk
nbd10    43:320  0     0B  0 disk
nbd11    43:352  0     0B  0 disk
nbd12    43:384  0     0B  0 disk
nbd13    43:416  0     0B  0 disk
nbd14    43:448  0     0B  0 disk
nbd15    43:480  0     0B  0 disk
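
At this point, the operator should have created one new OSD per added device. From outside the toolbox, you can verify that the new OSD pods are running; app=rook-ceph-osd is the label Rook applies to OSD pods by default:

kubectl -n rook-ceph get pods -l app=rook-ceph-osd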
  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click the cluster name on the cluster card
  • Click the "Resources" tab
  • Select "Pods" in the left hand pane
  • Select "rook-ceph" from the "Namespace" dropdown
  • Enter "rook-ceph-tools" into the search box
  • Click the "Actions" button
  • Select "Shell and Logs"
  • Click the "Exec" icon to open a shell into the container
  • Enter the following command in the shell to check the status of the Ceph cluster
ceph status

You will see that the storage capacity of the cluster is now increased to 4.0 TiB.

Validate Final Storage
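
From the same toolbox shell, you can also confirm that the new OSDs have joined the CRUSH map and that the raw capacity is spread across them:

ceph osd tree
ceph df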

Alternatively, you can use the previously deployed Ceph Dashboard to see the expanded storage.


Recap

Congratulations! You have successfully expanded the usable storage capacity of your Rook Ceph cluster.