
Change IP of Single Node MKS Cluster

To update the IP address of the node in a single-node MKS (Kubernetes) cluster, follow the steps below:

  1. Ensure the cluster is healthy with its single node, then download the etcd CA key (both shown below).
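    A minimal health check could be the following; the node should show as Ready:
    $ kubectl get nodes
    
    Then extract the etcd CA key: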
    $ salt-call pillar.item pki:etcd_certs:ca_key 2>&1 | sed -n '/BEGIN RSA PRIVATE KEY/,/END RSA PRIVATE KEY/p' | sed -e 's/^[ \t]*//' > /etc/etcd-certs/etcd/ca.key
    
    If multi-minion is enabled, run the command below instead:
    $ salt-call -c /opt/rafay/salt/etc/salt pillar.item pki:etcd_certs:ca_key 2>&1 | sed -n '/BEGIN RSA PRIVATE KEY/,/END RSA PRIVATE KEY/p' | sed -e 's/^[ \t]*//' > /etc/etcd-certs/etcd/ca.key
    
  2. Replace the old IP with the new IP in the Consul configuration file /etc/consul.d/config.json (see the example below), then restart Consul:
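    A substitution along these lines could be used; OLD_IP and NEW_IP are placeholders for the node's old and new addresses:
    $ sed -i 's/OLD_IP/NEW_IP/g' /etc/consul.d/config.json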
    $ systemctl restart consul
    
  3. Update the etcd configuration file /etc/default/etcd, replacing the old IP with the new IP (see the example below).
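    For example, with OLD_IP and NEW_IP standing in for the node's old and new addresses:
    $ sed -i 's/OLD_IP/NEW_IP/g' /etc/default/etcd
    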
  4. Re-issue the etcd certificates with the new IP:
    $ mkdir -p /tmp/root/etc
    $ cp -rp /etc/kubernetes /tmp/root/etc
    $ cp -rp /etc/etcd-certs /tmp/root/etc
    $ cp -p /etc/etcd-certs/etcd/ca.key /etc/kubernetes/pki/etcd
    $ kubeadm init phase certs etcd-peer --rootfs /tmp/root
    $ kubeadm init phase certs etcd-server --rootfs /tmp/root
    $ kubeadm init phase certs etcd-healthcheck-client --rootfs /tmp/root
    
  5. Verify that new certificate and key files were generated under /tmp/root/etc/kubernetes/pki/etcd (see the check below).
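    For example, list the files and, optionally, confirm that the new IP appears in the server certificate's Subject Alternative Name list:
    $ ls -l /tmp/root/etc/kubernetes/pki/etcd
    $ openssl x509 -in /tmp/root/etc/kubernetes/pki/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'
    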
  6. Copy the new etcd certificates to /etc/etcd-certs and restart etcd:
    $ rm -rf /etc/etcd-certs/etcd/server.* /etc/etcd-certs/etcd/peer.* /etc/etcd-certs/etcd/healthcheck-client.*
    $ cp -rp /tmp/root/etc/kubernetes/pki/etcd/{server.crt,server.key,peer.crt,peer.key,healthcheck-client.crt,healthcheck-client.key} /etc/etcd-certs/etcd/
    $ chown etcd /etc/etcd-certs/etcd/*
    $ systemctl restart etcd
    
  7. Update /etc/default/kubelet, replacing the old IP with the new IP (see the example below).
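    As in step 3, a substitution could be used; OLD_IP and NEW_IP are placeholders for the actual addresses:
    $ sed -i 's/OLD_IP/NEW_IP/g' /etc/default/kubelet
    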
  8. Restart the kubelet:
    $ systemctl restart kubelet
    
  9. Update all manifest files under /etc/kubernetes/manifests, replacing the old IP with the new IP (see the example below).
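    A bulk substitution could look like the following; OLD_IP and NEW_IP are placeholders, and the kubelet recreates the static control plane pods automatically once these manifests change:
    $ sed -i 's/OLD_IP/NEW_IP/g' /etc/kubernetes/manifests/*.yaml
    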
  10. Re-issue the apiserver certificate and key:
    $ rm -rf /tmp/root/etc/kubernetes/pki/apiserver.*  /etc/kubernetes/pki/apiserver.*
    $ kubeadm init phase certs apiserver --rootfs /tmp/root --apiserver-cert-extra-sans k8master.service.consul
    $ cp -p /tmp/root/etc/kubernetes/pki/apiserver.*  /etc/kubernetes/pki/
    
    Wait for all control plane pods to come up and verify that kubectl commands are operational (see the checks below).
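    The following checks are one option; the node should report the new IP in the INTERNAL-IP column of the second command's output:
    $ kubectl get pods -n kube-system
    $ kubectl get nodes -o wide
    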
  11. Update the Canal configuration to use the new IP. The command, with an example IP prefix of 10.230, is provided below:
    $ kubectl -n kube-system patch configmap canal-config -p "{\"data\":{\"canal_iface_regex\":\"10.230.\"}}"
    

    Note: Perform step 11 only if the tigera-operator pod is not running in the kube-system namespace.

  12. If the tigera-operator pod is running in the kube-system namespace, run the command below instead and update the field spec.calicoNetwork.nodeAddressAutodetectionV4.cidrs to cover the new IP range of the node:
    $ kubectl edit installation default
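    If a non-interactive change is preferred, a merge patch along the lines below could be used instead of the interactive edit; the CIDR 10.230.0.0/16 is only an example and must cover the node's new IP:
    $ kubectl patch installation default --type=merge -p '{"spec":{"calicoNetwork":{"nodeAddressAutodetectionV4":{"cidrs":["10.230.0.0/16"]}}}}'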
    
  13. Contact Rafay to update the new IP address in the database.

Note: The CNI-related steps can differ between CNI versions.