DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several user-space management applications, and some shell scripts.
DRBD is traditionally used in high availability (HA) computer clusters, but beginning with DRBD version 9, it can also be used to create larger software defined storage pools with a focus on cloud integration.
(From the DRBD Homepage at: https://www.linbit.com/drbd/)
NetEye Clustering uses DRBD as shared storage between nodes, allowing it to be independent of an external storage engine. If you later want to extend your cluster by adding another node, you may run into the problem that your DRBD devices were created with a max-peers value that is too low.
To see what max-peers value your DRBD device has, you can execute these commands on a node where the device is in Secondary status:
drbdadm down <resource-name>
drbdadm apply-al <resource-name>
drbdadm dump-md <resource-name> | grep max-peer
If your max-peers value is lower than nodes-1 (where nodes is the total number of nodes you want in the cluster), then you have a problem: the new node can only be added after you change that value in the DRBD metadata.
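This comparison can be scripted as a quick sanity check. The dump-md excerpt below is a hypothetical sample (the exact output varies by DRBD version); on a real node you would pipe in the actual `drbdadm dump-md` output instead:

```shell
# Target cluster size; max-peers must be at least NODES - 1
NODES=4

# Hypothetical excerpt of `drbdadm dump-md <resource>` output (illustrative)
dump='version "v09";
max-peers 1;'

# Extract the numeric max-peers value and compare it with the node count
max_peers=$(echo "$dump" | awk '/max-peers/ { gsub(";", ""); print $2 }')
if [ "$max_peers" -lt $(( NODES - 1 )) ]; then
    echo "max-peers $max_peers is too low for $NODES nodes"
fi
```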
In the following paragraphs I will assume that the DRBD resource is named eventhandler, that the backing LVM device is /dev/vg00/lveventhandler_drbd, and that I want to increase max-peers to 7 so that additional nodes can be added in the future.
To change the metadata, DRBD has to be shut down, so disable the cluster resource that mounts your DRBD device; or, if it is not managed as a cluster resource, just unmount the device. Then check that it is Secondary on ALL nodes.
On a NetEye cluster for instance, you can use this command to disable a particular DRBD resource along with its entire resource group (this example assumes the eventhandler resource group mounts /neteye/shared/eventhandler):
pcs resource disable eventhandler_group
After executing this command the DRBD device should be Secondary on all nodes. You can check this by executing this command:
drbdadm status eventhandler
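If you prefer to verify this programmatically, something like the following sketch works. The status text is a hypothetical sample of `drbdadm status` output, and the node name neteye02 is a placeholder:

```shell
# Hypothetical sample of `drbdadm status eventhandler` output (illustrative)
status='eventhandler role:Secondary
  disk:UpToDate
  neteye02 role:Secondary
    peer-disk:UpToDate'

# The resource must not be Primary anywhere, and every disk must be UpToDate
if echo "$status" | grep -q 'role:Primary'; then
    result="still Primary somewhere"
elif echo "$status" | grep -E 'disk:' | grep -qv 'UpToDate'; then
    result="some disk is not UpToDate"
else
    result="all Secondary and UpToDate"
fi
echo "$result"
```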
The DRBD resource must be Secondary and UpToDate on ALL nodes. If this checks out, you can proceed to shut the resource down on ALL nodes:
node1# drbdadm down eventhandler
node2# drbdadm down eventhandler
Now proceed with these commands, executing them on ALL nodes:
lvextend -L +40M /dev/vg00/lveventhandler_drbd
drbdadm apply-al eventhandler
drbdadm create-md eventhandler/0 --max-peers 7
ATTENTION: Depending on the size of your LVM device, you may need to extend it by more than 40M, since the metadata grows with the device size; generally, though, 40M should suffice. In my case, with a 2TB volume, I added 1G.
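The extra space is needed because DRBD keeps one dirty bitmap per peer, at a granularity of roughly 1 bit per 4 KiB of data, so the metadata grows with max-peers. A rough back-of-the-envelope estimate for the 2TB example with 7 peers (this is an approximation that ignores the activation log and superblock):

```shell
# Approximate bitmap size: 1 bit per 4 KiB of data, one bitmap per peer
SIZE_BYTES=$(( 2 * 1024 ** 4 ))   # example: 2 TiB data device
PEERS=7

bitmap_bytes=$(( SIZE_BYTES / 4096 / 8 * PEERS ))
echo "bitmap metadata: $(( bitmap_bytes / 1024 / 1024 )) MiB for $PEERS peers"
```

This back-of-the-envelope figure (around 448 MiB of bitmaps for 7 peers on 2 TiB) explains why the 40M from the earlier lvextend is not enough for large volumes.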
On the former Primary node of the DRBD device, execute these commands:
drbdadm up eventhandler
drbdadm primary eventhandler --force
On the former Secondary nodes of the DRBD device, execute this command:
drbdadm up eventhandler
Now check if your DRBD device is syncing using:
drbdadm status eventhandler
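While the resync runs, the status output includes a done: percentage. A small sketch for extracting it; the status text is a hypothetical sample, and neteye02 is a placeholder node name:

```shell
# Hypothetical sample of `drbdadm status eventhandler` during a resync
status='eventhandler role:Primary
  disk:UpToDate
  neteye02 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:42.23'

# Pull the completion percentage out of the done: field
pct=$(echo "$status" | grep -o 'done:[0-9.]*' | cut -d: -f2)
echo "resync ${pct}% complete"
```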
Then return the DRBD device to Secondary on the former Primary node, so that the cluster resource group can be reactivated:
drbdadm secondary eventhandler
After this your DRBD device should be okay, and you can reactivate your cluster resource group:
pcs resource enable eventhandler_group
With the commands above you have changed the max-peers value in the DRBD metadata of the device. You can verify the new max-peers value with the first set of commands shown at the top.