Here are the steps:
FYI, it's not required to detach the peer.
- Stop the glusterd process on N1, N2, N3…
- If you are changing the hostname of N1, then on N2 & N3 you need to replace every occurrence of the old hostname with the new hostname in all the files located under /var/lib/glusterd
- Restart GlusterD on all the nodes and check the peer status.
WARNING: Before proceeding, be sure to back up your data, and proceed with caution. Do not execute any command without understanding exactly what it does; make sure you are in the correct path, make the correct name replacements in each command where they apply (indicated in uppercase), and ensure that all your new peer names are resolvable by DNS.
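For example, a quick resolvability check (a sketch; NEWNAME1 stands for whichever new hostname you plan to use):
getent hosts NEWNAME1   # should print the IP address you expect for that node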
THE NEXT STEPS MUST BE PERFORMED ON ALL THE NODES
Step 1: stop the glusterd service.
systemctl stop glusterd.service
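To confirm the daemon is actually down before editing anything (note: stopping glusterd does not stop the glusterfsd brick processes, and that's fine here since we only touch configuration files):
systemctl is-active glusterd.service   # should print "inactive"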
Step 2: list the content of the /var/lib/glusterd/vols directory.
ls -l /var/lib/glusterd/vols
Step 3: rename each volume's data files; for each volume do:
cd /var/lib/glusterd/vols/YOURVOLUMENAME
ls -l | grep '\.data\.vol' ### <-- gets the list of files you need to rename for the current volume
mv clusterdata.OLDNAME1.data.vol clusterdata.NEWNAME1.data.vol
mv clusterdata.OLDNAME2.data.vol clusterdata.NEWNAME2.data.vol
mv clusterdata.OLDNAME-N.data.vol clusterdata.NEWNAME-N.data.vol
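If a volume has many of these files, a small loop can do the renames in one pass; a bash sketch, run once per OLDNAME/NEWNAME pair:
cd /var/lib/glusterd/vols/YOURVOLUMENAME
for f in *OLDNAME*.data.vol; do
  mv -v "$f" "${f/OLDNAME/NEWNAME}"   # bash substitution replaces the first occurrence
done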
Step 4: rename each volume's brick files; for each volume do:
cd /var/lib/glusterd/vols/YOURVOLUMENAME/bricks
ls -l | grep ':-data' ### <-- gets the list of brick files you need to rename for the current volume
mv OLDNAME1\:-data NEWNAME1\:-data
mv OLDNAME2\:-data NEWNAME2\:-data
mv OLDNAME-N\:-data NEWNAME-N\:-data
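Same idea as the loop in Step 3, applied to the brick files (again a sketch, one OLDNAME/NEWNAME pair per run):
cd /var/lib/glusterd/vols/YOURVOLUMENAME/bricks
for f in OLDNAME*; do
  mv -v "$f" "${f/OLDNAME/NEWNAME}"   # keeps the :-data suffix intact
done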
Step 5: detect all occurrences of OLDNAME in the config files:
cd /var/lib/glusterd
grep -rnw . -e 'OLDNAME'
Step 6: automatically replace all occurrences of OLDNAME in the config files:
cd /var/lib/glusterd
find . -type f -exec sed -i 's/OLDNAME/NEWNAME/g' {} \;
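If you want an undo path, GNU sed can keep a backup of every file it touches (a variant of the command above; note the .bak copies still contain OLDNAME, so delete them before running the Step 7 checks):
find . -type f -exec sed -i.bak 's/OLDNAME/NEWNAME/g' {} \;
find . -type f -name '*.bak' -delete   # remove the backups once you are satisfied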
Step 7: check that all occurrences have been replaced:
grep -rnw . -e 'OLDNAME'
grep -rnw . -e 'NEWNAME'
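As a quick pass/fail summary (a sketch; the first count should be 0, the second greater than 0):
grep -rlw . -e 'OLDNAME' | wc -l   # expect 0 files still containing the old name
grep -rlw . -e 'NEWNAME' | wc -l   # expect one line per file now using the new name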
ONLY WHEN YOU HAVE COMPLETED THE STEPS ON ALL NODES …
Start the glusterd service on each node, and check the status.
systemctl start glusterd.service
systemctl status glusterd.service
gluster peer status
gluster volume status
gluster volume info
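If everything went well, the peers should now appear under their new names and in Connected state; gluster pool list gives a compact view of that:
gluster pool list   # hostnames here should match your NEWNAMEs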