Setting Up GlusterFS Volumes in RHEL

A volume is a logical collection of bricks where each brick is an export directory on a server in the trusted storage pool. To create a new volume in your storage environment, specify the bricks that comprise the volume. After you have created a new volume, you must start it before attempting to mount it.

See Setting up Storage for details on how to set up bricks.

Volume Types:

Volumes of the following types can be created in your storage environment (minimal example create commands for the most common types are sketched after this list):

Distributed – Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and the redundancy is either not important or is provided by other hardware/software layers.

Replicated – Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high-availability and high-reliability are critical.

Distributed Replicated – Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high-reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed – Dispersed volumes are based on erasure codes and provide space-efficient protection against disk or server failures. A dispersed volume stores an encoded fragment of the original file on each brick in such a way that only a subset of the fragments is needed to recover the original file. The number of bricks that can be lost without losing access to the data is configured by the administrator at volume creation time.

Distributed Dispersed – Distributed dispersed volumes distribute files across dispersed subvolumes. They have the same advantages as distributed replicated volumes, but use dispersion instead of replication to store the data on the bricks.

Striped [Deprecated] – Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments accessing very large files.

Distributed Striped [Deprecated] – Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage in high-concurrency environments accessing very large files.

Distributed Striped Replicated [Deprecated] – Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance are critical. In this release, configuration of this volume type is supported only for MapReduce workloads.

Striped Replicated [Deprecated] – Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for MapReduce workloads.
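
For orientation, here is a minimal sketch of the create command for the most common volume types. The hostnames (server1 through server4), the brick path /bricks/brick1, and the volume names are placeholders for illustration only, not the nodes used in the walkthrough below:

# Distributed (default): files spread across bricks, no redundancy
gluster volume create dist_vol transport tcp server1:/bricks/brick1 server2:/bricks/brick1

# Replicated: every file kept on all three bricks
gluster volume create repl_vol replica 3 server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1

# Distributed replicated: two replica pairs, files distributed across the pairs
gluster volume create distrepl_vol replica 2 server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 server4:/bricks/brick1

# Dispersed: erasure coding with 2 data fragments + 1 redundancy fragment
gluster volume create disp_vol disperse 3 redundancy 1 server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1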

#### On first node:
Install the GlusterFS RPMs:

[root@glustertest1]# yum localinstall glusterfs-server-3.12.15-1.el7.x86_64.rpm glusterfs-libs-3.12.15-1.el7.x86_64.rpm glusterfs-api-3.12.15-1.el7.x86_64.rpm glusterfs-cli-3.12.15-1.el7.x86_64.rpm glusterfs-client-xlators-3.12.15-1.el7.x86_64.rpm glusterfs-fuse-3.12.15-1.el7.x86_64.rpm glusterfs-3.12.15-1.el7.x86_64.rpm userspace-rcu-0.10.0-3.el7.x86_64.rpm userspace-rcu-devel-0.10.0-3.el7.x86_64.rpm
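
Optionally, as a quick sanity check before moving on, confirm the packages are in place and check the installed version:

[root@glustertest1]# rpm -q glusterfs-server glusterfs-fuse
[root@glustertest1]# gluster --version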

Create a new file system for the bricks and mount it on /glusterfs:

[root@glustertest1]# pvcreate /dev/sdb
[root@glustertest1]# vgcreate vgglusterfs /dev/sdb
[root@glustertest1]# lvcreate -L95g -n lvglusterfs vgglusterfs
[root@glustertest1]# mkfs.ext4 /dev/mapper/vgglusterfs-lvglusterfs
[root@glustertest1]# mkdir -p /glusterfs
[root@glustertest1]# mount /dev/mapper/vgglusterfs-lvglusterfs /glusterfs

[root@glustertest1]# df -h /glusterfs
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vgglusterfs-lvglusterfs 94G 2.3G 87G 3% /glusterfs

Add all gluster nodes to /etc/hosts:

[root@glustertest1]# vi /etc/hosts

10.131.65.142 glustertest1.unixtest.com glustertest1
10.131.65.143 glustertest2.unixtest.com glustertest2
10.131.65.144 glustertest3.unixtest.com glustertest3
10.131.65.145 glustertest4.unixtest.com glustertest4
10.131.65.146 glustertest5.unixtest.com glustertest5
10.131.65.147 glustertest6.unixtest.com glustertest6

Start and enable the glusterd service:

[root@glustertest1]# systemctl start glusterd
[root@glustertest1]# systemctl enable glusterd
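
Optionally confirm the daemon is up before probing peers:

[root@glustertest1]# systemctl status glusterd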

Probe the other nodes to add them to the trusted storage pool:

[root@glustertest1]# gluster peer probe glustertest2
[root@glustertest1]# gluster peer probe glustertest3
[root@glustertest1]# gluster peer probe glustertest4
[root@glustertest1]# gluster peer probe glustertest5
[root@glustertest1]# gluster peer probe glustertest6

Check the pool list and peer status:

[root@glustertest1]# gluster pool list

UUID Hostname State
9d4fba33-2d52-45cd-8e04-817f9d0d599a glustertest2 Connected
b29d6ed9-b88e-4c5b-84c0-c9e3863b0c79 glustertest3 Connected
c035f1b0-727f-4987-9a71-ccf6aafea15d glustertest4 Connected
e26ffe01-3bc5-4627-a50d-8dd683007a5e glustertest5 Connected
8e6fc5ae-7329-4649-836e-8a3c0e57ad0e glustertest6 Connected
94ade168-f752-4b15-aade-2641bdd0725b localhost Connected

[root@glustertest1]# gluster peer status

Number of Peers: 5

Hostname: glustertest2
Uuid: 9d4fba33-2d52-45cd-8e04-817f9d0d599a
State: Peer in Cluster (Connected)

Hostname: glustertest3
Uuid: b29d6ed9-b88e-4c5b-84c0-c9e3863b0c79
State: Peer in Cluster (Connected)

Hostname: glustertest4
Uuid: c035f1b0-727f-4987-9a71-ccf6aafea15d
State: Peer in Cluster (Connected)

Hostname: glustertest5
Uuid: e26ffe01-3bc5-4627-a50d-8dd683007a5e
State: Peer in Cluster (Connected)

Hostname: glustertest6
Uuid: 8e6fc5ae-7329-4649-836e-8a3c0e57ad0e
State: Peer in Cluster (Connected)

Create a new volume (a dispersed volume across six storage servers, with four data bricks plus two redundancy bricks, so the data remains accessible even if any two bricks are lost):

[root@glustertest1]# gluster volume create test_vol disperse-data 4 redundancy 2 transport tcp glustertest1:/glusterfs/brick1 glustertest2:/glusterfs/brick1 glustertest3:/glusterfs/brick1 glustertest4:/glusterfs/brick1 glustertest5:/glusterfs/brick1 glustertest6:/glusterfs/brick1 force
volume create: test_vol: success: please start the volume to access data

Check the volume status, start the volume, and enable self-heal:

[root@glustertest1]# gluster volume status
Volume test_vol is not started

[root@glustertest1]# gluster volume start test_vol
volume start: test_vol: success

[root@glustertest1]# gluster volume heal test_vol enable
Enable heal on volume test_vol has been successful
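
Optionally, the heal state can be inspected at any time; on a freshly created, empty volume there should be no entries pending heal:

[root@glustertest1]# gluster volume heal test_vol info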

[root@glustertest1]# gluster volume status

Status of volume: test_vol

Gluster process TCP Port RDMA Port Online Pid

Brick glustertest1:/glusterfs/brick1 49152 0 Y 5376
Brick glustertest2:/glusterfs/brick1 49152 0 Y 7957
Brick glustertest3:/glusterfs/brick1 49152 0 Y 5332
Brick glustertest4:/glusterfs/brick1 49152 0 Y 5221
Brick glustertest5:/glusterfs/brick1 49152 0 Y 6811
Brick glustertest6:/glusterfs/brick1 49152 0 Y 5326
Self-heal Daemon on localhost N/A N/A Y 5305
Self-heal Daemon on glustertest2 N/A N/A Y 7978
Self-heal Daemon on glustertest4 N/A N/A Y 4937
Self-heal Daemon on glustertest6 N/A N/A Y 5201
Self-heal Daemon on glustertest5 N/A N/A Y 6832
Self-heal Daemon on glustertest3 N/A N/A Y 5122

Task Status of Volume test_vol

There are no active volume tasks

[root@glustertest1]# gluster volume info

Volume Name: test_vol
Type: Disperse
Volume ID: b23f6c11-8dfe-4d51-919d-35a30dd75100
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: glustertest1:/glusterfs/brick1
Brick2: glustertest2:/glusterfs/brick1
Brick3: glustertest3:/glusterfs/brick1
Brick4: glustertest4:/glusterfs/brick1
Brick5: glustertest5:/glusterfs/brick1
Brick6: glustertest6:/glusterfs/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.disperse-self-heal-daemon: enable

Mount the GlusterFS volume:

[root@glustertest1]# mount -t glusterfs glustertest1:/test_vol /mnt

[root@glustertest1]# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
glustertest1:/test_vol 374G 9.0G 346G 3% /mnt

The usable size is roughly four bricks' worth of the six ~94G bricks (4 x 94G ≈ 376G), as expected for a 4+2 dispersed volume.

Add both filesystems to /etc/fstab (enabling auto mount at boot):

[root@glustertest1]# vi /etc/fstab
/dev/mapper/vgglusterfs-lvglusterfs /glusterfs ext4 defaults 0 0
glustertest1:/test_vol /mnt glusterfs defaults,_netdev 0 0
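
To verify the new fstab entry without rebooting (optional), unmount the volume and remount everything from fstab:

[root@glustertest1]# umount /mnt
[root@glustertest1]# mount -a

If the mount server being a single point of failure at mount time is a concern, recent GlusterFS releases also support a backup-volfile-servers mount option for listing fallback servers; check the mount.glusterfs man page for your version before relying on it.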

#### On other nodes:

yum localinstall glusterfs-server-3.12.15-1.el7.x86_64.rpm glusterfs-libs-3.12.15-1.el7.x86_64.rpm glusterfs-api-3.12.15-1.el7.x86_64.rpm glusterfs-cli-3.12.15-1.el7.x86_64.rpm glusterfs-client-xlators-3.12.15-1.el7.x86_64.rpm glusterfs-fuse-3.12.15-1.el7.x86_64.rpm glusterfs-3.12.15-1.el7.x86_64.rpm userspace-rcu-0.10.0-3.el7.x86_64.rpm userspace-rcu-devel-0.10.0-3.el7.x86_64.rpm
pvcreate /dev/sdb
vgcreate vgglusterfs /dev/sdb
lvcreate -L95g -n lvglusterfs vgglusterfs
mkfs.ext4 /dev/mapper/vgglusterfs-lvglusterfs
mkdir -p /glusterfs
mount /dev/mapper/vgglusterfs-lvglusterfs /glusterfs
vi /etc/hosts
systemctl start glusterd
systemctl enable glusterd
gluster volume status
mount -t glusterfs glustertestN:/test_vol /mnt    (replace glustertestN with the local node's hostname, glustertest2 through glustertest6)
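
As an optional cross-node check (assuming the volume is now mounted on at least two nodes), write a file on one node and read it back on another:

on glustertest1: echo "hello from glustertest1" > /mnt/hello.txt
on glustertest2: cat /mnt/hello.txt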

Add both filesystems to /etc/fstab on each node (enabling auto mount at boot), using the local node's hostname for the gluster volume:

vi /etc/fstab


/dev/mapper/vgglusterfs-lvglusterfs /glusterfs ext4 defaults 0 0
glustertestN:/test_vol /mnt glusterfs defaults,_netdev 0 0
