
Friday, 30 December 2016

How To Configure Gluster, Samba & CTDB Integration on Red Hat 7.x & CentOS 7.x

Follow this tutorial to configure highly available file storage using GlusterFS to replicate data between a number of servers. CTDB is used to make the Samba share highly available.

Prerequisites:

Two servers (physical or virtual) running RHEL 7 or CentOS 7. During the Linux installation, keep the root partition to a minimum of 16 GB and leave the maximum disk space for the shared storage; here I have used the XFS filesystem. The configuration starts from these two servers:

Server01 = Filestore01  -- 10.0.18.10
Server02 = Filestore02  -- 10.0.18.11


Here I have started without DNS; in this case, to avoid DNS lookups, add an entry for both hosts.

# echo "10.0.18.10 Filestore01" >> /etc/hosts

# echo "10.0.18.11 Filestore02" >> /etc/hosts
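The two echo commands above will append duplicate entries if they are run twice. A minimal sketch that only appends a host entry when it is missing, safe to re-run (the `HOSTS_FILE` variable and `add_host` helper are assumptions added for illustration; on the real servers `HOSTS_FILE` would be `/etc/hosts`):

```shell
#!/bin/sh
# Append a host entry only when the IP is not already present, so the
# command is safe to re-run. On the real servers HOSTS_FILE=/etc/hosts;
# the mktemp default just lets the sketch be tried without touching it.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {
    ip="$1"; name="$2"
    # grep -q succeeds (and we skip the append) when the IP already exists
    grep -q "^$ip[[:space:]]" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
}

add_host 10.0.18.10 Filestore01
add_host 10.0.18.11 Filestore02
add_host 10.0.18.10 Filestore01   # no-op: entry already exists
```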

Filesystem Creation.

Here we have spare partition space left to create a disk for the Gluster environment.

If you have not created the partition yet, follow the instructions below to create the partition and filesystem.

# fdisk /dev/sda

# mkfs.xfs /dev/sda1

Mount the newly created partition using the steps below.

# mkdir  -p /gluster/bricks/store01

# mount /dev/sda1 /gluster/bricks/store01

Mounted successfully; now add the fstab entry so the mount persists across reboots.

# echo "/dev/sda1 /gluster/bricks/store01 xfs defaults 0 0" >> /etc/fstab

Execute the same steps to mount the partition on Server02.

Start Gluster Setup

We now have a filesystem, so let's bring in Gluster and integrate it with the mounted volume /gluster/bricks/store01.
Gluster uses multiple bricks: bricks from multiple servers can be grouped together to provide redundancy similar to RAID.

In the following setup we have two servers, each holding the replicated Gluster volume. I have disabled SELinux and firewalld for this setup.

Gluster Installation on both servers.

# cd /etc/yum.repos.d/

# wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

# yum install glusterfs-server -y

# systemctl enable glusterd.service

# systemctl start  glusterd.service

Let's enable communication between the servers.

# gluster peer probe Filestore02

Create the brick directory for our Gluster setup; execute the below command on both servers.

# mkdir -p /gluster/bricks/store01/brick1

Now everything is prepared to create a Gluster volume using the below command on Server01.

# gluster vol create store01 replica 2 Filestore01:/gluster/bricks/store01/brick1 Filestore02:/gluster/bricks/store01/brick1

If this command returns ok, a Gluster volume named store01 with 2 replicas has been created.
Start the Gluster volume.

# gluster vol start store01

Once the volume has started, check its status.

# gluster vol info store01

Mounting

Let's create a directory on both servers to mount the volume.

# mkdir -p /data/store01

Ensure the glusterfs client tools are installed.

# yum -y install glusterfs-fuse

Now let's mount the volume.

# mount -t glusterfs Filestore01:store01 /data/store01

Add an fstab entry so the volume mounts at boot.

# echo "Filestore01:store01 /data/store01 glusterfs defaults 0 0" >> /etc/fstab

Repeat the same steps on Filestore02.

# mount -t glusterfs Filestore02:store01 /data/store01

# echo "Filestore02:store01 /data/store01 glusterfs defaults 0 0" >> /etc/fstab
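Hand-typed fstab lines are easy to get wrong: a missing field or a misspelled options column ("default" instead of "defaults") will break the boot-time mount. A small sketch that sanity-checks a candidate line before appending it (`check_fstab_line` is a hypothetical helper, not part of the tutorial itself):

```shell
#!/bin/sh
# Sanity-check a candidate fstab entry: exactly six fields, and
# 'defaults' (not the common typo 'default') in the options column.
check_fstab_line() {
    set -- $1   # word-split the line into the six fstab fields
    if [ "$#" -ne 6 ]; then
        echo "bad: expected 6 fields, got $#"
        return 1
    fi
    case "$4" in
        *defaults*) echo ok ;;
        *) echo "bad: options field is '$4'"; return 1 ;;
    esac
}

check_fstab_line "Filestore01:store01 /data/store01 glusterfs defaults 0 0"   # prints: ok
```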

Test the Gluster volume: create files or directories from the server where the volume is mounted, then check that the created files are accessible on the other server. If the files are accessible on both servers, the Gluster setup is complete.
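The replication check above can be scripted. A minimal sketch, with the two mount points as parameters so the helper can be tried anywhere (the `check_replication` name is an assumption; on the real setup both arguments would be /data/store01, run against the mount on each node):

```shell
#!/bin/sh
# Replication smoke test: write a marker file through one mount of the
# volume and verify it shows up through the other. With 'replica 2',
# both mounts should see the same data.
check_replication() {
    mnt_a="$1"; mnt_b="$2"
    marker="repl_test_$$"
    echo "written via $mnt_a" > "$mnt_a/$marker" || return 1
    if [ -f "$mnt_b/$marker" ]; then
        rm -f "$mnt_a/$marker"
        echo "replication OK"
    else
        echo "replication FAILED"
        return 1
    fi
}
```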


CTDB, SAMBA

CTDB (the clustered TDB database management utility) will present the storage via CIFS and also create a virtual IP (VIP).
Once that is done, we will integrate with the Active Directory server.

# yum install -y ctdb samba samba-common samba-winbind-clients

Back up the default CTDB config file before making changes.

# mv /etc/sysconfig/ctdb{,.old}

Create the CTDB lock and share directories.

# mkdir /data/store01/lock

# mkdir /data/store01/share


Create the ctdb file with your favourite editor and add the following lines.

# vi /data/store01/lock/ctdb

CTDB_RECOVERY_LOCK=/data/store01/lock/lockfile
#CIFS only
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
#CIFS only
CTDB_NODES=/etc/ctdb/nodes

Create a symlink on both hosts.

# ln -s /data/store01/lock/ctdb /etc/sysconfig/ctdb

Stop and disable the Samba service on both nodes (CTDB will manage it), then enable the CTDB service.

# systemctl stop smb.service

# systemctl disable smb.service

# systemctl enable ctdb.service

Create the new VIP for load balancing, shared by both nodes.

# vi /data/store01/lock/public_addresses

10.0.18.12/24 ens160

We need to create a nodes file which contains the IP addresses of all servers that will present the storage.

# vi /data/store01/lock/nodes
10.0.18.10
10.0.18.11

Create symlinks for those files on both servers.

# ln -s /data/store01/lock/nodes   /etc/ctdb/nodes
# ln -s /data/store01/lock/public_addresses /etc/ctdb/public_addresses
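The shared files and symlinks above can be generated in one go. A sketch, with the directories as variables so it can be tried in a scratch location (`LOCK_DIR` and `CTDB_DIR` are illustration variables; on the real hosts they are /data/store01/lock and /etc/ctdb):

```shell
#!/bin/sh
# Generate the shared CTDB nodes and public_addresses files once in the
# lock directory on the Gluster volume, then symlink them into the CTDB
# config directory. The mktemp defaults let the sketch run anywhere;
# real hosts would set LOCK_DIR=/data/store01/lock and CTDB_DIR=/etc/ctdb.
LOCK_DIR="${LOCK_DIR:-$(mktemp -d)}"
CTDB_DIR="${CTDB_DIR:-$(mktemp -d)}"

# One IP per line for every node that presents the storage
printf '%s\n' 10.0.18.10 10.0.18.11 > "$LOCK_DIR/nodes"
# The floating VIP with its netmask and interface
printf '%s\n' '10.0.18.12/24 ens160' > "$LOCK_DIR/public_addresses"

ln -sf "$LOCK_DIR/nodes" "$CTDB_DIR/nodes"
ln -sf "$LOCK_DIR/public_addresses" "$CTDB_DIR/public_addresses"
```

Because the files live on the replicated volume and only the symlinks are per-host, both nodes always see the same node list and VIP definition.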

Change the Samba configuration file to enable the clustering part so the Samba shared storage can be accessed.

There are two kinds of Samba shares: a normal one and AD integration.

On server1

# cp /etc/samba/smb.conf   /data/store01/lock/smb.conf

Normal samba share.

# vi /data/store01/lock/smb.conf

clustering = yes
idmap backend = tdb2
private dir = /data/store01/lock

[oracle_files]
comment = Gluster and CTDB based share
path = /data/store01/share
read only = no
guest ok = yes
valid users = user01

With the above Samba configuration completed, copy it to the correct location on both hosts.

# cp /data/store01/lock/smb.conf /etc/samba/

Add user user01 on both hosts

# useradd user01
# smbpasswd -a user01

The configuration is done; start the CTDB service on both hosts. Once the service has started without errors, check the CTDB status.

# systemctl start ctdb.service
# ctdb status

Once both nodes show OK in the status, the share will be accessible from a Windows PC, or from anything that can access it via SMB/CIFS:
\\10.0.18.12\oracle_files


AD integration setup shared in next post.


 
