This article assumes you have two systems with RHEL 5.2 x86_64 installed and you want to create a cluster to provide High Availability for some services (in this article, the Apache Web Server).

This article also assumes that you have shared storage accessible from both systems, for example a Storage Area Network (SAN) over Fibre Channel or iSCSI, with free space on it.

First of all, you need to install all the needed packages on both systems.
To do so, create a cluster.repo file in /etc/yum.repos.d with the following command:

cat > /etc/yum.repos.d/cluster.repo <<'EOF'
[Server]
name=Server
baseurl=file:///misc/cd/Server
enabled=1
gpgcheck=0

[Cluster]
name=Cluster
baseurl=file:///misc/cd/Cluster
enabled=1
gpgcheck=0

[ClusterStorage]
name=ClusterStorage
baseurl=file:///misc/cd/ClusterStorage
enabled=1
gpgcheck=0
EOF

Insert the RHEL 5.2 x86_64 media in your CD/DVD reader and run the following command to update the yum database:

yum update

If yum can't use the new repositories, check that the autofs service is up and running (or start it) with the following command:

service autofs restart
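If you want to double-check that the DVD is auto-mounted and that yum actually sees the three new repositories, the following quick checks should be enough (they assume the media is reachable under /misc/cd, as configured in the repo file above, and that your yum version supports the repolist command):

ls /misc/cd/Server

yum repolist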

At this point you can install all the packages needed to create and administer a cluster:

yum groupinstall -y "Cluster Storage" "Clustering"

If you have to use an iSCSI initiator (in this How-To I'll use one), you also have to install the following packages:

yum install -y iscsi-initiator-utils isns-utils

Then configure both services to start at boot and start them:

chkconfig iscsi on
chkconfig iscsid on

service iscsi start
service iscsid start

In this How-To I'll use three systems, with the following IP addresses.
The two "rhel-cluster-nodeX" systems have two NICs each, one for production and one for the High Availability check.

rhel-cluster-node1
192.168.234.201
10.10.10.1

rhel-cluster-node2
192.168.234.202
10.10.10.2

rhel-cluster-san
192.168.234.203

What I'm going to do is create a cluster with the IP address 192.168.234.200 that serves the service from the 192.168.234.201 and 192.168.234.202 machines, using a GFS file system reachable over iSCSI on 192.168.234.203.

Assuming you have already configured the iSCSI target on the SAN (if you don't know how to do it, look for another post on this blog), run the following commands to discover and log in to the shared LUN:

iscsiadm -m discovery -t st -p 192.168.234.203

iscsiadm -m node -L all

touch /etc/iscsi/send_targets

echo 192.168.234.203 >> /etc/iscsi/send_targets
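To verify that the iSCSI login succeeded and to find out which block device the new LUN was mapped to (on my systems it ends up as /dev/sdb, but this can differ), you can use for example:

iscsiadm -m session

cat /proc/partitions

fdisk -l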

For convenience, add the following lines to /etc/hosts on both cluster nodes:

10.10.10.1 rhel-cluster-node1.mgmt.local rhel-cluster-node1
10.10.10.2 rhel-cluster-node2.mgmt.local rhel-cluster-node2

Make sure that the iSCSI mapped device is /dev/sdb (otherwise adjust the following commands), then proceed to create a new Physical Volume, a new Volume Group and a new Logical Volume to use as shared storage for the cluster nodes, using the following commands:

pvcreate /dev/sdb

vgcreate vg1 /dev/sdb

lvcreate -l 10239 -n lv0 vg1

You're done: you have created a new volume group "vg1" and a new logical volume "lv0". The "-l 10239" parameter is the number of physical extents to allocate and is based on the size of my iSCSI shared storage, in this case 40 GB.
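If your LUN has a different size, don't copy the "-l 10239" value blindly: vgdisplay reports the number of free physical extents in the volume group ("Free PE"), and that is the number to pass to the -l option. For example:

vgdisplay vg1 | grep Free

If your LVM2 version supports it, you can also take all the remaining space with "lvcreate -l 100%FREE -n lv0 vg1".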

At this point you are ready to create the clustered GFS file system on your device using the command below:

gfs_mkfs -p lock_dlm -t rhel-cluster:storage1 -j 8 /dev/vg1/lv0

You're done: you've created a GFS file system with the "lock_dlm" locking protocol, for a cluster called "rhel-cluster", with the name "storage1"; you can use this GFS with a maximum of 8 hosts (8 journals), and you've created it on the /dev/vg1/lv0 device.

To administer Red Hat Clusters with Conga, run luci and ricci as follows:

service luci start
service ricci start

Configure ricci and luci to start automatically at boot on both systems, using:

chkconfig luci on
chkconfig ricci on

On both systems, initialize the luci server using the luci_admin init command.

service luci stop
luci_admin init

This command creates the 'admin' user and sets its password; follow the on-screen instructions and check for output like the following:

The admin password has been successfully set.
Generating SSL certificates…
The luci server has been successfully initialized

You must restart the luci server for the changes to take effect; run the following to do it:

service luci restart
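To make sure the luci web interface is really up and listening on its default port (8084), a quick check is:

netstat -tlnp | grep 8084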

For correct cluster configuration and maintenance, you have to start (and configure to start at boot) the following services:

chkconfig rgmanager on
service rgmanager start
chkconfig cman on
service cman start

Create the /data mount point on both nodes (mkdir /data), then edit /etc/fstab and add:

/dev/vg1/lv0 /data gfs defaults,acl 0 0

You can check if everything works using the command:

mount -a

Try to mount/umount and to read and write; if everything works fine, you can continue.

Configure Apache to use one or more virtual hosts with their document root on the shared storage.
For example, on both nodes, add the following to the end of /etc/httpd/conf/httpd.conf:

<VirtualHost *:80>
ServerAdmin webmaster@mgmt.local
DocumentRoot /data/websites/default
ServerName rhel-cluster.mgmt.local
ErrorLog logs/rhel-cluster_mgmt_local-error_log
CustomLog logs/rhel-cluster_mgmt_local-access_log common
</VirtualHost>

To use the example above, you must create two directories under /data,

mkdir /data/websites
mkdir /data/websites/default

and you must create an index file to put in that directory:

touch /data/websites/default/index.html

echo 'WORKS!!!' >> /data/websites/default/index.html

Configure Apache to start at boot time and start it with the following commands:

chkconfig httpd on
service httpd start
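Before testing through the cluster, it's worth checking the Apache configuration syntax and the test page locally on each node; the example below assumes wget is installed (any HTTP client will do):

httpd -t

wget -qO- http://localhost/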

Point your web browser to https://rhel-cluster-node1:8084 to access luci.

1. As administrator of luci, select the cluster tab.
2. Click "Create a New Cluster".
3. In the Cluster Name text box, enter the cluster name "rhel-cluster".
Add the node name and password for each cluster node.
4. Click Submit. Clicking Submit causes the following actions:
a. Cluster software packages to be downloaded onto each cluster node.
b. Cluster software to be installed onto each cluster node.
c. The cluster configuration file to be created and propagated to each node in the cluster.
d. The cluster to be started.
A progress page shows the progress of those actions for each node in the cluster.
When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster.

From the management page of your newly created cluster you can add resources.
Add a resource, choose "IP Address" and use 192.168.234.200.

Create a service named "cluster", add the "IP Address" resource you created before, then:
check "Automatically start this service"
check "Run exclusive"
choose "Relocate" as the "Recovery policy"

Save the service.

If the service gives no errors, enable it and try to start it on one cluster node.
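The service can also be driven from the command line with clusvcadm (part of rgmanager); for example, to start it on a specific node and then relocate it (the service and node names below match my setup, adjust them to yours):

clusvcadm -e cluster -m rhel-cluster-node1.mgmt.local

clusvcadm -r cluster -m rhel-cluster-node2.mgmt.local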

The cluster configuration file is /etc/cluster/cluster.conf and should look similar to the following:

cat /etc/cluster/cluster.conf

<?xml version="1.0"?>
<cluster alias="rhel-cluster" config_version="25" name="rhel-cluster">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="rhel-cluster-node2.mgmt.local" nodeid="1" votes="1">
      <fence/>
    </clusternode>
    <clusternode name="rhel-cluster-node1.mgmt.local" nodeid="2" votes="1">
      <fence/>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices/>
  <rm>
    <failoverdomains/>
    <resources>
      <ip address="192.168.234.200" monitor_link="0"/>
    </resources>
    <service autostart="1" exclusive="1" name="cluster" recovery="relocate">
      <ip ref="192.168.234.200"/>
    </service>
  </rm>
</cluster>
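With the cluster up, you can also check membership, quorum and the state of the "cluster" service from the command line on either node; for example:

cman_tool status

cman_tool nodes

clustat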

To check if the shared IP address is working correctly, try the following:

/sbin/ip addr list

The output should be similar to the following:

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:96:8b:ed brd ff:ff:ff:ff:ff:ff
inet 192.168.234.201/24 brd 192.168.234.255 scope global eth0
inet 192.168.234.200/24 scope global secondary eth0
inet6 fe80::20c:29ff:fe96:8bed/64 scope link
valid_lft forever preferred_lft forever

At this point you can shut down (or disconnect from the network) one host and check whether the web page on 192.168.234.200 is still reachable.
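A simple way to watch the failover from a third machine is to poll the virtual IP in a loop while you shut down the active node (this sketch assumes wget is available; replace it with your preferred HTTP client):

while true; do wget -qO- --timeout=2 http://192.168.234.200/ || echo "not reachable"; sleep 2; done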

If all works, you’re done.

This is a very simple cluster, sharing only the IP address resource, but you can add more resources and services, and configure failover domains and/or fence devices. To do so, refer to the Red Hat Knowledgebase and documentation at http://www.redhat.com .

Hope this helps.

Bye
Riccardo


20 Comments on "How-To create a Cluster with two RHEL"

  • aliaksei writes:

    Thank you Riccardo, you gave me a big help!
    Best regards

    Aliaksei

  • Hassan writes:

    Thanks a lot, this helped me to understand the LVM setup for a cluster, but it is important to use CLVM to make the GFS available to all the cluster nodes.

    http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Suite_Overview/s1-clvm-overview-CSO.html

  • Artur writes:

    Excellent work. I was in trouble when I needed to install luci and ricci, because when I ran yum it did not find the packages. After making this repository setup, nothing could have been easier.

  • Cole writes:

    Why isn't luci in any yum repos? I installed ricci but no luci???

  • Manas writes:

    Hi Riccardo,
    First, thanks for such cluster info.
    I was trying out your method on my two-node Red Hat cluster in a SAN environment.
    I could configure it and I could access the Apache server page via my virtual IP.
    But when I tried to shut down or reboot the working node in the cluster, failover did not happen, which means that the virtual IP lost its connection and did not fail over as expected.

    Here is my cluster.conf file for your reference:

    OS: RHEL5.2

    Could you please help me?

  • VIKAS writes:

    Nice tutorial!

    @ Manas, I believe you have some problem with the fencing. If that's the case, you may need to do the failover manually.

  • Eric writes:

    Thanks for this tutorial!

    Everything works fine for me until the line “service cman start”

    [root@aqsavir21126 /]# service cman start
    Starting cluster:
    Loading modules… done
    Mounting configfs… done
    Starting ccsd… done
    Starting cman… failed
    /usr/sbin/cman_tool: ccsd is not running
    [FAILED]

    I can’t find out what is wrong, any ideas?
    Thanks (OS RHEL 5.5)

    • Riccardo writes:

      Hi, you should try the following :

      Stop every gfs2 service.

      Check that the xml file under /etc/cluster/cluster.conf is correct.
      Check that every node is mapped in the /etc/hosts file or resolved by the DNS system.

      Reboot the systems and, without the gfs and openais services running, launch this on each machine that will be part of the cluster:

      modprobe lock_dlm
      modprobe lock_nolock
      modprobe dlm
      modprobe gfs2

      mount -t configfs none /sys/kernel/config
      ccsd
      cman_tool join
      groupd
      fenced
      fence_tool join
      dlm_controld
      gfs_controld

      after this, check if the cluster is fully functional with:

      cman_tool status
      cman_tool nodes

      In any case, next time tell me what your /var/log/messages says.

      Bye

  • narsing writes:

    Thank you very much for sharing…

    I have a doubt regarding fencing devices:

    if I want to set up clustering at home, how do I do that using Intel boards?

    Thank you very much in advance.

  • Arif writes:

    Thanks for this tutorial…
    Is there any way to set up clustering on CentOS without shared storage (scsi)?

    • Riccardo writes:

      Thank you,
      if you want to have a cluster, I suppose you need to provide high availability.
      So in case of a hardware failure you should be able to avoid any downtime.
      To do so, you must have the same information (i.e. files) on all your cluster machines, and if you don't have external storage, you must synchronize the servers' local storage between the nodes. Take a look at DRBD.
      Bye
      Riccardo

  • rajib_bd writes:

    Thanks Riccardo for your nice step-by-step clustering configuration…
    I am facing a problem…
    ________________________
    Everything works fine for me until the line “service cman start”

    [root@rhel01~ /]# service cman start
    Starting cluster:
    Loading modules… done
    Mounting configfs… done
    Starting ccsd… done
    Starting cman… failed
    /usr/sbin/cman_tool: ccsd is not running
    [FAILED]
    _______________________

    I have followed your instructions but couldn't find any solution.
    I have stopped the gfs and gfs2 services.
    But I can't find any xml file like /etc/cluster/cluster.conf,
    because if we don't create the cluster.conf file, then we can't find it… before starting cman there is no step that creates the cluster.conf file.
    Also, about "Check that every node is mapped in the file /etc/hosts or resolved by the DNS system"??
    Do you mean only these 2 lines need to be written in /etc/hosts:
    "10.10.10.1 rhel-cluster-node1.mgmt.local rhel-cluster-node1
    10.10.10.2 rhel-cluster-node2.mgmt.local rhel-cluster-node2", or should anything else be edited? Also, what does "resolved by DNS" mean?
    How do I stop the openais service?
    Also, can you tell me what the /etc/hosts file should look like?

  • rajib writes:

    I have the same problem as Eric:
    Thanks for this tutorial!

    Everything works fine for me until the line “service cman start”

    [root@aqsavir21126 /]# service cman start
    Starting cluster:
    Loading modules… done
    Mounting configfs… done
    Starting ccsd… done
    Starting cman… failed
    /usr/sbin/cman_tool: ccsd is not running
    [FAILED]

    I can’t find out what is wrong, any ideas?
    Thanks (OS RHEL 5.5)

    I also followed your instructions but had no luck… there is no cluster.conf xml file there… because in your previous steps there is no option to create an xml file before the cman start command…

  • Hemant Parmar writes:

    Please explain to me how to create the connection between the clustering computer and the iSCSI computer.

  • Sarath writes:

    Hello,

    I configured a luci and ricci cluster with an IP and a database startup script as resources. The cluster works fine, but when the cluster service starts, the database starts on node2 and the cluster service starts on node1. Could you please help me resolve this issue?

  • riccardo writes:

    It was a pleasure!

