Monday, October 24, 2016

How to test Pacemaker to create a Virtual IP for LB purposes across 3 nodes

Sometimes you need more than one LB for HA purposes, and you usually want a single floating virtual IP shared across all LB nodes. Pacemaker/Corosync makes this very easy. For testing purposes, I will use Vagrant to create 3 nodes (node1, node2, node3). Once booted up, all 3 machines are configured with the right hostnames.

nodeone = 'node1'
nodetwo = 'node2'
nodethree = 'node3'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  #### node 1 ###########
  config.vm.define :"node1" do |node1|
    node1.vm.box = BOX
    node1.vm.box_url = BOX_URL
    node1.vm.network :private_network, ip: "192.168.2.101"
    node1.vm.hostname = nodeone
    node1.vm.synced_folder ".", "/vagrant", disabled: true
    node1.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
    node1.vm.provision "shell", path: "post-deploy.sh", run: "always"
    node1.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--memory", "1300"]
      v.name = nodeone
      v.gui = true
    end
  end
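
The node2 and node3 definitions follow the same pattern; a trimmed sketch (only the IP, hostname, and VM name change, and the ssh.shell and virtualbox provider lines mirror node1):

  #### node 2 ###########
  config.vm.define :"node2" do |node2|
    node2.vm.box = BOX
    node2.vm.box_url = BOX_URL
    node2.vm.network :private_network, ip: "192.168.2.102"
    node2.vm.hostname = nodetwo
    node2.vm.synced_folder ".", "/vagrant", disabled: true
    node2.vm.provision "shell", path: "post-deploy.sh", run: "always"
  end
  # node3 is identical with ip "192.168.2.103" and hostname nodethree
end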

And here is post-deploy.sh, which adds the hosts entries on each node:
#!/bin/bash
# append static hosts entries once; the grep guard keeps re-provisioning idempotent
value=$(grep -ic "entry" /etc/hosts)
if [ "$value" -eq 0 ]
then
echo "
################ hosts entry ############
192.168.2.101 node1
192.168.2.102 node2
192.168.2.103 node3
######################################################
" >> /etc/hosts
fi

Once done, you can call vagrant up node1 node2 node3 to have the 3 VMs ready.
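
To confirm the boxes are up and the hosts entries landed, a quick optional sanity check from the host:

vagrant status
vagrant ssh node1 -c "ping -c 1 node2"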

Now vagrant ssh into each node to install the dependencies.

yum install -y pcs pacemaker corosync

After that is done, enable all the daemons to start automatically.
systemctl enable pcsd
systemctl enable pacemaker
systemctl enable corosync
# disable firewalld (fine for a throwaway test cluster, not for production)
systemctl disable firewalld
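
If you would rather keep firewalld running, opening firewalld's built-in high-availability service should work instead (untested in this setup):

firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload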

Now it is time to configure corosync with the node topology.

You can just copy /etc/corosync/corosync.conf.example to /etc/corosync/corosync.conf,
then change the IP and quorum settings (by default it uses mcast to maintain the cluster membership).
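
A sketch of the parts you will touch (the cluster name and mcast address here are placeholders; bindnetaddr must match the private network used above):

totem {
    version: 2
    cluster_name: lbtest
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.2.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
}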


Then set a password for the hacluster user (created when pcs is installed) on all 3 nodes:

echo your_special_complicated_password | passwd --stdin hacluster
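
From the host, you can do this for all nodes in one pass (a sketch; it just runs the same command on each box via sudo):

for n in node1 node2 node3; do
  vagrant ssh $n -c "echo your_special_complicated_password | sudo passwd --stdin hacluster"
done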

Once done, you can call pcs cluster auth to authorize each host using the shared credential:
pcs cluster auth node1 node2 node3 -u hacluster

After this, start corosync and pacemaker on all nodes.
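
On each node (corosync first, since pacemaker sits on top of it):

systemctl start corosync
systemctl start pacemaker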

Run crm_mon to verify that all 3 nodes show up as online.

Now we can create a floating VIP for HA purposes. Here enp0s8 is the VirtualBox private network interface and 192.168.2.100 is an unused address on that network:
pcs resource create VIP2 IPaddr2 ip=192.168.2.100 cidr_netmask=32 nic=enp0s8 op monitor interval=30s

After that, you might see that the resource status is always Stopped. That is because Pacemaker refuses to start resources until fencing is configured; since this test cluster has no fencing device, we disable STONITH (do not do this in production):
pcs property set stonith-enabled=false

Check the status again; the VIP is now bound to one node (node1 here).
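
If you prefer a one-shot snapshot over the interactive view:

crm_mon -1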



To test the HA failover, shut down node1 and run crm_mon again; the VIP gets reassigned to node2.
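
From the host that is simply (assuming you run it from the Vagrant project directory):

vagrant halt node1
vagrant ssh node2 -c "crm_mon -1"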

 