RHEL6 Red Hat HA (RHCS): ricci + luci + fence

1. Overall architecture:

[screenshot]

2. Lab environment:

luci management host: 192.168.122.1

ricci nodes: 192.168.122.34, 192.168.122.33, 192.168.122.82

yum repository:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=ftp://192.168.122.1/pub/rhel6.3
gpgcheck=0

[HighAvailability]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/rhel6.3/HighAvailability
gpgcheck=0

[LoadBalancer]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/rhel6.3/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/rhel6.3/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/rhel6.3/ScalableFileSystem
gpgcheck=0


The extra sections (HighAvailability, LoadBalancer, ResilientStorage, ScalableFileSystem — highlighted in red in the original post) must be added to the yum repository configuration.
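As a sketch, each extra section can be appended to a repo file under /etc/yum.repos.d/ (shown for the HighAvailability section only; /tmp is used here so the sketch is side-effect free, and the other sections follow the same pattern):

```shell
#!/bin/sh
# Write one of the extra repo sections to a .repo file.
# /tmp is for illustration; on a node this would live in /etc/yum.repos.d/.
REPO=/tmp/rhel6.3-ha.repo
cat > "$REPO" <<'EOF'
[HighAvailability]
name=Instructor Server Repository
baseurl=ftp://192.168.122.1/pub/rhel6.3/HighAvailability
gpgcheck=0
EOF
grep -c '^baseurl=' "$REPO"    # prints 1
```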


Note:

NetworkManager is not supported on cluster nodes. If NetworkManager is installed on a cluster node, it should be removed or disabled.


3. Environment configuration:

The following steps are performed on all HA nodes:

Install ricci on every HA node; install luci on the client (which must have a web browser).

# yum -y install ricci

[[email protected] ~]# chkconfig ricci on

[[email protected] ~]# /etc/init.d/ricci start

[[email protected] ~]# passwd ricci    # required -- authentication from luci will fail without a ricci password

Start luci:

[[email protected]]# /etc/init.d/luci start

Point your web browser to https://localhost.localdomain:8084 (or equivalent) to access luci

Open https://localhost.localdomain:8084 in a browser and log in as root.




Create the cluster:

[screenshot]

At this point the luci management host is automatically installing the required packages on the ricci HA nodes:

[screenshot]

On the HA nodes you can see a yum process running:

[screenshot]

When it finishes:

[screenshot]

4. Fence device configuration:

Virtual-machine fence devices are used. Mapping between hostnames, IPs, and KVM domain names:

hostname   ip               kvm domain name
desk34     192.168.122.34   ha1
desk33     192.168.122.33   ha2
desk82     192.168.122.82   desk82


On the luci host:

[[email protected]]# yum -y install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast

[[email protected]]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on the default network
interface.  In environments where the virtual machines are
using the host machine as a gateway, this *must* be set
(typically to virbr0).
Set to 'none' for no interface.

Interface [none]: virbr0    # depends on the host NIC setup; may be br0 instead

If the virtual machines reach the physical host via NAT, choose virbr0; with a bridged setup, choose br0.

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.


=== Begin Configuration ===
fence_virtd {
	listener = "multicast";
	backend = "libvirt";
	module_path = "/usr/lib64/fence-virt";
}

listeners {
	multicast {
		key_file = "/etc/cluster/fence_xvm.key";
		address = "225.0.0.12";
		family = "ipv4";
		port = "1229";
		interface = "virbr0";
	}
}

backends {
	libvirt {
		uri = "qemu:///system";
	}
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

The resulting fence_virtd configuration file on the luci host is /etc/fence_virt.conf.
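The Interface answer above depends on the host's networking. As a rough sketch (assumes the iproute2 `ip` command is available; the virbr0/br0 choice follows the NAT-vs-bridge note above), you can probe which bridge exists:

```shell
#!/bin/sh
# Probe for libvirt's NAT bridge (virbr0) or a manual bridge (br0);
# fall back to "none" if neither interface exists on this host.
iface=none
for cand in virbr0 br0; do
    if ip link show "$cand" >/dev/null 2>&1; then
        iface=$cand
        break
    fi
done
echo "use interface: $iface"
```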


[[email protected]]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
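A sketch of the key-generation step, written to /tmp so it can be tried anywhere (on the real hosts the path is /etc/cluster/fence_xvm.key); fence_xvm only requires that the same random key be present on every host and guest:

```shell
#!/bin/sh
# Generate a 128-byte shared key like the one fence_xvm uses,
# then verify its size. The /tmp path is for illustration only.
KEY=/tmp/fence_xvm.key
dd if=/dev/urandom of="$KEY" bs=128 count=1 2>/dev/null
stat -c %s "$KEY"    # prints 128
```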


[[email protected]]# scp /etc/cluster/fence_xvm.key desk33:/etc/cluster/

[[email protected]]# scp /etc/cluster/fence_xvm.key desk34:/etc/cluster/

[[email protected]]# scp /etc/cluster/fence_xvm.key desk82:/etc/cluster/


[[email protected]]# /etc/init.d/fence_virtd start

[[email protected]]# netstat -anplu | grep fence

udp 0 0 0.0.0.0:1229 0.0.0.0:* 10601/fence_virtd

[[email protected]]# fence_xvm -H vm4 -o reboot    # verify that fencing can control a virtual machine; if it works, the corresponding VM reboots

Add the fence device:

[screenshot]

Everything done in the web UI is written to /etc/cluster/cluster.conf on each node:

[[email protected]]# cat cluster.conf

<?xml version="1.0"?>
<cluster config_version="2" name="wangzi_1">
	<clusternodes>
		<clusternode name="192.168.122.34" nodeid="1"/>
		<clusternode name="192.168.122.33" nodeid="2"/>
		<clusternode name="192.168.122.82" nodeid="3"/>
	</clusternodes>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="vmfence"/>
	</fencedevices>
</cluster>

On each node:

[screenshot]

[screenshot]
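After the fence method and instance are added to each node in the web UI, cluster.conf gains per-node entries roughly like the following (a hypothetical fragment; the method name is illustrative, and the `domain` value comes from the hostname/KVM-domain table above):

```xml
<clusternode name="192.168.122.34" nodeid="1">
	<fence>
		<method name="vmfence-1">
			<device name="vmfence" domain="ha1"/>
		</method>
	</fence>
</clusternode>
```

Note that `domain` must match the KVM domain name (ha1), not the hostname (desk34), or fencing will target the wrong guest.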

Add a failover domain:

[screenshot]

"Priority" is the node priority; a smaller number means higher priority.

"No Failback" means the service does not fail back (failback is the default).
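In cluster.conf these settings end up roughly as follows (a hypothetical fragment; the domain name `webfail` and the `ordered` flag are illustrative, and `nofailback="0"` keeps the default failback behavior):

```xml
<rm>
	<failoverdomains>
		<failoverdomain name="webfail" ordered="1" nofailback="0">
			<failoverdomainnode name="192.168.122.34" priority="1"/>
			<failoverdomainnode name="192.168.122.33" priority="2"/>
			<failoverdomainnode name="192.168.122.82" priority="3"/>
		</failoverdomain>
	</failoverdomains>
</rm>
```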


Add resources:

[screenshot]

[screenshot]

The IP is a virtual floating IP used for external access; it appears on whichever HA node is currently serving.

The smaller the number in the last field, the faster the floating IP switches over.

The httpd service must be installed on the HA nodes beforehand, but must not be started manually.
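These two resources correspond roughly to the following cluster.conf fragment (hypothetical; `sleeptime` is the "number in the last field" mentioned above, and httpd is modeled here as an init-script resource):

```xml
<resources>
	<ip address="192.168.122.122" monitor_link="on" sleeptime="10"/>
	<script file="/etc/init.d/httpd" name="httpd"/>
</resources>
```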


Add a service group:

[screenshot]

Under the service group "apsche", add the resources created above: the IP address and httpd.

[screenshot]

You can see that the cluster has automatically started httpd on 192.168.122.34:

[[email protected]~]# /etc/init.d/httpd status

httpd (pid 14453) is running...


Testing:

[[email protected] ~]# clustat
Cluster Status for wangzi_1 @ Sat Sep  7 02:52:18 2013
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 192.168.122.34                        1 Online, Local, rgmanager
 192.168.122.33                        2 Online, rgmanager
 192.168.122.82                        3 Online, rgmanager

 Service Name                  Owner (Last)            State
 ------- ----                  ----- ------            -----
 service:apsche                192.168.122.34          started

1) Stop the httpd service:

[[email protected] ~]# /etc/init.d/httpd stop

[screenshot]

[screenshot]

The floating IP 192.168.122.122 appears on desk33:

[[email protected] ~]# ip addr show
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d0:fe:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.33/24 brd 192.168.122.255 scope global eth0
    inet 192.168.122.122/24 scope global secondary eth0

If httpd on desk33 is then stopped as well, the service switches to desk82.

If httpd on desk34 is started again, the floating IP returns to desk34, because desk34 has the highest priority and failback is enabled.

2) Simulate a network failure:

[[email protected] ~]# ifconfig eth0 down

desk34 is rebooted (fenced) and the service switches to desk33;

once desk34 has finished rebooting, the service fails back to desk34.

3) Kernel crash:

[[email protected] ~]# echo c > /proc/sysrq-trigger

[screenshot]

[screenshot]

The host reboots and the service switches to desk33.


Xi'an Shiyou University

Wang Ziyin

[email protected]


Reposted from: https://blog.51cto.com/wangziyin/1303239
