GlusterFS Cluster Deployment
GlusterFS is suited to fairly large storage workloads, for example KVM images.
Official documentation:
http://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/#purpose-of-this-document
Chapter 1  Installation and deployment (run on both machines)
1.1 Update the repo sources
# yum will be used to install centos-release-gluster
# yum will be used to install glusterfs-server
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
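Before installing, it may be worth confirming that yum actually sees the repos; a quick sketch:

yum clean all
yum repolist | grep -Ei 'gluster|epel'    # both repos should show up in the list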
1.2 Install via yum
[root@node1 ~]# yum install -y glusterfs-server
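A quick post-install check (a sketch):

rpm -q glusterfs-server    # confirm the package is installed
glusterfs --version        # print the GlusterFS release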
After installation completes, start the service.
1.3 Start the service
[root@node1 ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
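Since this box uses SysV init (note the /etc/init.d script), glusterd should probably also be enabled at boot; a sketch:

chkconfig glusterd on        # enable at boot on CentOS 6-style systems
chkconfig --list glusterd    # runlevels 2-5 should read 'on'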
1.4 Form the Gluster trusted pool
[root@node2 ~]# gluster peer probe 192.168.80.123    # the other machine's IP
peer probe: success.
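A single probe is enough to form the pool; membership is symmetric. If you prefer hostnames to IPs in the peer status output, you can probe by hostname instead (a sketch with a hypothetical name that would need DNS or /etc/hosts entries):

gluster peer probe gfs-node2    # hypothetical hostname for 192.168.80.201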
1.4.1 Check the trusted pool
[root@node2 ~]# gluster peer status    # can be run on either server
Number of Peers: 1

Hostname: 192.168.80.123
Uuid: 49e15d0e-d499-427a-87aa-fe573a7fd345
State: Peer in Cluster (Connected)
1.5 Create a distributed volume
192.168.80.123    machine 1
192.168.80.201    machine 2

Test
Create a brick directory on each server:
[root@node1 ~]# mkdir /data/exp1 -p    # machine 1
[root@node2 ~]# mkdir /data/exp2 -p    # machine 2
Run on machine 1:
[root@node1 yum.repos.d]# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2
volume create: test-volume: failed: The brick 192.168.80.123:/data/exp1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
[root@node1 yum.repos.d]# gluster volume create test-volume 192.168.80.123:/data/exp1/ 192.168.80.201:/data/exp2 force
volume create: test-volume: success: please start the volume to access data
The distributed volume was created successfully.
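In production the root-partition warning is worth heeding: bricks normally live on a dedicated filesystem. A minimal sketch, assuming a spare disk at /dev/sdb (hypothetical device) and xfsprogs installed:

mkfs.xfs /dev/sdb      # XFS is the commonly recommended brick filesystem
mkdir -p /data
mount /dev/sdb /data   # mount the disk where the bricks will live
mkdir -p /data/exp1    # then create the brick directory as above, no force needed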
1.5.1 Inspect the distributed volume
[root@node1 yum.repos.d]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
1.6 Create a replicated volume (similar to RAID 1)
Test
Create a brick directory on each server:
[root@node1 ~]# mkdir /data/exp3 -p    # machine 1
[root@node2 ~]# mkdir /data/exp4 -p    # machine 2
[root@node1 yum.repos.d]# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: repl-volume: failed: The brick 192.168.80.123:/data/exp3 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
[root@node1 yum.repos.d]# gluster volume create repl-volume replica 2 transport tcp 192.168.80.123:/data/exp3/ 192.168.80.201:/data/exp4 force
volume create: repl-volume: success: please start the volume to access data
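As the warning says, replica 2 is prone to split-brain. If a third node were available (hypothetical IP 192.168.80.202 and brick path /data/arb below), an arbiter brick avoids this without storing a full third copy; a sketch:

gluster peer probe 192.168.80.202
gluster volume create repl-volume replica 3 arbiter 1 \
    192.168.80.123:/data/exp3 192.168.80.201:/data/exp4 192.168.80.202:/data/arb force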
1.6.1 Inspect the replicated volume
[root@node1 yum.repos.d]# gluster volume info repl-volume
Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
1.7 Create a striped volume
Test
Create a brick directory on each machine:
[root@node1 ~]# mkdir /data/exp5 -p    # machine 1
[root@node2 ~]# mkdir /data/exp6 -p    # machine 2
Create:
[root@node1 exp3]# gluster volume create raid0-volume stripe 2 transport tcp 192.168.80.123:/data/exp5/ 192.168.80.201:/data/exp6 force
volume create: raid0-volume: success: please start the volume to access data
1.7.1 Inspect the raid0 (striped) volume
[root@node1 exp3]# gluster volume info raid0-volume
Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
1.8 A volume must be started before it can be used
Check:
[root@node1 exp3]# gluster volume status
Volume raid0-volume is not started
Volume repl-volume is not started
Volume test-volume is not started
Start them:
[root@node2 exp4]# gluster volume start raid0-volume
volume start: raid0-volume: success
[root@node2 exp4]# gluster volume start repl-volume
volume start: repl-volume: success
[root@node2 exp4]# gluster volume start test-volume
volume start: test-volume: success
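Starting volumes one by one gets tedious; a loop over `gluster volume list` does the same (a sketch):

for v in $(gluster volume list); do gluster volume start "$v"; done   # already-started volumes just report an error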
Check again:
[root@node2 exp4]# gluster volume status
Status of volume: raid0-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp5             49152     0          Y       43622
Brick 192.168.80.201:/data/exp6             49152     0          Y       43507

Task Status of Volume raid0-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: repl-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp3             49153     0          Y       43657
Brick 192.168.80.201:/data/exp4             49153     0          Y       43548
Self-heal Daemon on localhost               N/A       N/A        Y       43569
Self-heal Daemon on 192.168.80.123          N/A       N/A        Y       43678

Task Status of Volume repl-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49154     0          Y       43704
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608

Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks
Inspect with info:
[root@node2 exp4]# gluster volume info

Volume Name: raid0-volume
Type: Stripe
Volume ID: 123ddf8e-9081-44ba-8d9d-0178c05c6a68
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp5
Brick2: 192.168.80.201:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Volume Name: repl-volume
Type: Replicate
Volume ID: 089c6f46-8131-473a-a6e7-c475e2bd5785
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp3
Brick2: 192.168.80.201:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Test mounting
Mount on any server (the glusterfs-client package must be installed):
[root@node2 exp4]# mount.glusterfs 192.168.80.123:/test-volume /mnt/g1/
[root@node2 exp4]# mount.glusterfs 192.168.80.123:/repl-volume /mnt/g2
[root@node2 exp4]# mount.glusterfs 192.168.80.123:/raid0-volume /mnt/g3
[root@node2 exp4]# df -h | column -t
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/sda3                     18G   5.7G  12G    34%   /
tmpfs                         491M  0     491M   0%    /dev/shm
/dev/sda1                     190M  31M   150M   17%   /boot
192.168.80.123:/test-volume   36G   7.8G  27G    23%   /mnt/g1
192.168.80.123:/repl-volume   18G   5.7G  12G    34%   /mnt/g2
192.168.80.123:/raid0-volume  36G   7.8G  27G    23%   /mnt/g3
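These mounts do not survive a reboot. The usual fix is an /etc/fstab entry per volume, e.g. for the first one (a sketch):

192.168.80.123:/test-volume  /mnt/g1  glusterfs  defaults,_netdev  0 0   # _netdev waits for the network before mounting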
1.9 Test the three volume types
Distributed volume
Each file is written whole to /data/exp1 or /data/exp2 on one of the servers; the brick is chosen by filename hash, so placement looks random. A quick way to verify this is sketched below.
Replicated volume
Each file is written to both /data/exp3 and /data/exp4 on the two machines, the equivalent of RAID 1: two full copies.
Striped volume
Each file is split into chunks spread across /data/exp5 and /data/exp6, the equivalent of RAID 0.
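A sketch for the distributed case, reusing the /mnt/g1 mount from section 1.8: write a few files through the mount, then list the brick directories on each machine; every file should appear on exactly one brick.

for i in 1 2 3 4; do man tcp > /mnt/g1/t$i.txt; done
ls /data/exp1    # on machine 1
ls /data/exp2    # on machine 2; together the two bricks hold each file exactly once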
1.10 Test distributed + replicated
Run on both servers:
[root@node1 exp3]# mkdir /exp1 /exp2
Create the volume (append force to override the root-partition warning):

[root@node1 ~]# gluster volume create hehe-volume replica 2 transport tcp 192.168.80.123:/exp1/ 192.168.80.123:/exp2/ 192.168.80.201:/exp1/ 192.168.80.201:/exp2/ force
volume create: hehe-volume: success: please start the volume to access data
Start it:

[root@node1 ~]# gluster volume start hehe-volume
volume start: hehe-volume: success
Query:
[root@node1 ~]# gluster volume info hehe-volume
Volume Name: hehe-volume
Type: Distributed-Replicate
Volume ID: 321c8da7-43cd-40ad-a187-277018e43c9e
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp1
Brick2: 192.168.80.123:/exp2
Brick3: 192.168.80.201:/exp1
Brick4: 192.168.80.201:/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Create a mount point and mount:

[root@node1 ~]# mkdir /mnt/g5
[root@node1 ~]# mount.glusterfs 192.168.80.123:/hehe-volume /mnt/g5/
[root@node1 ~]# df -h
Filesystem                   Size  Used  Avail  Use%  Mounted on
/dev/sda3                    18G   2.2G  15G    13%   /
tmpfs                        491M  0     491M   0%    /dev/shm
/dev/sda1                    190M  31M   150M   17%   /boot
192.168.80.123:/hehe-volume  18G   3.9G  14G    23%   /mnt/g5
Test the result:
[root@node1 ~]# man tcp > /mnt/g5/tcp1.txt
[root@node1 ~]# man tcp > /mnt/g5/tcp2.txt
[root@node1 ~]# man tcp > /mnt/g5/tcpe.txt
[root@node1 ~]# man tcp > /mnt/g5/tcp4.txt
Machine 1:
[root@node1 ~]# tree /exp*
/exp1
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt
/exp2
├── tcp2.txt
├── tcp4.txt
└── tcpe.txt
Machine 2:

[root@node2 ~]# tree /exp*
/exp1
└── tcp1.txt
/exp2
└── tcp1.txt
This distribution is uneven: replica pairs are formed from consecutive bricks in the order they are listed at creation time, so here both copies of a file end up on the same server. Let's test again, creating a replicated volume with the bricks in a different order.
[root@node1 ~]# gluster volume create hehehe-volume replica 2 transport tcp 192.168.80.123:/exp3/ 192.168.80.201:/exp3/ 192.168.80.123:/exp4/ 192.168.80.201:/exp4/ force
volume create: hehehe-volume: success: please start the volume to access data
Start it:

[root@node1 ~]# gluster volume start hehehe-volume
volume start: hehehe-volume: success
Check the status:
[root@node1 ~]# gluster volume info hehehe-volume
Volume Name: hehehe-volume
Type: Distributed-Replicate
Volume ID: 2f24e2cf-bb86-4fe8-a2bc-23f3d07f6f86
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/exp3
Brick2: 192.168.80.201:/exp3
Brick3: 192.168.80.123:/exp4
Brick4: 192.168.80.201:/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Mount:

[root@node1 ~]# mkdir /mnt/gg
[root@node1 ~]# mount.glusterfs 192.168.80.123:/hehehe-volume /mnt/gg
Write test files:
[root@node1 gg]# man tcp > /mnt/gg/tcp1.txt
[root@node1 gg]# man tcp > /mnt/gg/tcp2.txt
[root@node1 gg]# man tcp > /mnt/gg/tcp3.txt
[root@node1 gg]# man tcp > /mnt/gg/tcp4.txt
Check on machine 1:
[root@node1 gg]# ll /exp3
total 168
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp2.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp3.txt
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp4.txt
[root@node1 gg]# ll /exp4
total 56
-rw-r--r-- 2 root root 51310 Oct 20 11:02 tcp1.txt
Check on machine 2:
[root@node2 ~]# ll /exp3/
total 168
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp2.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp3.txt
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp4.txt
[root@node2 ~]# ll /exp4/
total 56
-rw-r--r-- 2 root root 51310 Dec 25 18:05 tcp1.txt
Now the distribution is even: each file has one copy on each server, so the data is both distributed and replicated successfully.
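One quick sanity check that a replica pair really holds identical copies (a sketch, assuming root SSH access between the machines):

md5sum /exp3/tcp2.txt                       # on machine 1
ssh 192.168.80.201 md5sum /exp3/tcp2.txt    # same brick path on machine 2; checksums should match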
1.11 Volume expansion test
[root@node2 ~]# mkdir /data/exp9
[root@node2 ~]# gluster volume add-brick test-volume 192.168.80.201:/data/exp9/ force    # the brick is added to an existing volume
volume add-brick: success
Check the status:
[root@node2 ~]# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Brick3: 192.168.80.201:/data/exp9    # the newly added brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Rebalance the distributed volume:
[root@node2 g1]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: aa05486b-11df-4bac-9ac7-2237a8c12ad6
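As the message suggests, the progress of the data movement can be followed with the status subcommand (a sketch):

gluster volume rebalance test-volume status   # per-node files scanned/moved and the overall state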
1.11.1 Remove a brick from the test volume
Removing a brick can cause data loss.
[root@node1 gg]# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 start
volume remove-brick start: success
ID: 4f16428a-7e9f-4b7b-bb07-2917a2f14323
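The drain that start triggers can be watched before finalizing the removal (a sketch):

gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 status   # wait for 'completed'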
Check again:
[root@node2 g1]# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158     0          Y       1237
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608
Brick 192.168.80.201:/data/exp9             49159     0          Y       44717

Task Status of Volume test-volume
------------------------------------------------------------------------------
Task                 : Remove brick
ID                   : 4f16428a-7e9f-4b7b-bb07-2917a2f14323
Removed bricks:
192.168.80.201:/data/exp9
Status               : completed
Now remove the brick for good (force commits the removal immediately):
[root@node2 g1]# gluster volume remove-brick test-volume 192.168.80.201:/data/exp9 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
Check the status again:
[root@node2 g1]# gluster volume status test-volume
Status of volume: test-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.80.123:/data/exp1             49158     0          Y       1237
Brick 192.168.80.201:/data/exp2             49154     0          Y       43608

Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks
After removing the brick, rebalance once more:
[root@node1 gg]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 747d499c-8f20-4514-b3af-29d93ce3a995
[root@node1 gg]# gluster volume info test-volume
Volume Name: test-volume
Type: Distribute
Volume ID: 099ad2bc-b83c-4713-9e70-49fc054b5163
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.80.123:/data/exp1
Brick2: 192.168.80.201:/data/exp2
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
This article is reproduced from the 蓝叶子Sheep 51CTO blog. Original link: http://blog.51cto.com/dellinger/2054693. Please contact the original author for reprint permission.