The Ceph lab requires recreating the three virtual machines used for the Ceph environment. (The following steps are performed on node1, node2 and node3.)

For initial VM setup, see: OpenStack Queens Dual-Node Architecture Notes 1, VM Environment Installation - 林派文的博客 - CSDN博客 https://blog.csdn.net/qq_38387984/article/details/83245908

Network node: 172.31.31.140  node1

172.31.31.138  node2

172.31.31.137  node3

OpenStack Queens Dual-Node Architecture Notes 9, Ceph Installation 1:

Apply the following configuration on all three nodes:

Install NTP (nodes 1/2/3)

Install the NTP service on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift.

yum install -y ntp ntpdate

/usr/sbin/ntpdate ntp6.aliyun.com

echo "*/3 * * * * /usr/sbin/ntpdate ntp6.aliyun.com  &> /dev/null" > /tmp/crontab

crontab /tmp/crontab
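Note that `crontab /tmp/crontab` replaces root's entire crontab with the contents of that file. As a quick sanity check that the sync job landed, a hedged sketch (`check_cron` is a hypothetical helper name, not part of the guide):

```shell
# check_cron: verify root's crontab now contains the periodic ntpdate job.
check_cron() {
    crontab -l | grep -q '/usr/sbin/ntpdate'
}

# run after installing the crontab:
# check_cron && echo "ntpdate cron job installed"
```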

Install Ceph (nodes 1/2/3)

yum install -y libibverbs-utils lttng-tools libbabeltrace fuse-libs leveldb gperftools-libs python-prettytable python-requests selinux-policy cryptsetup psmisc python-setuptools gdisk python-flask pyOpenSSL python-cherrypy python-pecan mailcap

yum install librados2 -y

yum install -y python-rados librbd1 python-rbd libcephfs2 python-cephfs librgw2 librados libradosstriper1 libradosstriper-devel python-rgw ceph-common ceph-selinux ceph-base ceph-osd ceph-mon ceph-mds ceph-mgr ceph ceph-radosgw

Configure the hosts file on each node (nodes 1/2/3)

[root@node1 ~]# cat /etc/hosts

172.31.31.140  node1

172.31.31.138  node2

172.31.31.137  node3

Set each node's hostname (nodes 1/2/3)

[root@node1 ~]# hostnamectl set-hostname node1

[root@node2 ~]# hostnamectl set-hostname node2

[root@node3 ~]# hostnamectl set-hostname node3

Configure the firewall on each node (nodes 1/2/3)

Ceph Monitors communicate on port 6789 by default; OSDs use ports in the 6800-7300 range.

firewall-cmd --zone=public --add-port=6789/tcp --permanent

firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

firewall-cmd --reload

In a lab environment you can instead simply stop the firewall and disable SELinux:

systemctl stop firewalld

systemctl disable firewalld

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

setenforce 0

Passwordless SSH login (node1)

[root@node1 ~]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:lrcMtzRJYVl4DOojTZK8bNNmInAiF/5qC9lHCVigoI0 root@node1

The key's randomart image is:

+---[RSA 2048]----+

|o.o       +*.    |

|+* . . . oo.o    |

|E B . + o ..     |

| o * o B o .     |

|    = B S *      |

| o o o B B +     |

|o + .     +      |

| o o             |

|  .              |

+----[SHA256]-----+

Copy the key to each node (node1)

[root@node1 ~]# ssh-copy-id node1

[root@node1 ~]# ssh-copy-id node2

[root@node1 ~]# ssh-copy-id node3
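Before proceeding, it is worth confirming that passwordless login actually works to every node. A hedged sketch (`check_ssh` is a hypothetical helper name; BatchMode disables password prompts, so a missing key fails fast instead of hanging):

```shell
# check_ssh: try a non-interactive login to every node; fails on the
# first node that would still ask for a password.
check_ssh() {
    for n in node1 node2 node3; do
        ssh -o BatchMode=yes "$n" hostname || return 1
    done
}

# run on node1: check_ssh && echo "passwordless SSH OK"
```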

Create a working directory

A number of configuration files are generated during deployment, so it is recommended to create a dedicated directory for them, for example mk-ceph-cluster.

[root@node1 ~]# mkdir /tmp/mk-ceph-cluster

[root@node1 ~]# cd /tmp/mk-ceph-cluster

(All of the following steps are run in /tmp/mk-ceph-cluster.)

ceph.conf (node1)

Create the ceph.conf configuration file. The example below uses defaults; adjust and extend it for your own environment.

[global]

fsid = 4ae8397d-810b-496c-9619-9b0eaff1dd59

mon_initial_members = node1

mon_host = 172.31.31.140

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

filestore_xattr_use_omap = true

Generate the keyrings (node1)

ceph-authtool --create-keyring ceph.keyring --gen-key -n mon. --cap mon 'allow *' 

ceph-authtool --create-keyring ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' 

ceph-authtool --create-keyring ceph.client.bootstrap-osd.keyring --gen-key -n client.bootstrap-osd --cap mon 'allow profile bootstrap-osd' 

ceph-authtool --create-keyring ceph.mgr.node1.keyring --gen-key -n mgr.node1 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 

ceph-authtool --create-keyring ceph.mgr.node2.keyring --gen-key -n mgr.node2 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 

ceph-authtool --create-keyring ceph.mgr.node3.keyring --gen-key -n mgr.node3 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'

Import the keys (node1)

ceph-authtool ceph.keyring  --import-keyring ceph.client.admin.keyring 

ceph-authtool ceph.keyring  --import-keyring ceph.client.bootstrap-osd.keyring 

ceph-authtool ceph.keyring  --import-keyring ceph.mgr.node1.keyring 

ceph-authtool ceph.keyring  --import-keyring ceph.mgr.node2.keyring 

ceph-authtool ceph.keyring  --import-keyring ceph.mgr.node3.keyring

Distribute the configuration (node1)

monmaptool --create --add node1 172.31.31.140:6789   --fsid 4ae8397d-810b-496c-9619-9b0eaff1dd59 monmap

cp ceph.client.admin.keyring ceph.client.bootstrap-osd.keyring ceph.keyring  ceph.conf monmap /etc/ceph 

scp ceph.client.admin.keyring ceph.client.bootstrap-osd.keyring ceph.keyring  ceph.conf monmap node2:/etc/ceph 

scp ceph.client.admin.keyring ceph.client.bootstrap-osd.keyring ceph.keyring  ceph.conf monmap node3:/etc/ceph 
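Before initializing the monitor, it can help to confirm that the distributed monmap carries this cluster's fsid. A hedged sketch (`check_monmap` is a hypothetical helper name):

```shell
# check_monmap: print the monmap and verify it contains the fsid that was
# written into ceph.conf and passed to monmaptool above.
check_monmap() {
    monmaptool --print /etc/ceph/monmap | grep -q '4ae8397d-810b-496c-9619-9b0eaff1dd59'
}

# run on each node: check_monmap && echo "monmap OK"
```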

   

Initialize the monitor (node1)

mkdir /var/lib/ceph/mon/ceph-mon.node1

ceph-mon --cluster ceph --mkfs -i node1 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.keyring  

 

touch /var/lib/ceph/mon/ceph-mon.node1/done

systemctl start ceph-mon@node1

chown -R ceph:ceph /var/lib/ceph/

systemctl reset-failed ceph-mon@node1

systemctl restart ceph-mon@node1

systemctl status ceph-mon@node1
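Once the monitor unit is active, the cluster should answer queries. A hedged sketch (`check_mon` is a hypothetical helper name; before any OSDs exist, health is usually HEALTH_WARN or HEALTH_ERR rather than HEALTH_OK):

```shell
# check_mon: the monitor is serving requests if `ceph health` answers at
# all; any HEALTH_* status means the mon is up and reachable.
check_mon() {
    ceph health | grep -q '^HEALTH_'
}

# run on node1: check_mon && echo "monitor is answering"
```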

(See the troubleshooting notes for this step: OpenStack Queens Dual-Node Architecture Notes 9, Ceph Installation 1)

Initialize the disks (node1)

ceph-disk zap /dev/sdb

ssh node2 ceph-disk zap /dev/sdb

ssh node3 ceph-disk zap /dev/sdb

Prepare the OSDs (node1)

ceph-disk prepare --cluster ceph --cluster-uuid 4ae8397d-810b-496c-9619-9b0eaff1dd59 /dev/sdb

ssh node2 ceph-disk prepare --cluster ceph --cluster-uuid 4ae8397d-810b-496c-9619-9b0eaff1dd59 /dev/sdb

ssh node3 ceph-disk prepare --cluster ceph --cluster-uuid 4ae8397d-810b-496c-9619-9b0eaff1dd59 /dev/sdb

Activate the OSDs (node1)

ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/ceph.client.bootstrap-osd.keyring

ssh node2 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/ceph.client.bootstrap-osd.keyring

ssh node3 ceph-disk activate /dev/sdb1 --activate-key /etc/ceph/ceph.client.bootstrap-osd.keyring
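After activation, all three OSDs should report as up. A hedged sketch (`osds_up` is a hypothetical helper name) that counts the up entries in `ceph osd tree`:

```shell
# osds_up: count OSD rows marked "up" in the tree output; with one OSD
# per node, this cluster should report 3.
osds_up() {
    ceph osd tree | awk '/ up /{n++} END{print n+0}'
}

# run on node1: test "$(osds_up)" = 3 && echo "all OSDs up"
```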

Create the mgr (node1)

mkdir -p /var/lib/ceph/mgr/ceph-node1

ssh node2 mkdir -p /var/lib/ceph/mgr/ceph-node2

ssh node3 mkdir -p /var/lib/ceph/mgr/ceph-node3

#scp keyring

cp ceph.mgr.node1.keyring /var/lib/ceph/mgr/ceph-node1/keyring

scp ceph.mgr.node2.keyring node2:/var/lib/ceph/mgr/ceph-node2/keyring

scp ceph.mgr.node3.keyring node3:/var/lib/ceph/mgr/ceph-node3/keyring

Start the mgr (ceph:ceph ownership must be granted on all three nodes; restart services as prompted) (node1)

chown -R ceph:ceph /var/lib/ceph/

systemctl start ceph-mgr@node1

systemctl reset-failed ceph-mgr@node1

systemctl start ceph-mgr@node1

systemctl status ceph-mgr@node1

node2:

  chown -R ceph:ceph /var/lib/ceph/

  systemctl start ceph-mgr@node2

  systemctl reset-failed ceph-mgr@node2

  systemctl start ceph-mgr@node2

  systemctl status ceph-mgr@node2

node3:

  chown -R ceph:ceph /var/lib/ceph/

  systemctl start ceph-mgr@node3

  systemctl reset-failed ceph-mgr@node3

  systemctl start ceph-mgr@node3

  systemctl status ceph-mgr@node3

Check cluster health (node1)

If `ceph health` reports HEALTH_OK and the placement groups are active+clean, the cluster is healthy.

root@node1:~# ceph health

Enable the dashboard (node1)

  ceph config-key put mgr/dashboard/server_addr 172.31.31.140 --cluster=ceph

  ceph config-key put mgr/dashboard/server_port 7000 --cluster=ceph

  ceph mgr module enable dashboard

Open a browser and go to 172.31.31.140:7000 to verify the installation succeeded.
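The dashboard address is simply the two config keys set above combined into a URL. A sketch of checking it from the shell (the curl line is commented out because it needs the live cluster):

```shell
# Build the dashboard URL from the values configured above.
server_addr=172.31.31.140
server_port=7000
url="http://${server_addr}:${server_port}/"
echo "$url"

# On a machine that can reach node1:
# curl -fsS -o /dev/null "$url" && echo "dashboard is up"
```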

OpenStack Queens Dual-Node Architecture Notes 1, VM Environment Installation: https://blog.csdn.net/qq_38387984/article/details/83245908

OpenStack Queens Dual-Node Architecture Notes 2, OpenStack Environment Installation: https://blog.csdn.net/qq_38387984/article/details/83245941

OpenStack Queens Dual-Node Architecture Notes 3, Keystone Installation: https://blog.csdn.net/qq_38387984/article/details/83274421

OpenStack Queens Dual-Node Architecture Notes 4, Glance Installation: https://blog.csdn.net/qq_38387984/article/details/83274547

OpenStack Queens Dual-Node Architecture Notes 5, Nova Installation: https://blog.csdn.net/qq_38387984/article/details/83274567

OpenStack Queens Dual-Node Architecture Notes 6, Neutron Installation: https://blog.csdn.net/qq_38387984/article/details/83274578

OpenStack Queens Dual-Node Architecture Notes 7, Dashboard Installation: https://blog.csdn.net/qq_38387984/article/details/83274601

OpenStack Queens Dual-Node Architecture Notes 8, Verifying a Dashboard Instance: https://blog.csdn.net/qq_38387984/article/details/83502979

OpenStack Queens Dual-Node Architecture Notes 9, Ceph Installation 1: https://blog.csdn.net/qq_38387984/article/details/83502996

OpenStack Queens Dual-Node Architecture Notes 10, Ceph Installation 2: https://blog.csdn.net/qq_38387984/article/details/83503016

OpenStack Queens Dual-Node Architecture Notes 11, Ceph Installation 3: https://blog.csdn.net/qq_38387984/article/details/83503033

 
