[Linux Storage Series] Deploying a Ceph Mimic Cluster

Previous tutorial: Ceph architecture and principles
Next tutorial: Using Ceph storage

I. Lab Preparation

1. Host planning

OS version: CentOS 7.9
IP: 192.168.140.10   Hostname: ceph-node1    Role: Ceph cluster node + ceph-deploy   Disk: /dev/sdb
IP: 192.168.140.11   Hostname: ceph-node2    Role: Ceph cluster node                 Disk: /dev/sdb
IP: 192.168.140.12   Hostname: ceph-node3    Role: Ceph cluster node                 Disk: /dev/sdb
IP: 192.168.140.13   Hostname: ceph-client   Role: business/client server

  • Set each host's hostname with hostnamectl set-hostname <hostname> (see the sketch below)
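
A minimal sketch of those hostname changes, with each command run on its corresponding host from the plan above:

# run each line on the matching host
hostnamectl set-hostname ceph-node1    # on 192.168.140.10
hostnamectl set-hostname ceph-node2    # on 192.168.140.11
hostnamectl set-hostname ceph-node3    # on 192.168.140.12
hostnamectl set-hostname ceph-client   # on 192.168.140.13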

2. Disable the firewall and SELinux on all hosts and configure time synchronization (important)

Disable the firewall and SELinux.
Install the ntpdate command first with yum install -y ntpdate.
Then synchronize the time once: ntpdate 120.25.115.20
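
A minimal sketch of these preparation steps, assuming firewalld and the stock SELinux configuration (run on every host):

# disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# disable SELinux (fully effective after the reboot later in this guide)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# install ntpdate and sync the clock once
yum install -y ntpdate
ntpdate 120.25.115.20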

Set up a cron job for periodic time synchronization

Cron job tutorial: https://www.wsjj.top/archives/57

[root@ceph-node1 ~]# crontab  -e
*/30 * * * * /usr/sbin/ntpdate 120.25.115.20 &> /dev/null

[root@ceph-node1 ~]# crontab  -l
*/30 * * * * /usr/sbin/ntpdate 120.25.115.20 &> /dev/null

3. Configure passwordless SSH (important)

Run the following on the ceph-node1 node.

[root@ceph-node1 ~]# ssh-keygen -t rsa
[root@ceph-node1 ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

Write a loop to copy the key files to the other machines.

[root@ceph-node1 ~]# for i in 11 12 13
> do
> scp -r /root/.ssh/ root@192.168.140.$i:/root/
> done
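
To confirm that passwordless login works, a quick check is to run a remote command against each node; it should print each hostname without asking for a password:

for i in 11 12 13
do
ssh root@192.168.140.$i hostname
done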

4. Configure hostname resolution (important)

[root@ceph-node1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.140.10 ceph-node1.linux.com ceph-node1
192.168.140.11 ceph-node2.linux.com ceph-node2
192.168.140.12 ceph-node3.linux.com ceph-node3
192.168.140.13 ceph-client.linux.com ceph-client

Copy the hosts file to the other machines.

[root@ceph-node1 ~]# for i in 11 12 13
> do
> scp -r /etc/hosts root@192.168.140.$i:/etc/
> done
hosts                                                   100%  349   541.8KB/s   00:00    
hosts                                                   100%  349   258.4KB/s   00:00    
hosts                                                   100%  349   286.0KB/s   00:00

II. Environment Preparation

1. On all hosts, replace the default base repo with the Aliyun mirror and configure the EPEL repo (important)

[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph-node1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

2. Configure the Ceph package repository (important)

[root@ceph-node1 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1
 
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=0
priority=1

Copy the repo files to the other machines (important)

[root@ceph-node1 ~]# for i in 11 12 13
> do
> scp -r /etc/yum.repos.d/*.repo root@192.168.140.$i:/etc/yum.repos.d/
> done

Clean the old yum cache and build a new one.

[root@ceph-node1 ~]# for i in 10 11 12 13
> do
> ssh root@192.168.140.$i "yum clean all && yum makecache fast"
> done

Update the systems to the latest version (important)

[root@ceph-node1 ~]# yum update -y
[root@ceph-node2 ~]# yum update -y
[root@ceph-node3 ~]# yum update -y
[root@ceph-client ~]# yum update -y

Reboot after the update completes.

[root@ceph-node1 ~]# reboot
[root@ceph-node2 ~]# reboot
[root@ceph-node3 ~]# reboot
[root@ceph-client ~]# reboot

3. Add a new disk to each of the three node hosts

The virtual machines should stay powered off while the disk is being added.

[Figure: ceph09]
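
After powering the node back on, a quick check confirms the new disk is visible (the same check is repeated for all three nodes in Part VII):

# the new 20 GB disk should appear as /dev/sdb
lsblk | grep sdb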

III. Install the ceph-deploy Automation Tool on ceph-node1

ceph-deploy is an automation tool that helps us install Ceph quickly.

1. Install the ceph-deploy tool

[root@ceph-node1 ~]# yum install -y ceph-deploy

2. Create the working directory

[root@ceph-node1 ~]# mkdir /etc/ceph
[root@ceph-node1 ~]# cd /etc/ceph
[root@ceph-node1 ceph]# 

3. Create the Ceph cluster

Command format: ceph-deploy new <hostname>

[root@ceph-node1 ceph]# ceph-deploy new ceph-node1

Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources		#note the error here

We are missing a Python module called distribute; just install it with pip!

[root@ceph-node1 ceph]# pip install distribute
-bash: pip: command not found

pip itself is not installed; installing the python-pip package fixes that.

[root@ceph-node1 ceph]# yum install -y python-pip

With pip installed, go back and install the distribute module.

[root@ceph-node1 ceph]# pip install distribute

Once the module is installed, the Ceph cluster can be created.

[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7fae31615ed8>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fae30d916c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-node1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-node1][DEBUG ] IP addresses found: [u'192.168.140.10']
[ceph_deploy.new][DEBUG ] Resolving host ceph-node1
[ceph_deploy.new][DEBUG ] Monitor ceph-node1 at 192.168.140.10
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.140.10']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

After the cluster is created, several files appear in the current directory.

[root@ceph-node1 ceph]# ls /etc/ceph/
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
  • ceph.conf: the cluster configuration file
  • ceph.mon.keyring: the ceph monitor authentication keyring
  • ceph-deploy-ceph.log: the ceph-deploy log

4. Install the Ceph packages on all ceph-node nodes

[root@ceph-node1 ceph]# for i in 10 11 12
> do
> ssh root@192.168.140.$i yum install -y ceph ceph-radosgw
> ssh root@192.168.140.$i ceph -v
> done

Use either the command above or the one below; one of them is enough!

Alternatively, install with the ceph-deploy automation tool:

[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3

IV. Install the ceph-common Client on the ceph-client Node

[root@ceph-client ~]# yum install -y ceph-common

V. Create the Ceph Monitor

1. Edit the configuration file on ceph-node1

Configuration file path: /etc/ceph/ceph.conf

[root@ceph-node1 ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
mon_initial_members = ceph-node1
mon_host = 192.168.140.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx	#cephx is Ceph's internal authentication protocol
public network = 192.168.140.0/24	#add this line to define which network Ceph runs on
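
As a side note, instead of editing ceph.conf by hand, the public network can also be supplied when the cluster is created; a minimal sketch, assuming ceph-deploy 2.0.1's --public-network option:

# equivalent to adding "public network = 192.168.140.0/24" to ceph.conf at creation time
ceph-deploy new --public-network 192.168.140.0/24 ceph-node1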

2. Initialize the monitor, configuring ceph-node1 as a monitor

[root@ceph-node1 ceph]# ceph-deploy mon create-initial
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc7c8687320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fc7c86d7500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node1 ...
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.9.2009 Core
[ceph-node1][DEBUG ] determining if provided host has same hostname in remote
[ceph-node1][DEBUG ] get remote short hostname
[ceph-node1][DEBUG ] deploying mon to ceph-node1
[ceph-node1][DEBUG ] get remote short hostname
[ceph-node1][DEBUG ] remote hostname: ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][DEBUG ] create the mon path if it does not exist
[ceph-node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node1/done
[ceph-node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-node1/done
[ceph-node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-node1.mon.keyring
[ceph-node1][DEBUG ] create the monitor keyring file
[ceph-node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph-node1 --keyring /var/lib/ceph/tmp/ceph-ceph-node1.mon.keyring --setuser 167 --setgroup 167
[ceph-node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-node1.mon.keyring
[ceph-node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node1][DEBUG ] create the init path if it does not exist
[ceph-node1][INFO  ] Running command: systemctl enable ceph.target
[ceph-node1][INFO  ] Running command: systemctl enable ceph-mon@ceph-node1
[ceph-node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-node1.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-node1][INFO  ] Running command: systemctl start ceph-mon@ceph-node1
[ceph-node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph-node1][DEBUG ] ********************************************************************************
[ceph-node1][DEBUG ] status for monitor: mon.ceph-node1
[ceph-node1][DEBUG ] {
[ceph-node1][DEBUG ]   "election_epoch": 3, 
[ceph-node1][DEBUG ]   "extra_probe_peers": [], 
[ceph-node1][DEBUG ]   "feature_map": {
[ceph-node1][DEBUG ]     "mon": [
[ceph-node1][DEBUG ]       {
[ceph-node1][DEBUG ]         "features": "0x3ffddff8ffacfffb", 
[ceph-node1][DEBUG ]         "num": 1, 
[ceph-node1][DEBUG ]         "release": "luminous"
[ceph-node1][DEBUG ]       }
[ceph-node1][DEBUG ]     ]
[ceph-node1][DEBUG ]   }, 
[ceph-node1][DEBUG ]   "features": {
[ceph-node1][DEBUG ]     "quorum_con": "4611087854031667195", 
[ceph-node1][DEBUG ]     "quorum_mon": [
[ceph-node1][DEBUG ]       "kraken", 
[ceph-node1][DEBUG ]       "luminous", 
[ceph-node1][DEBUG ]       "mimic", 
[ceph-node1][DEBUG ]       "osdmap-prune"
[ceph-node1][DEBUG ]     ], 
[ceph-node1][DEBUG ]     "required_con": "144115738102218752", 
[ceph-node1][DEBUG ]     "required_mon": [
[ceph-node1][DEBUG ]       "kraken", 
[ceph-node1][DEBUG ]       "luminous", 
[ceph-node1][DEBUG ]       "mimic", 
[ceph-node1][DEBUG ]       "osdmap-prune"
[ceph-node1][DEBUG ]     ]
[ceph-node1][DEBUG ]   }, 
[ceph-node1][DEBUG ]   "monmap": {
[ceph-node1][DEBUG ]     "created": "2023-05-06 19:37:42.410479", 
[ceph-node1][DEBUG ]     "epoch": 1, 
[ceph-node1][DEBUG ]     "features": {
[ceph-node1][DEBUG ]       "optional": [], 
[ceph-node1][DEBUG ]       "persistent": [
[ceph-node1][DEBUG ]         "kraken", 
[ceph-node1][DEBUG ]         "luminous", 
[ceph-node1][DEBUG ]         "mimic", 
[ceph-node1][DEBUG ]         "osdmap-prune"
[ceph-node1][DEBUG ]       ]
[ceph-node1][DEBUG ]     }, 
[ceph-node1][DEBUG ]     "fsid": "bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1", 
[ceph-node1][DEBUG ]     "modified": "2023-05-06 19:37:42.410479", 
[ceph-node1][DEBUG ]     "mons": [
[ceph-node1][DEBUG ]       {
[ceph-node1][DEBUG ]         "addr": "192.168.140.10:6789/0", 
[ceph-node1][DEBUG ]         "name": "ceph-node1", 
[ceph-node1][DEBUG ]         "public_addr": "192.168.140.10:6789/0", 
[ceph-node1][DEBUG ]         "rank": 0
[ceph-node1][DEBUG ]       }
[ceph-node1][DEBUG ]     ]
[ceph-node1][DEBUG ]   }, 
[ceph-node1][DEBUG ]   "name": "ceph-node1", 
[ceph-node1][DEBUG ]   "outside_quorum": [], 
[ceph-node1][DEBUG ]   "quorum": [
[ceph-node1][DEBUG ]     0
[ceph-node1][DEBUG ]   ], 
[ceph-node1][DEBUG ]   "rank": 0, 
[ceph-node1][DEBUG ]   "state": "leader", 
[ceph-node1][DEBUG ]   "sync_provider": []
[ceph-node1][DEBUG ] }
[ceph-node1][DEBUG ] ********************************************************************************
[ceph-node1][INFO  ] monitor: mon.ceph-node1 is running
[ceph-node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] find the location of an executable
[ceph-node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-node1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmp0Qv_jH
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] get remote short hostname
[ceph-node1][DEBUG ] fetch remote file
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-node1.asok mon_status
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node1/keyring auth get client.admin
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node1/keyring auth get client.bootstrap-mds
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node1/keyring auth get client.bootstrap-mgr
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node1/keyring auth get client.bootstrap-osd
[ceph-node1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-node1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmp0Qv_jH

3. Check the current directory

You can see that many new files ending in keyring have been generated.

[root@ceph-node1 ceph]# ls /etc/ceph/
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap

4. Check the monitor status

[root@ceph-node1 ceph]# ceph health
HEALTH_OK	#the healthy state

5. Sync the configuration to all ceph-node nodes

[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fee074b56c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fee07d44320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
The authenticity of host 'ceph-node2 (192.168.140.11)' can't be established.
ECDSA key fingerprint is SHA256:HBhmMUovAvw4QMjOfLJ0JwvmtX3v5ZH/nfQlC0pjn08.
ECDSA key fingerprint is MD5:ae:9f:42:eb:d0:64:0a:7b:7a:54:5e:95:88:d9:7c:bd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node2' (ECDSA) to the list of known hosts.
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
The authenticity of host 'ceph-node3 (192.168.140.12)' can't be established.
ECDSA key fingerprint is SHA256:n/u8MvqtLiuP3pccTUPh6iVRxsgVTkkcjPZXNxKGOS4.
ECDSA key fingerprint is MD5:f9:45:27:33:c0:49:7d:d3:c7:53:9b:95:cd:95:8e:ea.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-node3' (ECDSA) to the list of known hosts.
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

6. Check the configuration files on the other nodes

If the configuration files are there, everything was synced over.

[root@ceph-node2 ~]# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpo3DJN3

[root@ceph-node3 ~]# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  rbdmap  tmpY51J7A

7. Check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node1	#only 1 monitor daemon so far
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

8. Add more monitors

Configure multiple monitors to avoid a single point of failure.

[root@ceph-node1 ceph]# ceph-deploy mon add ceph-node2
[root@ceph-node1 ceph]# ceph-deploy mon add ceph-node3
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4d23612320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node3']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7f4d23662500>
[ceph_deploy.cli][INFO  ]  address                       : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: ceph-node3
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph-node3
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 192.168.140.12
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node3 ...
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.9.2009 Core
[ceph-node3][DEBUG ] determining if provided host has same hostname in remote
[ceph-node3][DEBUG ] get remote short hostname
[ceph-node3][DEBUG ] adding mon to ceph-node3
[ceph-node3][DEBUG ] get remote short hostname
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node3][DEBUG ] create the mon path if it does not exist
[ceph-node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node3/done
[ceph-node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-node3/done
[ceph-node3][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring
[ceph-node3][DEBUG ] create the monitor keyring file
[ceph-node3][INFO  ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.ceph-node3.monmap
[ceph-node3][WARNIN] got monmap epoch 2
[ceph-node3][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph-node3 --monmap /var/lib/ceph/tmp/ceph.ceph-node3.monmap --keyring /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring --setuser 167 --setgroup 167
[ceph-node3][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring
[ceph-node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node3][DEBUG ] create the init path if it does not exist
[ceph-node3][INFO  ] Running command: systemctl enable ceph.target
[ceph-node3][INFO  ] Running command: systemctl enable ceph-mon@ceph-node3
[ceph-node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-node3.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-node3][INFO  ] Running command: systemctl start ceph-mon@ceph-node3
[ceph-node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status
[ceph-node3][WARNIN] ceph-node3 is not defined in `mon initial members`
[ceph-node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status
[ceph-node3][DEBUG ] ********************************************************************************
[ceph-node3][DEBUG ] status for monitor: mon.ceph-node3
[ceph-node3][DEBUG ] {
[ceph-node3][DEBUG ]   "election_epoch": 1, 
[ceph-node3][DEBUG ]   "extra_probe_peers": [
[ceph-node3][DEBUG ]     "192.168.140.11:6789/0"
[ceph-node3][DEBUG ]   ], 
[ceph-node3][DEBUG ]   "feature_map": {
[ceph-node3][DEBUG ]     "mon": [
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "features": "0x3ffddff8ffacfffb", 
[ceph-node3][DEBUG ]         "num": 1, 
[ceph-node3][DEBUG ]         "release": "luminous"
[ceph-node3][DEBUG ]       }
[ceph-node3][DEBUG ]     ]
[ceph-node3][DEBUG ]   }, 
[ceph-node3][DEBUG ]   "features": {
[ceph-node3][DEBUG ]     "quorum_con": "0", 
[ceph-node3][DEBUG ]     "quorum_mon": [], 
[ceph-node3][DEBUG ]     "required_con": "144115188346404864", 
[ceph-node3][DEBUG ]     "required_mon": [
[ceph-node3][DEBUG ]       "kraken", 
[ceph-node3][DEBUG ]       "luminous", 
[ceph-node3][DEBUG ]       "mimic", 
[ceph-node3][DEBUG ]       "osdmap-prune"
[ceph-node3][DEBUG ]     ]
[ceph-node3][DEBUG ]   }, 
[ceph-node3][DEBUG ]   "monmap": {
[ceph-node3][DEBUG ]     "created": "2023-05-06 19:37:42.410479", 
[ceph-node3][DEBUG ]     "epoch": 3, 
[ceph-node3][DEBUG ]     "features": {
[ceph-node3][DEBUG ]       "optional": [], 
[ceph-node3][DEBUG ]       "persistent": [
[ceph-node3][DEBUG ]         "kraken", 
[ceph-node3][DEBUG ]         "luminous", 
[ceph-node3][DEBUG ]         "mimic", 
[ceph-node3][DEBUG ]         "osdmap-prune"
[ceph-node3][DEBUG ]       ]
[ceph-node3][DEBUG ]     }, 
[ceph-node3][DEBUG ]     "fsid": "bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1", 
[ceph-node3][DEBUG ]     "modified": "2023-05-06 19:48:46.297830", 
[ceph-node3][DEBUG ]     "mons": [
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "addr": "192.168.140.10:6789/0", 
[ceph-node3][DEBUG ]         "name": "ceph-node1", 
[ceph-node3][DEBUG ]         "public_addr": "192.168.140.10:6789/0", 
[ceph-node3][DEBUG ]         "rank": 0
[ceph-node3][DEBUG ]       }, 
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "addr": "192.168.140.11:6789/0", 
[ceph-node3][DEBUG ]         "name": "ceph-node2", 
[ceph-node3][DEBUG ]         "public_addr": "192.168.140.11:6789/0", 
[ceph-node3][DEBUG ]         "rank": 1
[ceph-node3][DEBUG ]       }, 
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "addr": "192.168.140.12:6789/0", 
[ceph-node3][DEBUG ]         "name": "ceph-node3", 
[ceph-node3][DEBUG ]         "public_addr": "192.168.140.12:6789/0", 
[ceph-node3][DEBUG ]         "rank": 2
[ceph-node3][DEBUG ]       }
[ceph-node3][DEBUG ]     ]
[ceph-node3][DEBUG ]   }, 
[ceph-node3][DEBUG ]   "name": "ceph-node3", 
[ceph-node3][DEBUG ]   "outside_quorum": [], 
[ceph-node3][DEBUG ]   "quorum": [], 
[ceph-node3][DEBUG ]   "rank": 2, 
[ceph-node3][DEBUG ]   "state": "electing", 
[ceph-node3][DEBUG ]   "sync_provider": []
[ceph-node3][DEBUG ] }
[ceph-node3][DEBUG ] ********************************************************************************
[ceph-node3][INFO  ] monitor: mon.ceph-node3 is running		#the newly added monitor is already running

9. Check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_WARN
            clock skew detected on mon.ceph-node2, mon.ceph-node3
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3	#there are now 3 monitors
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
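
The clock skew warning above means the clocks on ceph-node2 and ceph-node3 have drifted from the lead monitor. A minimal remediation sketch, reusing the ntpdate setup from Part I (the 30-minute cron job would also clear it eventually):

# re-sync the clocks on the two warned monitors, then re-check the cluster
ssh root@192.168.140.11 /usr/sbin/ntpdate 120.25.115.20
ssh root@192.168.140.12 /usr/sbin/ntpdate 120.25.115.20
ceph -s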

VI. Create the Ceph MGR

Since the Ceph L (Luminous) release, Ceph has included the Ceph Manager Daemon, ceph-mgr for short.
This component mainly relieves the load on the ceph-monitor by taking over part of the monitor's work, such as ==plugin management==, so the cluster can be managed better.

1. Create the ceph mgr service on ceph-node1

[root@ceph-node1 ceph]# ceph-deploy mgr create ceph-node1
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-node1', 'ceph-node1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f78a6252b90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f78a6b33230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-node1:ceph-node1
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-node1
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-node1][DEBUG ] create a keyring file
[ceph-node1][DEBUG ] create path recursively if it doesn't exist
[ceph-node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-node1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-node1/keyring
[ceph-node1][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node1
[ceph-node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-node1][INFO  ] Running command: systemctl start ceph-mgr@ceph-node1
[ceph-node1][INFO  ] Running command: systemctl enable ceph.target

2. Check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
            clock skew detected on mon.ceph-node2, mon.ceph-node3
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active)		#the mgr component is running on node1
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

3. Add more ceph mgr daemons

As before, this avoids a single point of failure.

[root@ceph-node1 ceph]# ceph-deploy mgr create ceph-node2
[root@ceph-node1 ceph]# ceph-deploy mgr create ceph-node3
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-node3', 'ceph-node3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5e2921db90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f5e29afe230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-node3:ceph-node3
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-node3
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node3][WARNIN] mgr keyring does not exist yet, creating one
[ceph-node3][DEBUG ] create a keyring file
[ceph-node3][DEBUG ] create path recursively if it doesn't exist
[ceph-node3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-node3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-node3/keyring
[ceph-node3][INFO  ] Running command: systemctl enable ceph-mgr@ceph-node3
[ceph-node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-node3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-node3][INFO  ] Running command: systemctl start ceph-mgr@ceph-node3
[ceph-node3][INFO  ] Running command: systemctl enable ceph.target

4. Check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
            clock skew detected on mon.ceph-node2, mon.ceph-node3
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3	#multiple mgrs have been added; only node1 is active, the others are standbys
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

VII. Create the OSDs (Data Disks)

1. Check that the new disks are present

If the disks have not been added yet, go back to Part II, section 3, and add the virtual disks first!

[root@ceph-node1 ceph]# for i in 10 11 12
> do
> ssh root@192.168.140.$i lsblk | grep sdb
> done
sdb               8:16   0   20G  0 disk 
sdb               8:16   0   20G  0 disk 
sdb               8:16   0   20G  0 disk

2. Initialize the disks, wiping all data on them

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1 /dev/sdb
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node2 /dev/sdb
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node3 /dev/sdb
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph-node3 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faf801fd830>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph-node3
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7faf80238a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph-node3
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph-node3][DEBUG ] zeroing last few blocks of device
[ceph-node3][DEBUG ] find the location of an executable
[ceph-node3][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph-node3][WARNIN] --> Zapping: /dev/sdb
[ceph-node3][WARNIN] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph-node3][WARNIN] Running command: /bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
[ceph-node3][WARNIN]  stderr: 10+0 records in
[ceph-node3][WARNIN] 10+0 records out
[ceph-node3][WARNIN] 10485760 bytes (10 MB) copied, 0.0090652 s, 1.2 GB/s
[ceph-node3][WARNIN] --> Zapping successful for: <Raw Device: /dev/sdb>

3. Create OSDs on the disks

[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb ceph-node1
[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb ceph-node2
[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb ceph-node3
#partial output below
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sdb ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9ffdc20950>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-node3
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f9ffdc579b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node3
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node3][WARNIN] osd keyring does not exist yet, creating one
[ceph-node3][DEBUG ] create a keyring file
[ceph-node3][DEBUG ] find the location of an executable
[ceph-node3][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph-node3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-node3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 85add657-24b1-4a4f-a68b-a3d7d67d45a9
[ceph-node3][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-0b50d828-e42a-4226-8418-67369ec97bca /dev/sdb
[ceph-node3][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph-node3][WARNIN]  stdout: Volume group "ceph-0b50d828-e42a-4226-8418-67369ec97bca" successfully created
[ceph-node3][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9 ceph-0b50d828-e42a-4226-8418-67369ec97bca
[ceph-node3][WARNIN]  stdout: Logical volume "osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9" created.
[ceph-node3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-node3][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-node3][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-0b50d828-e42a-4226-8418-67369ec97bca/osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node3][WARNIN] Running command: /bin/ln -s /dev/ceph-0b50d828-e42a-4226-8418-67369ec97bca/osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9 /var/lib/ceph/osd/ceph-2/block
[ceph-node3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-node3][WARNIN]  stderr: got monmap epoch 3
[ceph-node3][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQBwRFZkRSJCJBAAWolZtOSfTuFfLaSRWlyBnA==
[ceph-node3][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-node3][WARNIN] added entity osd.2 auth auth(auid = 18446744073709551615 key=AQBwRFZkRSJCJBAAWolZtOSfTuFfLaSRWlyBnA== with 0 caps)
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-node3][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 85add657-24b1-4a4f-a68b-a3d7d67d45a9 --setuser ceph --setgroup ceph
[ceph-node3][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node3][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0b50d828-e42a-4226-8418-67369ec97bca/osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-node3][WARNIN] Running command: /bin/ln -snf /dev/ceph-0b50d828-e42a-4226-8418-67369ec97bca/osd-block-85add657-24b1-4a4f-a68b-a3d7d67d45a9 /var/lib/ceph/osd/ceph-2/block
[ceph-node3][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-node3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-node3][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-85add657-24b1-4a4f-a68b-a3d7d67d45a9
[ceph-node3][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-85add657-24b1-4a4f-a68b-a3d7d67d45a9.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-node3][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-node3][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-node3][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-node3][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-node3][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[ceph-node3][INFO  ] checking OSD status...
[ceph-node3][DEBUG ] find the location of an executable
[ceph-node3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.

4. Check the cluster status

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3
    osd: 3 osds: 3 up, 3 in		#3 OSDs in total
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail	#60 GiB total, the sum of all OSDs
    pgs:

If you have made it this far, congratulations: the basic Ceph cluster environment is now fully deployed!

Extension: scaling the cluster out (adding a new node to the cluster)

  • Configure the base system environment on the new node (hostname resolution, time sync, Ceph repos)
  • Install the ceph and ceph-radosgw packages on the new node
  • Push the cluster files to the new node
    • ceph-deploy admin <new node>
  • Add OSDs on the new node as needed (see the sketch below)
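
A minimal command sketch of that expansion, assuming a hypothetical new node named ceph-node4 that already has the base environment and the Ceph packages in place:

# run on the deploy node (ceph-node1), from /etc/ceph
ceph-deploy admin ceph-node4                        # push ceph.conf and the admin keyring
ceph-deploy disk zap ceph-node4 /dev/sdb            # wipe the new node's data disk
ceph-deploy osd create --data /dev/sdb ceph-node4   # create an OSD on it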

VIII. Enable the Ceph Dashboard Plugin (Optional)

The ceph dashboard module provides a web UI for the cluster.

1. Confirm which node is the active mgr

The following steps are performed on the active mgr node!

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     bf6cea08-aaf9-4f2c-9316-f1d1a66fcbc1
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2, ceph-node3	#node1 is the active mgr node
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:

2. Enable the dashboard module

[root@ceph-node1 ceph]# ceph mgr module enable dashboard

If you get the error Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement, you need to install the ceph-mgr-dashboard package on all mgr nodes.
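
A minimal sketch of that fix, assuming the ceph-mgr-dashboard package is available in the configured Ceph repo:

# install the dashboard package on every mgr node, then retry enabling the module
for i in 10 11 12
do
ssh root@192.168.140.$i yum install -y ceph-mgr-dashboard
done
ceph mgr module enable dashboard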

3. List all modules

[root@ceph-node1 ceph]# ceph mgr module ls

{
    "enabled_modules": [	#modules that are enabled
        "balancer",
        "crash",
        "dashboard",
        "iostat",
        "restful",
        "status"
    ],
    "disabled_modules": [	#modules that are not enabled
        {
            "name": "hello",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "influx",
            "can_run": false,
            "error_string": "influxdb python module not found"
        },
        {
            "name": "localpool",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "prometheus",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "selftest",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "smart",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "telegraf",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "telemetry",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "zabbix",
            "can_run": true,
            "error_string": ""
        }
    ]
}

4. Create a self-signed certificate

Because the dashboard web UI is served over HTTPS, we need to generate an SSL certificate.

[root@ceph-node1 ceph]# ceph dashboard create-self-signed-cert
Self-signed certificate created

5. Generate the self-signed certificate files needed by the dashboard

[root@ceph-node1 ceph]# mkdir /etc/mgr-dashboard
[root@ceph-node1 ceph]# cd /etc/mgr-dashboard
[root@ceph-node1 mgr-dashboard]# openssl req -new -nodes -x509 -subj "/O=IT-ceph/CN=cn" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
Generating a 2048 bit RSA private key
...................................................+++
.............................+++
writing new private key to 'dashboard.key'
-----
[root@ceph-node1 mgr-dashboard]# ls
dashboard.crt  dashboard.key

6. Set the dashboard access address

[root@ceph-node1 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_addr 192.168.140.10

7. Change the dashboard listen port (optional)

The dashboard listens on port 8443 by default.

[root@ceph-node1 mgr-dashboard]# ceph config set mgr mgr/dashboard/server_port 8888

8. Restart the dashboard module so the changes take effect

[root@ceph-node1 mgr-dashboard]# ceph mgr module disable dashboard
[root@ceph-node1 mgr-dashboard]# ceph mgr module enable dashboard

9. Check the mgr services

[root@ceph-node1 mgr-dashboard]# ceph mgr services
{
    "dashboard": "https://192.168.140.10:8888/"
}

10. Set the username and password for dashboard web authentication

Username: wsjj  Password: redhat

[root@ceph-node1 mgr-dashboard]# ceph dashboard set-login-credentials wsjj redhat
Username and password updated

11. Test access from a browser

Because we used a ==self-signed certificate== rather than one issued by a publicly trusted CA, the browser warning is expected!

[Figure: ceph10]

[Figure: ceph12]

[Figure: ceph13]