
How to Install Ceph on CentOS? Detailed Steps and Notes

Installing Ceph on CentOS is a multi-step process whose goal is a stable, scalable distributed storage system. Ceph is a highly scalable, open-source distributed storage platform that provides object, block, and file storage. The steps below walk through the installation in detail.

I. Pre-installation Preparation

1. Disable the firewall

   systemctl stop firewalld
   systemctl disable firewalld

2. Disable SELinux

   sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
   setenforce 0
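
To confirm the change took effect without a reboot, check the current mode:

   getenforce   # prints "Permissive" now, "Disabled" after a reboot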

3. Set the hostnames

   # Run the matching command on each node
   hostnamectl set-hostname ceph1   # on ceph1
   hostnamectl set-hostname ceph2   # on ceph2
   hostnamectl set-hostname ceph3   # on ceph3

4. Configure passwordless SSH

   # Generate a key pair on ceph1
   ssh-keygen
   # Distribute the public key to the other nodes
   ssh-copy-id ceph2
   ssh-copy-id ceph3
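
Passwordless login is easy to verify before moving on; each command should return without prompting for a password:

   ssh ceph2 hostname
   ssh ceph3 hostname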

5. Configure the hosts file (on all nodes)

   cat >> /etc/hosts <<EOF
   192.168.161.137 ceph1
   192.168.161.135 ceph2
   192.168.161.136 ceph3
   EOF

6. Enable the NTP service

   yum -y install ntp
   systemctl start ntpd
   systemctl enable ntpd
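
On CentOS 8 the ntp package is gone and chronyd fills the same role (systemctl enable --now chronyd). Either way, verify that the nodes agree on time, since Ceph monitors reject clock skew:

   ntpq -p   # peers should show a low offset on every node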

7. Create the Ceph directories and set permissions

   mkdir -p /usr/local/ceph/{admin,etc,lib,logs}
   chown -R 167:167 /usr/local/ceph/   # 167 is the ceph UID/GID inside the official containers
   chmod -R 777 /usr/local/ceph/

II. Install Docker

1. Remove old versions (if any)

   yum -y remove docker docker-common docker-selinux docker-engine

2. Install dependencies

   yum -y install yum-utils device-mapper-persistent-data lvm2

3. Install Docker

   yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
   yum -y install docker-ce
   systemctl start docker
   systemctl enable docker
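
cephadm runs every Ceph daemon as a container, so it is worth confirming the engine is healthy before bootstrapping:

   docker info   # should report a running server without errors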

III. Deploy the Ceph Cluster

1. Create the Ceph working directories

   mkdir -p /var/lib/ceph/{bootstrap-osd,mon,mgr,osd} /etc/ceph

2. Install cephadm

   sudo yum install -y centos-release-ceph-octopus   # cephadm first shipped with Octopus (15.2); the Nautilus repo does not contain it
   sudo yum install -y cephadm

3. Bootstrap a new cluster

   cephadm bootstrap --mon-ip 192.168.161.137
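
Bootstrap prints the dashboard URL and initial admin credentials when it finishes. You can check the young cluster's health from the bundled shell:

   cephadm shell -- ceph -s   # HEALTH_WARN is normal until OSDs are added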

4. Add the other Ceph nodes

   # Run inside "cephadm shell" on ceph1
   ceph orch host add ceph2 192.168.161.135
   ceph orch host add ceph3 192.168.161.136
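
Note that ceph orch host add only succeeds once the cluster's SSH key, written by bootstrap to /etc/ceph/ceph.pub, is present on the new hosts. A minimal sketch, assuming root SSH access from ceph1:

   ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
   ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
   ceph orch host ls   # both hosts should now be listed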

5. Add OSDs

   # Run inside "cephadm shell"; first list the disks the orchestrator can see
   ceph orch device ls
   # Create one OSD per unused disk (adjust the device paths to your hosts)
   ceph orch daemon add osd ceph1:/dev/sdb
   ceph orch daemon add osd ceph2:/dev/sdb
   ceph orch daemon add osd ceph3:/dev/sdb
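
Once the OSDs come up, the CRUSH tree should show one up/in OSD per disk:

   ceph osd tree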

6. Edit the CRUSH map buckets

   # create-or-move takes one OSD at a time plus a weight; repeat per OSD
   ceph osd crush create-or-move osd.0 1.0 root=default host=ceph1

7. Write a basic ceph.conf

   # Run inside "cephadm shell" so $(ceph fsid) resolves to the bootstrapped cluster's ID
   cat > /etc/ceph/ceph.conf <<EOF
   [global]
   fsid = $(ceph fsid)
   public network = 192.168.161.0/24
   cluster network = 192.168.161.0/24
   auth cluster required = cephx
   auth service required = cephx
   auth client required = cephx
   mon allow pool delete = true
   osd full ratio = .95
   osd backfillfull ratio = .7
   osd max backfills = 1000
   EOF
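
The same file should exist on every node; for a cluster this small, a plain copy is enough:

   scp /etc/ceph/ceph.conf ceph2:/etc/ceph/
   scp /etc/ceph/ceph.conf ceph3:/etc/ceph/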

8. Roll out OSDs cluster-wide

   ceph orch apply osd --all-available-devices
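
If you would rather pin OSD creation to specific hosts than consume every free disk, a drive-group service spec can be applied instead; a minimal sketch (the file name and service_id are illustrative):

   cat > /etc/ceph/osd_spec.yaml <<EOF
   service_type: osd
   service_id: default_osds
   placement:
     hosts:
       - ceph1
       - ceph2
       - ceph3
   data_devices:
     all: true
   EOF
   ceph orch apply -i /etc/ceph/osd_spec.yaml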

IV. Configure the RGW Service (Optional)

1. Deploy radosgw and create a user

   ceph orch apply rgw default default --placement="ceph1"   # Octopus form: realm "default", zone "default"
   radosgw-admin user create --uid=admin --display-name="admin"

2. Adjust the RGW data pool

   ceph osd pool create default.rgw.buckets.data 128 128
   ceph osd pool application enable default.rgw.buckets.data rgw

3. Make RGW highly available

   # Run one RGW daemon on every node so clients can fail over
   ceph orch apply rgw default default --placement="ceph1 ceph2 ceph3"
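
A quick smoke test, assuming the default RGW port of 80: an anonymous request should return a ListAllMyBuckets XML document.

   curl http://ceph1:80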

V. Deploy the NFS Service (Optional)

1. Create a CephFS file system and an NFS cluster

   ceph fs volume create myfs
   ceph nfs cluster create cephfs mynfs "ceph1"   # Octopus syntax; later releases drop the "cephfs" type argument

2. Create an NFS export

   ceph nfs export create cephfs myfs mynfs /mynfs   # exports are read-write by default; pass --readonly otherwise
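
You can list what the cluster is exporting to confirm the pseudo-path:

   ceph nfs export ls mynfs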

3. Mount NFS on a client

   mkdir -p /mnt/nfs
   mount -t nfs 192.168.161.137:/mynfs /mnt/nfs

VI. Configure the RBD Service (Optional)

1. Create an RBD pool and image

   ceph osd pool create rbd 128 128 && rbd pool init rbd
   rbd create rbd/vdisk --size 5G
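
Verify the image exists before handing it to clients:

   rbd ls rbd            # should list "vdisk"
   rbd info rbd/vdisk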

2. Map and mount the RBD image on a client

   rbd map rbd/vdisk   # requires /etc/ceph/ceph.conf and ceph.client.admin.keyring on the client
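
The map command prints the block device it created (typically /dev/rbd0 on a first map, an assumption here); format and mount it like any disk:

   mkfs.xfs /dev/rbd0
   mkdir -p /mnt/vdisk
   mount /dev/rbd0 /mnt/vdisk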

3. Configure iSCSI (export the RBD device through tgt)

   # Create a target, attach the mapped RBD device as LUN 1, and allow all initiators
   tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2023-10.com.example:target:tgt
   tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/rbd0
   tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
   tgtadm --lld iscsi --op show --mode target   # confirm the target and LUN are exported
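
From an iSCSI client, discovery and login with the standard open-iscsi tools should then see the LUN:

   iscsiadm -m discovery -t sendtargets -p 192.168.161.137
   iscsiadm -m node -T iqn.2023-10.com.example:target:tgt -p 192.168.161.137 --login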