Configuration steps for connecting OpenStack Pike to a single Ceph RBD cluster - Chengdu Chuangxin Internet


Environment
Connecting OpenStack Pike to a single Ceph RBD cluster is straightforward; see the OpenStack or Ceph documentation:
1. OpenStack reference configuration:
https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html
2. Ceph reference configuration:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/
Due to changes in the physical environment and in business requirements, the cloud environment now has to connect one OpenStack deployment to two backend Ceph RBD clusters of different versions.
The configuration below starts from the following environment, which is already up and running:
1) OpenStack Pike
2) Ceph Luminous 12.2.5
3) Ceph Nautilus 14.2.7
OpenStack is already configured against the Ceph Luminous cluster and running normally. On top of this existing OpenStack + Ceph environment, a Ceph Nautilus cluster is added so that OpenStack can use both storage backends at the same time.


Configuration steps
1. Copy configuration files
#Copy the second cluster's configuration file and the cinder account key to the OpenStack cinder node
/etc/ceph/ceph3.conf
/etc/ceph/ceph.client.cinder2.keyring
#Only the cinder2 account is used here, so only the cinder2 key needs to be copied
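Before distributing the keyring, it can help to confirm it actually contains the client.cinder2 entry. A minimal sketch; the temp file and key value below are placeholders standing in for /etc/ceph/ceph.client.cinder2.keyring on a real node:

```shell
# Sanity-check sketch: confirm the keyring holds the client.cinder2 entry
# before copying it to the cinder nodes. File and key are placeholders.
keyring=$(mktemp)
cat > "$keyring" <<'EOF'
[client.cinder2]
    key = AQBPLACEHOLDERPLACEHOLDERPLACEHOLDERkey==
EOF
msg=$(grep -q '^\[client.cinder2\]' "$keyring" && echo "keyring OK")
echo "$msg"
rm -f "$keyring"
```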

2. Create storage pools
#After the OSDs have been added, create the pools, set their pg/pgp counts, and enable the rbd application on each pool
ceph osd pool create volumes 512 512
ceph osd pool create backups 128 128
ceph osd pool create vms 512 512
ceph osd pool create images 128 128

ceph osd pool application enable volumes rbd
ceph osd pool application enable backups rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
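The pg counts above (512/128) follow the usual rule of thumb of roughly (OSD count × 100) / replica count, rounded up to a power of two. A small sketch of that calculation, with hypothetical OSD and replica numbers:

```shell
# Rule-of-thumb PG count: (OSDs * 100 / replicas), rounded up to a power of two.
# osds and replicas are hypothetical values; use your cluster's numbers.
osds=12
replicas=3
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "pg_num = $pg"   # 512 for 12 OSDs, 3 replicas
```

On Nautilus, the pg_autoscaler manager module can also adjust pg_num automatically.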

3. Create cluster access accounts
ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.cinder2-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

4. Check service status
#List the cinder service processes currently registered in OpenStack
source /root/keystonerc.admin
cinder service-list

5. Edit the configuration file
#Edit cinder.conf on the cinder node; note that each rbd_pool must name a pool that exists on the corresponding cluster
[DEFAULT]
enabled_backends = ceph2,ceph3

[ceph2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2
rbd_pool = volumes1
rbd_ceph_conf = /etc/ceph2/ceph2.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder1
rbd_secret_uuid = **

[ceph3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph3
rbd_pool = volumes2
rbd_ceph_conf = /etc/ceph/ceph3/ceph3.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder2
rbd_secret_uuid = **
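A common failure mode with multi-backend cinder is a name in enabled_backends that has no matching [section]. A quick self-contained check; the temp file below mirrors the example above and stands in for /etc/cinder/cinder.conf on a real node:

```shell
# Sketch: verify each backend in enabled_backends has a matching [section].
# The temp file stands in for /etc/cinder/cinder.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
enabled_backends = ceph2,ceph3
[ceph2]
volume_backend_name = ceph2
[ceph3]
volume_backend_name = ceph3
EOF
ok=0
for b in $(sed -n 's/^enabled_backends *= *//p' "$conf" | tr ',' ' '); do
  grep -q "^\[$b\]" "$conf" && ok=$(( ok + 1 ))
done
echo "$ok backend sections found"
rm -f "$conf"
```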

6. Restart services
#Restart the cinder-volume and cinder-scheduler services
service openstack-cinder-volume restart
Redirecting to /bin/systemctl restart openstack-cinder-volume.service
service openstack-cinder-scheduler restart
Redirecting to /bin/systemctl restart openstack-cinder-scheduler.service

7. Check the services again
cinder service-list

8. Create volume types for testing
#Bind each volume type to its backend via volume_backend_name
cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=ceph2
cinder type-create ceph3
cinder type-key ceph3 set volume_backend_name=ceph3

9. Create test volumes to check the binding
cinder create --volume-type ceph2 --display_name {volume-name} {volume-size}
cinder create --volume-type ceph3 --display_name {volume-name} {volume-size}

Configuring libvirt
1. Add the second cluster's key to libvirt on the nova-compute nodes
#For VMs to access volumes on the second Ceph RBD cluster, the second cluster's cinder user key must be added to libvirt on every nova-compute node
ceph -c /etc/ceph3/ceph3.conf -k /etc/ceph3/ceph.client.cinder2.keyring auth get-key client.cinder2 | tee client.cinder2.key

#Define a libvirt secret bound to the uuid used as rbd_secret_uuid for the second cluster in cinder.conf
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>***</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
#Run the whole block above as-is, substituting your uuid for ***

sudo virsh secret-define --file secret2.xml

sudo virsh secret-set-value --secret ***** --base64 $(cat client.cinder2.key)
rm client.cinder2.key secret2.xml
#Answer Y at the rm confirmation prompt

2. Verify the configuration
#Attach one volume of each type created earlier to an OpenStack VM to verify the setup
nova volume-attach {instance-id} {volume1-id}
nova volume-attach {instance-id} {volume2-id}


Source: http://kswsj.cn/article/pgccsi.html
