Last week Dmitry Borodaenko presented his talk on Ceph and OpenStack at the inaugural Silicon Valley Ceph User Group meeting. The meeting was well attended and also featured talks from Mellanox’s Eli Karpilovski and Inktank’s Kyle Bader. If you were unable to attend, the following transcript of Dmitry’s talk is a good recap.
Ceph and OpenStack in a Nutshell, from Karan Singh
OpenStack Instance Boot from Ceph Volume
For a list of images to choose from when creating a bootable volume:
[root@rdo /(keystone_admin)]# nova image-list
…
Testing OpenStack Glance + RBD
To allow Glance to keep images on a Ceph RBD volume, edit /etc/glance/glance-api.conf:
default_store = rbd
# ============ RBD Store Options =============================
# Ceph configuration file path
# If using cephx …
Testing OpenStack Cinder + RBD
Creating a Cinder volume backed by Ceph:
[root@rdo /]# cinder create --display-name cinder-ceph-vol1 --display-description "first cinder volume on ceph backend" 10
…
First, create the pools that Cinder and Glance will use:
ceph osd pool create volumes 128
ceph osd pool create images 128
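The pools should now show up in the cluster’s pool list; a quick check, run from a node with an admin keyring:
ceph osd lspools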
Copy the Ceph configuration to the OpenStack node and install the Ceph client packages there:
[root@ceph-mon1 ceph]# scp ceph.conf openstack:/etc/ceph
yum install python-ceph
[root@ceph-mon1 ceph]# ceph-deploy install openstack
ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
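Both commands print the generated keys; you can review the new clients and their capabilities at any time with:
ceph auth list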
ceph auth get-or-create client.images | ssh openstack tee /etc/ceph/ceph.client.images.keyring
ssh openstack chown glance:glance /etc/ceph/ceph.client.images.keyring
ceph auth get-or-create client.volumes | ssh openstack tee /etc/ceph/ceph.client.volumes.keyring
ssh openstack chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
ceph auth get-key client.volumes | ssh openstack tee client.volumes.key
Then, on the compute node, add the volumes key to libvirt. Create a secret.xml file:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.volumes secret</name>
</usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
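To confirm that libvirt has stored the secret, list the defined secrets; the UUID shown is the one referenced below:
# virsh secret-list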
Edit /etc/glance/glance-api.conf and add:
default_store=rbd
rbd_store_user=images
rbd_store_pool=images
show_image_direct_url=True
Edit /etc/cinder/cinder.conf by adding:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
glance_api_version=2
If you are using cephx authentication, also configure the user and the UUID of the secret you added to libvirt earlier:
rbd_user=volumes
rbd_secret_uuid={uuid of secret}
Finally, restart the services to pick up the new configuration:
service glance-api restart
service nova-compute restart
service cinder-volume restart
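At this point an end-to-end check is worthwhile. A minimal sketch using the v1 command-line clients of that release cycle (the image file and volume name are illustrative):

# Upload an image through Glance; it should land in the images pool
glance image-create --name cirros --disk-format qcow2 --container-format bare < cirros.qcow2
rbd -p images ls

# Create a volume through Cinder; it should land in the volumes pool
cinder create --display-name test-vol 1
rbd -p volumes ls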
Ceph & OpenStack Integration
We can use a Ceph block device with OpenStack through libvirt, which configures the QEMU interface to librbd. To use Ceph block devices with OpenStack, we must install QEMU, libvirt, and …
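For context, once everything is wired up, libvirt attaches an RBD-backed disk to a guest with a network disk element. A minimal sketch (the pool/image name, monitor host, and UUID are placeholders):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='volumes/volume-XXXX'>
    <host name='ceph-mon1' port='6789'/>
  </source>
  <auth username='volumes'>
    <secret type='ceph' uuid='{uuid of secret}'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>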
Creating Block Device from Ceph
From the monitor node, use ceph-deploy to install Ceph on your ceph-client1 node:
[root@ceph-mon1 ~]# ceph-deploy install ceph-client1
[ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy install ceph-client1
…
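The typical flow on the client then looks like this; a minimal sketch (the image name and size are illustrative, and the rbd kernel module must be available on ceph-client1):

# Create a 1 GB image in the default rbd pool, map it, and put a filesystem on it
rbd create rbd-disk1 --size 1024
sudo rbd map rbd-disk1
sudo mkfs.ext4 /dev/rbd/rbd/rbd-disk1
sudo mount /dev/rbd/rbd/rbd-disk1 /mnt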
CEPH Storage Cluster
Installing ceph-deploy (ceph-mon1)
Update your repositories and install ceph-deploy on the ceph-mon1 node:
[ceph@ceph-mon1 ~]$ sudo yum update && sudo yum install ceph-deploy
Loaded plugins: downloadonly, fastestmirror, security
…
Ceph-mon1 : first monitor + ceph-deploy machine (will be used to deploy Ceph to the other nodes)
Ceph-mon2 : second monitor (for monitor quorum)
Ceph-mon3 : third monitor (for monitor quorum)
Ceph-node1 : OSD node 1 with 1 x 10G disk for the OS and 4 x 440G disks for 4 OSDs
Ceph-node2 : OSD node 2 with 1 x 10G disk for the OS and 4 x 440G disks for 4 OSDs
ceph-deploy version 1.3.2, Ceph version 0.67.4 (Dumpling)
All the Ceph Nodes may require some basic configuration work prior to deploying a Ceph Storage Cluster.
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers
sudo chmod 0440 /etc/sudoers
[ceph@ceph-admin ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
48:86:ff:4e:ab:c3:f6:cb:7f:ba:46:33:10:e6:22:52 ceph@ceph-admin.csc.fi
The key's randomart image is:
+--[ RSA 2048]----+
| |
| E. o |
| .. oo . |
| . .+..o |
| . .o.S. |
| . + |
| . o. o |
| ++ .. . |
| ..+*+++ |
+-----------------+
[ceph@ceph-mon1 ~]$ ssh-copy-id ceph@ceph-node2
The authenticity of host 'ceph-node2 (192.168.1.38)' can't be established.
RSA key fingerprint is ac:31:6f:e7:bb:ed:f1:18:9e:6e:42:cc:48:74:8e:7b.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ceph-node2,192.168.1.38' (RSA) to the list of known hosts.
ceph@ceph-node2's password:
Now try logging into the machine, with "ssh 'ceph@ceph-node2'", and check in: .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[ceph@ceph-mon1 ~]$
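To let ceph-deploy reach each node as the ceph user without typing it every time, add entries like these to ~/.ssh/config on the admin node (hostnames as in the inventory above):

Host ceph-mon2
   Hostname ceph-mon2
   User ceph
Host ceph-node1
   Hostname ceph-node1
   User ceph
Host ceph-node2
   Hostname ceph-node2
   User ceph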
sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'
sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
su -c 'rpm -Uvh http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm'
This installs /etc/yum.repos.d/ceph.repo with the following contents:
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-dumpling/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-dumpling/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-dumpling/el6/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
Finally, create the directories that Ceph needs on each node:
sudo mkdir -p /etc/ceph /var/lib/ceph/{tmp,mon,mds,bootstrap-osd} /var/log/ceph
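With the prerequisites in place, the cluster itself is bootstrapped with ceph-deploy from ceph-mon1. A minimal sketch of the usual sequence (the disk names are illustrative and must match the OSD nodes):

[ceph@ceph-mon1 ~]$ ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3
[ceph@ceph-mon1 ~]$ ceph-deploy mon create ceph-mon1 ceph-mon2 ceph-mon3
[ceph@ceph-mon1 ~]$ ceph-deploy gatherkeys ceph-mon1
[ceph@ceph-mon1 ~]$ ceph-deploy osd prepare ceph-node1:sdb ceph-node2:sdb
[ceph@ceph-mon1 ~]$ ceph-deploy osd activate ceph-node1:sdb1 ceph-node2:sdb1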
CEPH Internals