Using Ceph-deploy


Install the Ceph cluster

On each node:

Create a user “ceph” and configure sudo for passwordless access:

$ useradd -d /home/ceph -m ceph
$ passwd ceph
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ chmod 0440 /etc/sudoers.d/ceph
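To check that the sudoers entry works before going further (an optional sanity check, assuming you run it as root on the node), the ceph user should be able to run a command through sudo without being asked for a password:

$ su - ceph -c 'sudo -n whoami'
root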

Update the hosts file

$ vim /etc/hosts
192.168.0.100       cephnode-01 cephnode-01.local
192.168.0.101       cephnode-02 cephnode-02.local
192.168.0.102       cephnode-03 cephnode-03.local
192.168.0.103       cephnode-04 cephnode-04.local
192.168.0.104       cephnode-05 cephnode-05.local
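Optionally, check that every name resolves as expected:

$ for i in cephnode-0{1..5}; do getent hosts $i; done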

On the admin server

(in my case, cephnode-01)

$ ssh-keygen

Deploy the key to each node

$ cluster="cephnode-01 cephnode-02 cephnode-03 cephnode-04 cephnode-05"
$ for i in $cluster; do
    ssh-copy-id ceph@$i
  done

Add an entry to the root user's SSH config so that connections use the ceph user:

$ vim /root/.ssh/config
Host ceph*
    User ceph
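At this point, root on the admin node should be able to reach every host as the ceph user without a password and use sudo there. A quick loop to verify (optional, assumes $cluster is still set in your shell):

$ for i in $cluster; do
    ssh $i 'hostname; sudo -n whoami'
  done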

Install ceph-deploy (Dumpling Version)

$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
$ echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo apt-get install python-pkg-resources python-setuptools ceph-deploy
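To confirm which ceph-deploy version was installed and that it came from the Ceph repository rather than the distribution, you can check the package policy:

$ apt-cache policy ceph-deploy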

Install Ceph on the cluster:

Beforehand, you need to create the journal partitions on the SSD device (if you use a separate journal); a partitioning sketch follows the layout below.

For my example, I use:

sda 1: system partition
    2: swap
    5: osd journal (10 GB)
    6: osd journal (10 GB)
    7: osd journal (10 GB)
sdb  : osd
sdc  : osd
sdd  : osd
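If the journal partitions do not exist yet, here is one way to create them with sgdisk; this is only a sketch that assumes /dev/sda carries a GPT label with free space after the swap partition (with an MBR label you would create logical partitions with fdisk or parted instead). Double-check the device and partition numbers before running it, as it modifies the system disk:

$ sudo sgdisk -n 5:0:+10G -n 6:0:+10G -n 7:0:+10G /dev/sda
$ sudo partprobe /dev/sda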
$ mkdir ceph-deploy; cd ceph-deploy
$ ceph-deploy install $cluster
$ ceph-deploy new cephnode-01 cephnode-02 cephnode-03
$ ceph-deploy --overwrite-conf mon create cephnode-01 cephnode-02 cephnode-03
$ ceph-deploy gatherkeys cephnode-01
$ ceph-deploy osd create \
    cephnode-01:/dev/sdb:/dev/sda5 \
    cephnode-01:/dev/sdc:/dev/sda6 \
    cephnode-01:/dev/sdd:/dev/sda7 \
    cephnode-02:/dev/sdb:/dev/sda5 \
    cephnode-02:/dev/sdc:/dev/sda6 \
    cephnode-02:/dev/sdd:/dev/sda7 \
    cephnode-03:/dev/sdb:/dev/sda5 \
    cephnode-03:/dev/sdc:/dev/sda6 \
    cephnode-03:/dev/sdd:/dev/sda7 \
    cephnode-04:/dev/sdb:/dev/sda5 \
    cephnode-04:/dev/sdc:/dev/sda6 \
    cephnode-04:/dev/sdd:/dev/sda7 \
    cephnode-05:/dev/sdb:/dev/sda5 \
    cephnode-05:/dev/sdc:/dev/sda6 \
    cephnode-05:/dev/sdd:/dev/sda7
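Once the OSDs are up, the cluster state can be checked from one of the monitor nodes (sudo may be needed if the admin keyring is only readable by root):

$ ssh cephnode-01 'sudo ceph -s'
$ ssh cephnode-01 'sudo ceph osd tree'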

Destroy the cluster and remove all data

$ ceph-deploy purgedata $cluster
$ ceph-deploy purge $cluster

$ for host in $cluster
  do
    ssh $host <<EOF
      sudo dd if=/dev/zero of=/dev/sdb bs=1M count=100
      sudo dd if=/dev/zero of=/dev/sdc bs=1M count=100
      sudo dd if=/dev/zero of=/dev/sdd bs=1M count=100
      sudo sgdisk -g --clear /dev/sdb
      sudo sgdisk -g --clear /dev/sdc
      sudo sgdisk -g --clear /dev/sdd
EOF
  done
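Optionally, check that the OSD disks are really blank again:

$ for host in $cluster; do
    ssh $host 'lsblk /dev/sdb /dev/sdc /dev/sdd'
  done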