This is obviously only interesting for testing purposes, but it was still painful enough given the lack of complete guides and the issues that Jewel and Hammer currently have running on Jessie.
- Make sure you can ssh to localhost without manually entering a password
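If passwordless ssh isn't set up yet, something like this should do it (assuming you don't already have a key you want to reuse):
ssh-keygen -t rsa
ssh-copy-id localhost
ssh localhost hostname
The last command should print the hostname without asking for a password.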
- Install dnsmasq
apt-get install dnsmasq
- Make sure the hostname returned by hostname -f resolves to an IP address other than a loopback address
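You can check what the name resolves to with getent; if it still points at 127.0.0.1 or 127.0.1.1, add an entry to /etc/hosts with the machine's real address (the IP below is just a placeholder):
getent hosts $(hostname -f)
echo "192.168.1.10 $(hostname -f)" | sudo tee -a /etc/hosts    # replace 192.168.1.10 with your machine's real IP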
- Add the apt source:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-hammer/ jessie main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update
Note that I added the hammer repo instead of the most recent one, jewel. I was able to get jewel running without any issues on Ubuntu, but on Jessie it just refused to work.
- Install ceph-deploy, the tool that will do all the heavy lifting for you; no need to configure everything manually.
apt-get install ceph-deploy
Easy enough, eh? You want to go the manual way instead of using ceph-deploy? Be my guest.
- Now let's create the initial configuration with ceph-deploy. First we create a new directory and cd into it; ceph-deploy will write its config and access key files there. In this example ceph-test-1 is the hostname returned by hostname -f.
mkdir mycephfiles
cd mycephfiles
ceph-deploy new ceph-test-1
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
The last two lines ensure that ceph will be happy with just one node running and won't wait for new nodes to join. Take a look at ceph.conf; it will be used in all the following steps.
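For reference, ceph.conf should end up looking roughly like this (the fsid and monitor address are generated for your machine and will differ):
[global]
fsid = <generated uuid>
mon_initial_members = ceph-test-1
mon_host = 192.168.1.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd crush chooseleaf type = 0
osd pool default size = 1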
- This step will install ceph itself. If we were installing onto multiple nodes, this command would essentially ssh into those nodes, install ceph, and copy the configuration and key files created in the previous step.
ceph-deploy install --no-adjust-repos ceph-test-1
The --no-adjust-repos flag is required here to make sure ceph-deploy doesn't pull in the latest version of ceph and instead keeps using the repo we defined at the start.
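For comparison, installing onto several nodes would just list the extra hostnames (the extra node names here are made up):
ceph-deploy install --no-adjust-repos ceph-test-1 ceph-test-2 ceph-test-3    # ceph-test-2/3 are hypothetical extra nodes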
- This step creates the ceph monitor
ceph-deploy mon create-initial
I had some issues running this on a fully updated Jessie installation. If you get an error about starting services, edit /usr/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/__init__.py, comment out lines 27, 28 and 29, and run the command again. This happens because ceph-deploy looks for a systemd init script when the system is actually still using sysvinit.
- We can now create our osd
mkdir /osd
ceph-deploy osd create ceph-test-1:/osd
ceph-deploy osd activate ceph-test-1:/osd
I'm using a plain directory here; you could also choose to use a block device, but for testing a directory is usually more than enough.
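For reference, the block device variant looks like this (assuming an empty disk at /dev/sdb; it will be wiped):
ceph-deploy osd create ceph-test-1:/dev/sdb    # assumes an empty /dev/sdb, which will be wiped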
- To check that everything worked you can now run the following commands
ceph health
ceph status
Health should report HEALTH_OK (it may briefly show HEALTH_WARN while the placement groups are being created), and the status output should show one osd up and in.
- If you need to restart the entire process, run the following commands
ceph-deploy purge ceph-test-1
ceph-deploy purgedata ceph-test-1
ceph-deploy forgetkeys
rm -rf /srv/ceph/osd /osd
- To create and map an rbd you can use the following
rbd create test --size 4096 -m ceph-test-1
rbd map test --name client.admin
You should now be able to format and mount the rbd as you would with any block device.
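For example, assuming the image got mapped to /dev/rbd0 (check with rbd showmapped below):
mkfs.ext4 /dev/rbd0    # use the device reported by rbd showmapped
mkdir /mnt/rbd-test
mount /dev/rbd0 /mnt/rbd-test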
- Current rbd mappings can be listed with the following command
rbd showmapped