Friday, December 4, 2015

Using Ceph as the unified storage in DevStack

Overview

In this blog I want to cover the two different ways of configuring Ceph as the unified storage backend for DevStack:

  • The hook-based approach (deprecated and due to be removed soon, but still good to know)
  • The plugin-based approach (newly introduced; complies with the DevStack plugin policy)

Hook-based approach

This is the classic way of configuring DevStack to use Ceph.

In this approach, all the hook scripts that install and configure Ceph, and that configure the OpenStack services to use Ceph as their storage backend, live in the DevStack repo itself.

These hook scripts are called by DevStack (as part of stack.sh) at appropriate times to install and initialize a Ceph cluster, create the OpenStack-service-specific Ceph pools, and finally configure each OpenStack service to use its respective Ceph pool as the storage backend.

Thus, at the end of stack.sh you get a fully working local Ceph cluster, with the service-specific Ceph pools acting as the storage for the respective OpenStack services.
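The hook flow inside stack.sh can be sketched roughly as follows. The stub functions below just record the call order; the names are illustrative stand-ins for what DevStack's lib/ceph (and Cinder's backend script) provide, not the actual DevStack code:

```shell
# Illustrative sketch of the order in which stack.sh drives the Ceph hooks.
# In DevStack, the real functions live in lib/ceph and the Cinder backend
# script; here they are stubs that just record their invocation order.

CALLS=()

install_ceph()          { CALLS+=("install_ceph"); }          # install packages, bring up a local cluster
configure_ceph()        { CALLS+=("configure_ceph"); }        # write ceph.conf, create service pools
configure_ceph_glance() { CALLS+=("configure_ceph_glance"); } # point Glance at its pool
configure_ceph_nova()   { CALLS+=("configure_ceph_nova"); }   # point Nova at its pool
configure_ceph_cinder() { CALLS+=("configure_ceph_cinder"); } # point Cinder at its pool

# stack.sh invokes the hooks in roughly this order
install_ceph
configure_ceph
configure_ceph_glance
configure_ceph_nova
configure_ceph_cinder

printf '%s\n' "${CALLS[@]}"
```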

Following is an example of my localrc for this approach:

# Clone git repos, connect to internet if needed
RECLONE=True
OFFLINE=False

# Passwords for my setup
DATABASE_PASSWORD=abc123
RABBIT_PASSWORD=abc123
SERVICE_TOKEN=abc123
SERVICE_PASSWORD=abc123
ADMIN_PASSWORD=abc123

# I don't need a GUI/Dashboard
disable_service horizon

# Currently, not interested in Heat
disable_service heat
disable_service h-eng
disable_service h-api
disable_service h-api-cfn
disable_service h-api-cw

# Disable nova-network and tempest
disable_service n-net
disable_service tempest

# Enable neutron and its associated services
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Enable ceph service, ask Cinder to use ceph backend
ENABLED_SERVICES+=,ceph
CINDER_ENABLED_BACKENDS=ceph

Adding ceph to ENABLED_SERVICES causes DevStack's Ceph hook script to install Ceph, create a Ceph cluster, and create the OpenStack-service-specific Ceph pools.

CINDER_ENABLED_BACKENDS tells DevStack to invoke Cinder's Ceph backend script to configure Ceph for Cinder.
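For reference, the Cinder configuration that this backend script generates typically ends up looking something like the following in cinder.conf. The pool name, user, and secret UUID here are only placeholder examples; your generated values will differ:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```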

Nova and Glance don't have backend-specific scripts, so they are configured directly by DevStack's Ceph hook script.

With the above localrc, running stack.sh should get you a basic OpenStack setup that uses Ceph as the storage backend for the Nova, Glance, and Cinder services.

Plugin-based approach

This is the new way of configuring DevStack to use Ceph.

In this approach, all the plugin scripts that install and configure Ceph, and that configure the OpenStack services to use Ceph as their storage backend, live outside of the DevStack repo, in a plugin-specific repo.

For Ceph, the repo is:

https://github.com/openstack/devstack-plugin-ceph

Like the hook scripts, the plugin scripts are called by DevStack (as part of stack.sh) at appropriate times to install and initialize a Ceph cluster, create the OpenStack-service-specific Ceph pools, and finally configure each OpenStack service to use its respective Ceph pool as the storage backend.

Thus, at the end of stack.sh you get a fully working local Ceph cluster, with the service-specific Ceph pools acting as the storage for the respective OpenStack services.

This is better than the hook-based approach because:

  • Plugin/backend changes are not carried in the DevStack repo, so DevStack stays lean and more manageable
  • It provides an API abstraction/contract between DevStack and the plugin repo, giving a modular and supportable plugin model
  • Changes to the plugin/backend can happen independently of DevStack, so both can evolve independently, and as long as the API contract is maintained, it's guaranteed to work
  • Changes to the plugin repo can be CI'ed (CI = Continuous Integration) in the plugin repo itself (instead of in DevStack), ensuring that a plugin change doesn't break DevStack for that plugin/backend. Conversely, this also means that changes to DevStack don't have to worry about the different plugins, as changes to them are gated by the plugins' respective CI job(s)
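The API contract mentioned above is simple: the plugin repo ships a devstack/plugin.sh that DevStack sources at fixed phases (pre-install, install, post-config, extra during stack; plus unstack and clean). A minimal skeleton of that dispatch, with echo stubs standing in for the real Ceph functions of devstack-plugin-ceph, could look roughly like this:

```shell
# Sketch of the DevStack plugin.sh contract, modeled as a dispatcher on
# <mode> <phase>. The echo stubs stand in for the real Ceph
# install/configure functions; this is illustrative, not the actual plugin.

plugin_dispatch() {
    local mode=$1 phase=$2
    if [[ "$mode" == "stack" ]]; then
        case "$phase" in
            pre-install) echo "install ceph packages" ;;
            install)     echo "bootstrap ceph cluster and create pools" ;;
            post-config) echo "configure glance/nova/cinder for ceph" ;;
            extra)       echo "run post-stack checks" ;;
        esac
    elif [[ "$mode" == "unstack" ]]; then
        echo "stop ceph"
    elif [[ "$mode" == "clean" ]]; then
        echo "remove ceph state"
    fi
}

# Example: what would run at the post-config phase
plugin_dispatch stack post-config
```

Because DevStack only ever calls into the plugin through these fixed phases, the plugin's internals can change freely without touching DevStack itself.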

Following is an example of my localrc for this approach:

# Clone git repos, connect to internet if needed
RECLONE=True
OFFLINE=False

# Passwords for my setup
DATABASE_PASSWORD=abc123
RABBIT_PASSWORD=abc123
SERVICE_TOKEN=abc123
SERVICE_PASSWORD=abc123
ADMIN_PASSWORD=abc123

# I don't need a GUI/Dashboard
disable_service horizon

# Currently, not interested in Heat
disable_service heat
disable_service h-eng
disable_service h-api
disable_service h-api-cfn
disable_service h-api-cw

# Disable nova-network and tempest
disable_service n-net
disable_service tempest

# Enable neutron and its associated services
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Enable ceph DevStack plugin
enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph

Unlike the hook-based approach, all that is needed in localrc is the enable_plugin ... line; there is no need to set or override any DevStack environment variables!
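For reference, enable_plugin also takes an optional git ref as a third argument, so pinning the plugin to a specific branch would look something like this (the branch name here is just an example):

```
# enable_plugin <name> <git-url> [gitref]
enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph master
```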

Looks a lot more logical, neat, and modular, doesn't it? :)

With the above localrc, running stack.sh should get you a basic OpenStack setup that uses Ceph as the storage backend for the Nova, Glance, and Cinder services.

Summary

This blog aimed to provide the steps required to quickly get a DevStack setup with Ceph as the unified storage backend. I hope it has done justice to that :)

Feel free to post comments/questions and I will do my best to answer.

A more detailed writeup on how a DevStack plugin actually works (its structure and the hooks it provides) is out of scope for this blog; maybe I will write another post for it :)

3 comments:

  1. I am trying to install DevStack (Liberty)
    by cloning it as:
    git clone https://github.com/openstack-dev/devstack.git -b stable/liberty


    Then I install Ceph by using the following plugin in the localrc file:
    # Enable ceph DevStack plugin
    enable_plugin devstack-plugin-ceph git://git.openstack.org/openstack/devstack-plugin-ceph

    And it comes up fine.
    The problem is that when I reboot the server, I lose all the Ceph configuration.

    All my ceph commands stop working and I am getting the following errors:

    adminx@cephcontrail:~$ sudo ceph status
    2016-03-17 15:55:55.489590 7fa34c7c8700 0 -- :/3530400219 >> 192.168.57.64:6789/0 pipe(0x7fa34805d050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa348059c50).fault
    2016-03-17 15:55:58.489293 7fa34c6c7700 0 -- :/3530400219 >> 192.168.57.64:6789/0 pipe(0x7fa33c000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa33c004ef0).fault
    ^CError connecting to cluster: InterruptedOrTimeoutError
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$
    adminx@cephcontrail:~$ sudo ceph mon stat
    2016-03-17 15:56:05.009688 7fa050226700 0 -- :/3529225905 >> 192.168.57.64:6789/0 pipe(0x7fa04c05d050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa04c059c90).fault
    2016-03-17 15:56:08.010524 7fa050125700 0 -- :/3529225905 >> 192.168.57.64:6789/0 pipe(0x7fa040000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fa040004ef0).fault
    ^CError connecting to cluster: InterruptedOrTimeoutEr



    The following filesystem disappears from the mount output:
    adminx@cephos2:~/devstack$ sudo mount | grep ceph


    Before reboot, I was getting the following output

    /var/lib/ceph/drives/images/ceph.img on /var/lib/ceph type xfs (rw,noatime,nodiratime,nobarrier,logbufs=8)

    And all the following ceph monitor and osd related files disappear after the reboot:

    Before reboot, I was getting the following output:

    adminx@cephos2:~/devstack$ ls -lrt /var/lib/ceph/
    total 0
    drwxr-xr-x 2 root root 6 Mar 16 12:01 radosgw
    drwxr-xr-x 2 root root 6 Mar 16 12:01 mds
    drwxr-xr-x 2 root root 32 Mar 16 12:01 tmp
    drwxr-xr-x 3 root root 25 Mar 16 12:01 mon
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-osd
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-rgw
    drwxr-xr-x 2 root root 25 Mar 16 12:01 bootstrap-mds
    drwxr-xr-x 3 root root 19 Mar 16 12:01 osd

    adminx@cephos2:~/devstack$ ls -lrt /var/lib/ceph/mon/ceph-cephos2/
    total 4
    -rw-r--r-- 1 root root 77 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 2 root root 128 Mar 16 12:01 store.db

    adminx@cephos2:~$ ls -lrt /var/lib/ceph/osd
    total 0
    drwxr-xr-x 3 root root 163 Mar 16 12:01 ceph-0
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mon
    total 0
    drwxr-xr-x 3 root root 49 Mar 16 12:01 ceph-cephos2
    adminx@cephos2:~$
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mds/
    total 0
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/osd/ceph-0/
    total 102436
    -rw-r--r-- 1 root root 53 Mar 16 12:01 superblock
    -rw-r--r-- 1 root root 4 Mar 16 12:01 store_version
    -rw-r--r-- 1 root root 37 Mar 16 12:01 fsid
    -rw-r--r-- 1 root root 2 Mar 16 12:01 whoami
    -rw-r--r-- 1 root root 6 Mar 16 12:01 ready
    -rw-r--r-- 1 root root 21 Mar 16 12:01 magic
    -rw-r--r-- 1 root root 37 Mar 16 12:01 ceph_fsid
    -rw-r--r-- 1 root root 56 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 92 root root 4096 Mar 16 12:31 current
    -rw-r--r-- 1 root root 104857600 Mar 17 15:41 journal
    adminx@cephos2:~$
    adminx@cephos2:~$
    adminx@cephos2:~$ ls -lrt /var/lib/ceph/mon/ceph-cephos2/
    total 4
    -rw-r--r-- 1 root root 77 Mar 16 12:01 keyring
    -rw-r--r-- 1 root root 0 Mar 16 12:01 upstart
    drwxr-xr-x 2 root root 230 Mar 17 16:00 store.db



    After reboot, all the above files are gone


    It seems like I need to modify my /etc/fstab file to make it persistent, and make some other Ceph-related changes so that it stays after the reboot.

    Would you please suggest anything to make it persistent?

    Replies
    1. Hi Imran,
      Yes, try with /etc/fstab to do automount post reboot, but let me be honest, I never tried this setup across reboots. Devstack setup is a developers setup and I always login and start from scratch. I am not sure if devstack setup itself is retained post reboot, since the openstack services aren't persisted in the system... so it may take a lot more than just fstab entry addition to make it work post reboot.
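      For reference, an /etc/fstab entry matching the mount output above might look something like this (note the loop option, since ceph.img is a file-backed image; untested across reboots):

      ```
      /var/lib/ceph/drives/images/ceph.img  /var/lib/ceph  xfs  loop,noatime,nodiratime  0  0
      ```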

    2. A late thought: post reboot, did you try running ./rejoin-stack.sh? I think that's the right way to resume a DevStack setup, and if all goes well, you should have your pre-reboot setup at the end of rejoin.
