OpenStack Ceph integration


Maybe the most interesting part about Ceph is its integration with OpenStack, so I will show that before moving on to Ceph RadosGW. Before going through the instructions below, I assume that you have a Ceph cluster up and running (Bobtail or later) and OpenStack (Folsom or later). Ceph uses block devices with OpenStack through libvirt, which configures the QEMU interface to librbd. Two parts of OpenStack integrate with Ceph’s block devices:

  • Images: OpenStack Glance manages images for VMs
  • Volumes: OpenStack uses volumes to boot VMs, or to attach volumes to running VMs

Ceph cluster

Create pools for volumes and images:
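Something like this should do it (the pool names match the rest of this post; the placement-group count of 128 is an assumption, size it for your cluster):

ceph osd pool create volumes 128
ceph osd pool create images 128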

Increase replication level for both pools:
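For example, to keep three copies of every object (the replica count of 3 is an assumption):

ceph osd pool set volumes size 3
ceph osd pool set images size 3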

Now create Ceph clients for both pools along with keyrings:
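A sketch using the capabilities from the standard Ceph-OpenStack setup of that era (exact caps may differ between releases):

ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images' -o /etc/ceph/ceph.client.volumes.keyring
ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.images.keyring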

Copy the above keyrings, along with the ceph.conf file, to the /etc/ceph directory on the glance-api and cinder-volume nodes. Hosts running nova-compute do not need the keyring; instead, they store the secret key in libvirt. To create the libvirt secret you will need the key from client.volumes.keyring, which you can get with:
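For example, saving it to a file (the name client.volumes.key is just a convenient choice):

ceph auth get-key client.volumes | tee client.volumes.key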

Copy this key to the nova-compute node, anywhere you like; once setup is done you can remove it. For installing the Ceph components you will also need Ceph’s official repository (Ubuntu):
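Roughly like this (the release key URL and repository path are from the Bobtail era and may have moved since):

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update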

After you copy ceph.conf from the Ceph cluster to the glance-api and cinder-volume nodes, update it with the keyring paths:
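Something like this, assuming the keyring file names used above:

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring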

Glance

On the glance-api host, you’ll need the Python bindings for librbd:
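On Ubuntu the bindings ship in the python-ceph package:

sudo apt-get install python-ceph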

Update your glance-api configuration file (/etc/glance/glance-api.conf):
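The relevant settings look roughly like this (option names are from the Folsom/Grizzly releases and may differ later):

default_store = rbd
rbd_store_user = images
rbd_store_pool = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf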

And restart glance-api service:
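On Ubuntu:

sudo service glance-api restart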

Nova compute

Create temporary secret.xml file:
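A minimal secret definition for the client.volumes user looks like this (the first comment below suggests also pinning the uuid explicitly so all compute nodes share the same secret):

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF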

Generate secret from created secret.xml file:
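virsh prints the UUID of the newly defined secret; note it down:

sudo virsh secret-define --file secret.xml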

Set libvirt secret using above key:
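Something like this, where {uuid of secret} is the UUID printed by secret-define and client.volumes.key is the key file copied earlier:

sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)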

Write the key down somewhere so you can add it to more compute nodes with the above command; you will also need it for the cinder configuration. Restart nova-compute:
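sudo service nova-compute restart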

Upgrade librbd to the latest version from the Ceph repository; the version Ubuntu ships is too old and will not work:
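With the Ceph repository enabled as above:

sudo apt-get update
sudo apt-get install librbd1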

Cinder volume

On the cinder-volume host, install the client command line tools:
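On Ubuntu:

sudo apt-get install ceph-common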

Update cinder configuration (/etc/cinder/cinder.conf):
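The RBD driver settings look roughly like this (the driver path shown is the Folsom-era one; in Grizzly and later it moved to cinder.volume.drivers.rbd.RBDDriver):

volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid={uuid of secret}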

After the “stop on runlevel [!2345]” line, add the following to the cinder upstart configuration file (/etc/init/cinder-volume.conf):
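This makes the service authenticate as the volumes user:

env CEPH_ARGS="--id volumes"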

Restart cinder-volume service:
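sudo service cinder-volume restart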

Once everything is done you should be able to store Glance images, create volumes, attach them to running machines, or boot machines from volumes. Ceph is also compatible with the Swift API, so you can put objects through Horizon directly into Ceph. More about that once I cover RadosGW.


Alen Komljen

I'm a DevOps/Cloud engineer with experience that spans a broad portfolio of skills, including cloud computing, software deployment, process automation, shell scripting and configuration management, as well as Agile development and Scrum. This has allowed me to excel at solving challenges across cloud computing and the entire IT infrastructure, and it feeds my deep interest in OpenStack, Ceph, Docker, and the open-source community.

  • pjimenezsolis

    Thanks for this marvelous article, Alen.

    I would add a note for setups with multiple Compute Nodes: the secret XML should be the same on all of them, so it is recommended to set the uuid manually (see https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1065883 ).


    cat > secret.xml << EOF
    <secret ephemeral='no' private='no'>
      <uuid>85d91eb7-8a48-0420-eb70-89297f148416</uuid>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>
    EOF

    Generate secret from created secret.xml file:

    virsh secret-define --file secret.xml

    Set libvirt secret using above key:

    virsh secret-set-value --secret 85d91eb7-8a48-0420-eb70-89297f148416 \
      --base64 $(cat {path_to_client.volumes})

    Regards.

    • Alen Komljen

      Thanks and great tip as I didn’t see this before.

  • Eleftheria Kiourtzoglou

    Hello Mr Komljen,

    Nice blog! Is there an email address I can contact you in private?

    Thanks,

    Eleftheria Kiourtzoglou

    Head of Editorial Team

    Java Code Geeks

    email: eleftheria[dot]kiourtzoglou[at]javacodegeeks[dot]com

    • Alen Komljen

      Thanks, you can send to: info[at]techbar[dot]me

  • ramonskie

    did you get radosgw and keystone working yet?

    i’m almost there but can’t seem to use a user from keystone to create buckets or put objects. i get a permission denied

    i can create buckets in horizon but not put objects
    this is a bug https://bugs.launchpad.net/horizon/+bug/1200534

    • Alen Komljen

      Yes I did. You can use this fix until the bug is resolved:

      Go to the file /usr/share/openstack-dashboard/openstack_dashboard/api/swift.py and add one line (content_length=object_file.size) inside this def:

      def swift_upload_object(request, container_name, object_name, object_file):
          headers = {}
          headers['X-Object-Meta-Orig-Filename'] = object_file.name
          etag = swift_api(request).put_object(container_name,
                                               object_name,
                                               object_file,
                                               content_length=object_file.size,
                                               headers=headers)
          obj_info = {'name': object_name, 'bytes': object_file.size, 'etag': etag}
          return StorageObject(obj_info, container_name)

      • ramonskie

        yes i did this and it now works from my openstack dashboard 🙂

        what i can’t get to work is using the swift command line tool to create or upload something

        it gives me a permission error
        so my guess is that i’m using the wrong credentials
        you need to use your username and ec2 key right?

        • Alen Komljen

          I tried to delete files with the swift client and that worked, but I didn’t try to upload anything through the command line tool. I used the same credentials as for the other OpenStack command line tools.

  • Virusgunz

    It hasn’t been totally clear to me, so the way I get it is: you start with an OpenStack controller node and an OpenStack compute node (so you do not need the OpenStack block and object storage node, right?), and besides that you just build a Ceph cluster. Am I seeing this correctly?

    • ramonskie

      On the controller node you need to put cinder and glance,
      and in the glance/cinder config you need to set the Ceph settings.
      See the Ceph documentation.

      • Virusgunz

        Ahh, so you make a full Ceph cluster following the Ceph documentation and then set the Ceph cluster config in the glance/cinder config.