Configuring Ceph storage

Last updated 28 February, 2019

Configuring Ceph with HPE OneSphere KVM 

NOTE: The following topics describe how to integrate Ceph with HPE OneSphere. These topics do not include information about the operational management of Ceph. Therefore, support for Ceph depends on the arrangements made for your storage products and is not provided through HPE OneSphere.

A Ceph block storage device is a virtual disk that can be attached to bare-metal Linux-based servers or virtual machines. To configure Ceph with HPE OneSphere, you must first successfully deploy SUSE Enterprise Storage 5, which is based on Ceph 12.2.x. This version of Ceph is also referred to as Luminous.

By default, the OpenStack Cinder volume service and Ceph related drivers are installed on the KVM node after it is connected to HPE OneSphere.

After a successful deployment of Ceph, you can onboard the Ceph Rados Block Device (RBD) into your HPE OneSphere KVM private cloud. To attach RBD volumes to a KVM instance, follow the instructions in the following topics.

Configuring Ceph 


The administrator:

  1. Log in as the root user on the master installation node.
  2. Check the health status of the Ceph cluster.
    # ceph health

    If there are any health warnings for the clusters, resolve the issue(s) before proceeding further.

  3. Execute ceph status to get the total number of Object Storage Daemons (OSDs) in your deployment.

    In the following example, there are 84 OSDs.

    # ceph status
      cluster:
        id:     9bc83efb-3a68-3cd4-aa3c-d4ceac81c97a
        health: HEALTH_OK
      services:
        mon: 3 daemons, quorum eskimo002,eskimo003,eskimo005
        mgr: eskimo005(active), standbys: eskimo003, eskimo002
        osd: 84 osds: 84 up, 84 in
      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 bytes
        usage:   90610 MB used, 137 TB / 137 TB avail
  4. Create a pool named onesphere-pool.
    # ceph osd pool create onesphere-pool {Suggested PG Count}

    A Ceph pool spans multiple OSDs, and each OSD manages a physical disk. By default, a pool spans all the OSDs in the Ceph system. When you create a Ceph pool, you must also set the number of placement groups to a reasonable value. For information about Ceph pool configuration, see "Create a Pool" in the Ceph Documentation.

    A placement group (PG) calculator helps you determine the number of placement groups appropriate for your configuration. For details on determining a suitable number, see "Placement Groups" in the Ceph Documentation.

    For example, on the placement group calculator, select OpenStack in the Ceph Use Case Selector. Then set the number of OSDs for each pool to the number of OSDs in your deployment, and set Target PGs per OSD to 200, as instructed by the calculator tool.

    In HPE OneSphere, you can set a high value for %Data for the onesphere-pool because it is the default backend used for initial onboarding. Use the resulting Suggested PG Count when you create the onesphere-pool pool.

  5. Execute the following command to list the cluster's pools.
    # ceph osd lspools
  6. Initialize the pool for use by Rados Block Device.
    # rbd pool init onesphere-pool
  7. Create a Ceph user for cinder with the appropriate privileges.
    # ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=onesphere-pool'
    [client.cinder]
        key = AQAHX31aGuyOOBAAk1I2XFOsjDN01wxEteFK/g==
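As an illustrative aside, the calculator arithmetic described in step 4 can be sketched in a few lines of shell. This is a non-authoritative sketch that assumes the published pgcalc rounding rule (round the raw value to a power of two, preferring the next higher power when the lower one falls more than 25% short of the raw value); the function name is hypothetical.

```shell
# Hypothetical sketch of the PG calculator's arithmetic.
#   $1 = OSD count, $2 = target PGs per OSD, $3 = %Data, $4 = replica count
suggest_pg_count() {
    raw=$(( $1 * $2 * $3 / (100 * $4) ))
    p=1
    while [ "$p" -lt "$raw" ]; do p=$(( p * 2 )); done    # next power of two >= raw
    lower=$(( p / 2 ))
    # keep the lower power of two only if it is within 25% of the raw value
    if [ "$lower" -gt 0 ] && [ "$lower" -ge $(( raw * 3 / 4 )) ]; then
        echo "$lower"
    else
        echo "$p"
    fi
}

# 84 OSDs, 200 target PGs per OSD, 100% data, 3 replicas -> prints 8192
suggest_pg_count 84 200 100 3
```

Always confirm the resulting figure against the official calculator before creating the pool.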

Updating Ceph client nodes 

There are two types of nodes in HPE OneSphere that operate as a Ceph client: those hosting the block storage service and those hosting the hypervisor service. Only one node hosts the block storage service.

HPE OneSphere is validated with SUSE Enterprise Storage 5, which is based on Ceph 12.2.x. The Ceph client libraries must match this version. For more information about obtaining and installing Ceph packages, see the Ceph installation documentation.


 If you want to retain the default version of the Ceph client components installed by your OS distribution, you can skip the following sections.

Perform the following procedure to update Ceph client nodes.

Updating Ceph client on CentOS 

  1. Execute ceph --version to check the currently installed version of the Ceph client.
  2. Create the /etc/yum.repos.d/ceph.repo file on the CentOS machine where the Ceph client is running.
  3. Enter the following details in the ceph.repo file.
    [ceph]
    name=Ceph packages for $basearch
    baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
    enabled=1
    gpgcheck=1
    gpgkey=https://download.ceph.com/keys/release.asc

    The ceph.repo file uses the luminous repository and the el7 distro. If you want to use a different repository or distro, modify the file as required. For more information, see the Ceph documentation.

  4. If your client system is running CentOS, install EPEL (Extra Packages for Enterprise Linux). This helps resolve the installation dependencies for the latest Ceph packages. For more information on EPEL, see the EPEL wiki.
    # yum install epel-release
  5. Execute the following command to install the latest ceph-common library.
    # yum install ceph-common

Updating Ceph client on Ubuntu 

The following procedure describes how to update the Ceph client on an Ubuntu host.

  1. Execute ceph --version to check the currently installed version of the Ceph client.
  2. Add the luminous Ceph repository, update the package index, and install the client library.
    # apt-add-repository "deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main"
    # apt-get update
    # apt-get install ceph-common
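The version check in step 1 of both update procedures can be scripted. The following is a minimal, non-authoritative sketch; the 12.2.x pattern comes from the Luminous requirement stated above, and the helper name is hypothetical.

```shell
# Hypothetical helper: succeed only for a Luminous-series (12.2.x) version string.
ceph_is_luminous() {
    case "$1" in
        12.2.*) return 0 ;;
        *)      return 1 ;;
    esac
}

# Guarded so the snippet is safe on hosts without the ceph CLI installed.
if command -v ceph >/dev/null 2>&1; then
    ver=$(ceph --version | awk '{print $3}')    # e.g. "12.2.12"
    ceph_is_luminous "$ver" \
        && echo "Luminous client OK ($ver)" \
        || echo "Ceph client is not Luminous ($ver)"
fi
```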

Configuring Ceph clients 

You must configure two classes of nodes as Ceph clients.
  • Hypervisor node: The hypervisor hosts communicate with Ceph to attach Rados Block Device (RBD) volumes to virtual machines running on the hypervisor.

  • Block storage node: The block storage host communicates with Ceph to create and delete RBD volumes.


The administrator:

  1. Copy the /etc/ceph/ceph.conf file from one of the Ceph nodes to all of the client nodes. Ensure that the global section of the ceph.conf file has the same values on the client nodes as on the Ceph nodes.

    An example of the global section of the ceph.conf file is shown below:

    [global]
    fsid = bb9d33fe-c330-419c-8a7d-6430743e3e31
    # Monitor information
    mon initial members = ceph02.ncs-ceph,ceph01.ncs-ceph,ceph03.ncs-ceph
    mon host = ceph02.ncs-ceph,ceph01.ncs-ceph,ceph03.ncs-ceph
    mon addr = {mon1-addr},{mon2-addr},{mon3-addr}
  2. (Block storage client node) Execute the following command on a Ceph node to copy the client key from the Ceph cluster to your block storage client node.
    # ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} \
      sudo tee /etc/ceph/ceph.client.cinder.keyring


    Ensure that ceph.client.cinder.keyring is secured, because it holds security credentials.

    Installing the ceph-common package lets you confirm that communication with the Ceph cluster can be established from the client node. For example, you can get the Ceph status as shown below.

    # ceph --user cinder status
      cluster:
        id:     9bc83efb-3a68-3cd4-aa3c-d4ceac81c97a
      services:
        mon: 3 daemons, quorum eskimo002,eskimo003,eskimo005
        mgr: eskimo005(active), standbys: eskimo003, eskimo002
        osd: 84 osds: 84 up, 84 in
      data:
        pools:   1 pools, 8192 pgs
        objects: 4 objects, 35 bytes
        usage:   91020 MB used, 137 TB / 137 TB avail
        pgs:     8192 active+clean


    The --user cinder option specifies that the command runs as the cinder user.

  3. Validate the connectivity of the cinder user to the onesphere-pool.
    # rbd --user cinder --pool onesphere-pool ls
  4. Generate a UUID for your hypervisors.
    # uuidgen


    Generate the UUID only once, on any one node; the same UUID is then used across all the nodes in the system.

  5. Run the following commands on all client nodes that are running the hypervisor service. On these client nodes, import the Ceph client key into the libvirt configuration so that libvirt can attach Ceph volumes to virtual machines running on the hypervisor nodes. For more information, see the rbd documentation.
    1. Create an XML file defining the libvirt secret, and import that file into libvirt.
      # cat > secret.xml <<EOF
      <secret ephemeral='no' private='no'>
        <uuid>{your-uuid-value}</uuid>
        <usage type='ceph'>
          <name>client.cinder secret</name>
        </usage>
      </secret>
      EOF
      # virsh secret-define --file secret.xml
    2. Execute the following command on a Ceph node to copy the key to all the client nodes that are running the hypervisor services.
      # ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
    3. On the client node, set the value of the libvirt secret (created in substep 1) to the cinder client key copied in the previous substep.
      # virsh secret-set-value --secret {your-uuid-value} --base64 \
        $(cat client.cinder.key) && rm client.cinder.key secret.xml
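Because ceph.client.cinder.keyring (step 2) holds security credentials, one way to lock it down is with ordinary file permissions. The following is a minimal sketch, assuming the keyring should be readable by its owner only; the helper name is hypothetical, and you should adjust the ownership to whatever account your block storage service runs as.

```shell
# Hypothetical helper: restrict a keyring file to its owner and show the result.
secure_keyring() {
    chmod 600 "$1"    # owner read/write only; group and other lose all access
    ls -l "$1"
}

# Apply only where the keyring actually exists (path from step 2).
if [ -f /etc/ceph/ceph.client.cinder.keyring ]; then
    secure_keyring /etc/ceph/ceph.client.cinder.keyring
fi
```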

Authorizing the client hosts and configuring post-authentication

  1. Log in to HPE OneSphere.
  2. From the HPE OneSphere main menu, select Providers.
  3. From the Providers screen, select Private Zones.
  4. Select the zone that contains connected KVM servers.
  5. Select the Update Zone link.
  6. From the Update Zone screen, under Enable services, open the Block storage drop-down list and select the server on which the block storage service is to be applied.


    The block storage (onesphere-pool) service must run on only one server/node.

  7. Click Update Zone.

    You will be able to view the authorized block storage service for the selected server.

  8. Update the permissions and ownership of the Ceph configuration files so that the HPE OneSphere services can access them. The following commands also restrict read access by other services.
    # user_id=`ps auxww | grep ostackhost | grep -v grep | awk '{print $1}'`
    # group_id=`id -gn ${user_id}`
    # chown -R ${user_id}:${group_id} /etc/ceph/
    # chmod -R o-r /etc/ceph/*
  9. Navigate to the Private Cloud Control advanced console.
    1. Under the Infrastructure tab, click Hosts.
    2. Select the host you want to configure, then click Configure Host.
    3. Navigate to Block Storage.
    4. Provide the block storage details in the required fields.
    5. Click Update Block Storage Details.
  10. Install the OpenStack CLI client. For more information, see Installing OpenStack CLI clients for HPE OneSphere.

    Note that you must source the appropriate OpenStack RC file for the target KVM private zone before making any OpenStack API calls through the OpenStack CLI.

  11. Create a volume type.
    # openstack volume type create --public <name of the volume type>

    In the following example, Ceph is the volume type that is created.

    # openstack volume type create --public ceph
    +-------------+--------------------------------------+
    | Field       | Value                                |
    +-------------+--------------------------------------+
    | description | None                                 |
    | id          | 8fa85091-315a-47c7-81a9-867562306bbc |
    | is_public   | True                                 |
    | name        | ceph                                 |
    +-------------+--------------------------------------+
  12. Set the property to the volume type.
    # openstack volume type set --property volume_backend_name=<name of the backend> ceph

    In the following example, onesphere-volume is the name of the volume backend for the volume type ceph.

    # openstack volume type set --property volume_backend_name='onesphere-volume' ceph

Verifying the operation of the block storage service 

Prerequisites

  • Ceph is successfully installed and configured with HPE OneSphere KVM.


Verify that the block storage service is up and running by creating a volume and attaching it to an instance.

For more information, see the "Block Storage" section of the OpenStack Installation Guide for Ubuntu.
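As an illustrative, non-authoritative sketch, the create-and-attach check can be scripted around a small polling helper. The volume name test-vol, the instance placeholder, and the timeout are hypothetical; the openstack commands assume the CLI installed in the authorization procedure above and a sourced OpenStack RC file.

```shell
# Hypothetical helper: rerun a status command until it reports the wanted state.
#   usage: wait_for_status <wanted-state> <max-tries> <command...>
wait_for_status() {
    want=$1; tries=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        [ "$("$@")" = "$want" ] && return 0
        i=$(( i + 1 ))
        sleep 1
    done
    return 1
}

# Example verification flow (names are hypothetical):
#   openstack volume create --type ceph --size 1 test-vol
#   wait_for_status available 30 openstack volume show -f value -c status test-vol
#   openstack server add volume {your-instance} test-vol
#   wait_for_status in-use 30 openstack volume show -f value -c status test-vol
```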