Ceph with devstack – part-1

Today we are going to see how to integrate Ceph with devstack and map Ceph as the backend for Nova, Glance, and Cinder.

Ceph is a massively scalable, open source, distributed storage system. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system.

 

Setup Dev Environment

Install OS-specific prerequisites:

sudo apt-get update
sudo apt-get install -y python-dev libssl-dev libxml2-dev \
                        libmysqlclient-dev libxslt-dev libpq-dev git \
                        libffi-dev gettext build-essential

Exercising the Services Using Devstack

This session has only been tested on Ubuntu 14.04 (Trusty). If you don’t have one, create a VM on VirtualBox with 4 GB RAM and a 100 GB HDD.

Clone devstack:

# Create a root directory for devstack if needed
sudo mkdir -p /opt/stack
sudo chown $USER /opt/stack

git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

We will run devstack with the minimal local.conf settings required to enable the Ceph plugin along with Nova and Heat, while disabling Tempest and Horizon, which may slow down the other services. Here is your localrc file:

[[local|localrc]]
########
# MISC #
########
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#SERVICE_TOKEN = <this is generated after running stack.sh>
# Reclone each time
#RECLONE=yes
# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
#################
# PRE-REQUISITE #
#################
ENABLED_SERVICES=rabbit,mysql,key
#########
## CEPH #
#########
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
# DevStack will create a loop-back disk formatted as XFS to store the
# Ceph data.
CEPH_LOOPBACK_DISK_SIZE=10G
# Ceph cluster fsid
CEPH_FSID=$(uuidgen)
# Glance pool, pgs and user
GLANCE_CEPH_USER=glance
GLANCE_CEPH_POOL=images
GLANCE_CEPH_POOL_PG=8
GLANCE_CEPH_POOL_PGP=8
# Nova pool and pgs
NOVA_CEPH_POOL=vms
NOVA_CEPH_POOL_PG=8
NOVA_CEPH_POOL_PGP=8
# Cinder pool, pgs and user
CINDER_CEPH_POOL=volumes
CINDER_CEPH_POOL_PG=8
CINDER_CEPH_POOL_PGP=8
CINDER_CEPH_USER=cinder
CINDER_CEPH_UUID=$(uuidgen)
# Cinder backup pool, pgs and user
CINDER_BAK_CEPH_POOL=backup
CINDER_BAK_CEPH_POOL_PG=8
CINDER_BAK_CEPH_POOL_PGP=8
CINDER_BAK_CEPH_USER=cinder-bak
# How many replicas are to be configured for your Ceph cluster
CEPH_REPLICAS=${CEPH_REPLICAS:-1}
# Connect DevStack to an existing Ceph cluster
REMOTE_CEPH=False
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring
###########################
## GLANCE - IMAGE SERVICE #
###########################
ENABLED_SERVICES+=,g-api,g-reg
##################################
## CINDER - BLOCK DEVICE SERVICE #
##################################
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
CINDER_DRIVER=ceph
CINDER_ENABLED_BACKENDS=ceph
###########################
## NOVA - COMPUTE SERVICE #
###########################
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-net
#DEFAULT_INSTANCE_TYPE=m1.micro
#Enable heat services
ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
#Enable Tempest
#ENABLED_SERVICES+=,tempest

Now run

~/devstack$ ./stack.sh

Devstack will clone the master branch, and Ceph will be enabled and mapped as the backend for Cinder, Glance, and Nova with a placement group (PG) count of 8 per pool. You can choose your own PG count; pick a power of two, such as 64.
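To pick a sensible PG count for a bigger cluster, the usual rule of thumb is (number of OSDs × 100) / replicas, rounded up to the next power of two. A minimal sketch of that calculation as a shell helper (the `pg_count` function is hypothetical, just for illustration, not part of devstack):

```shell
#!/bin/sh
# Rule of thumb: PG count ~= (OSDs * 100) / replicas,
# rounded up to the next power of two.
pg_count() {
  osds=$1
  replicas=$2
  target=$(( (osds * 100) / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}

pg_count 1 1   # single-OSD devstack cluster -> 128
pg_count 3 2   # e.g. 3 OSDs with 2 replicas -> 256
```

For a tiny loopback devstack cluster the default of 8 is fine; the rule of thumb matters once you add real OSDs.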

Sit back for a while as devstack clones and installs everything; you should get a result like the one below:

 

=========================
DevStack Component Timing
=========================
Total runtime 2169
run_process 26
apt-get-update 52
pip_install 99
restart_apache_server 5
wait_for_service 20
apt-get 1653
=========================
This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: admin

 

Check the health of Ceph with root permission; you should see “HEALTH_OK”:

pandy@malai:~/devstack$ sudo ceph -s
cluster 6f461e23-8ddd-4668-9786-92d2d305f178
health HEALTH_OK
monmap e1: 1 mons at {malai=10.0.2.15:6789/0}
election epoch 1, quorum 0 malai
osdmap e16: 1 osds: 1 up, 1 in
pgmap v24: 88 pgs, 4 pools, 33091 kB data, 12 objects
194 MB used, 7987 MB / 8182 MB avail
88 active+clean

Here you go: Ceph is installed and wired into devstack.
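If you want to script the health check rather than eyeball it, the health line can be pulled out of the `ceph -s` output. A minimal sketch, with a sample transcript inlined as a string (in real use you would set `status=$(sudo ceph -s)` instead):

```shell
#!/bin/sh
# Sample "ceph -s" transcript, inlined for illustration only.
# In practice: status=$(sudo ceph -s)
status='cluster 6f461e23-8ddd-4668-9786-92d2d305f178
health HEALTH_OK
monmap e1: 1 mons at {malai=10.0.2.15:6789/0}'

# Grab the word after "health" on its line.
health=$(printf '%s\n' "$status" | awk '$1 == "health" {print $2; exit}')
echo "$health"   # -> HEALTH_OK
```

A check like `[ "$health" = "HEALTH_OK" ]` then makes a handy guard in any follow-up automation.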

 

Ceph Single Node Setup – Part I


Ceph’s software libraries provide client applications with direct access to the RADOS object-based storage system, and also provide a foundation for some of Ceph’s advanced features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.

In this article, we are going to walk through a single-node setup of Ceph along with the RADOS Gateway; MDS and CephFS are skipped since they are not used with the OpenStack setup. Part I discusses Ceph installation and configuration on a single node.

Lab Setup

We are going to do the setup on top of VirtualBox, attaching three SATA hard disks for Ceph. At the end of the setup we will have:

  • 1 Mon
  • 3 OSDs

 

Pre-requisites

 

Create a VM on top of VirtualBox with a minimum of 2 GB RAM and a 100 GB hard disk, then attach 3 SATA hard disks of a decent size, 25 GB each (ceph-1.vdi, ceph-2.vdi, ceph-3.vdi).


Ceph Installation

Install Ceph repo key

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

Add the Ceph (jewel release) repo to your Ubuntu sources list.

echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list

Update & Install Ceph

sudo apt-get update && sudo apt-get install ceph-deploy

Make sure your user account has “sudo” permission; if not, create a passwordless sudo user for Ceph and verify the permission:

sudo useradd -m -s /bin/bash ceph

sudo passwd ceph

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph

Switch to the newly created user

sudo su - ceph

Create an RSA keypair (with ssh-keygen) and copy it to the same host; in case of a multi-node setup, you have to copy it to the destination nodes:

ssh-copy-id ceph@malai
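If no keypair exists yet, ssh-copy-id has nothing to copy. A minimal sketch of generating one (the temp directory is purely for demonstration; for the real setup, omit -f so ssh-keygen defaults to ~/.ssh/id_rsa):

```shell
#!/bin/sh
# Demo only: generate a passwordless RSA keypair in a temp directory.
# -N '' sets an empty passphrase; -q suppresses the banner.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
ls "$keydir"   # id_rsa  id_rsa.pub
# then, for the real key: ssh-copy-id ceph@malai
```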

Below is a shell script which does the complete installation of Ceph on a single node (it can also be found on my git page); it performs the following actions:

# A very minimal ceph install script, using ceph-deploy
set -x

RELEASE=${1:-debian-jewel}
# Creating a directory based on timestamp..not unique enough
mkdir -p ~/ceph-deploy/install-$(date +%Y%m%d%H%M%S) && cd $_

#Install ceph key
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#install ceph by pointing release repo to your Ubuntu sources list.
echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list

#Check & remove existing ceph setup
ceph-remove () {
ceph-deploy purge $HOST
ceph-deploy purgedata $HOST
ceph-deploy forgetkeys
}

#Ready to update & install ceph-deploy
sudo apt-get update && sudo apt-get install -y ceph-deploy

#Deploy ceph
HOST=$(hostname -s)
FQDN=$(hostname -f)
ceph-remove
ceph-deploy new $HOST

#Add below lines into ceph.conf, pool size for number of replicas of data
#Chooseleaf is required to tell ceph we are only a single node and that it's OK to store the same copy of data on the same physical node
cat <<EOF >> ceph.conf
osd pool default size=2
osd crush chooseleaf type = 0
EOF
#Time to install ceph
ceph-deploy install $HOST

#Create Monitor
ceph-deploy mon create-initial

#Create OSD & OSD with mounted drives /dev/sdb /dev/sdc /dev/sdd
ceph-deploy osd prepare $HOST:sdb $HOST:sdc $HOST:sdd
ceph-deploy osd activate $HOST:/dev/sdb1 $HOST:/dev/sdc1 $HOST:/dev/sdd1

#Redistribute config and keys
ceph-deploy admin $HOST

#Give read permission on the keyring
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

sleep 30

#Here we go, check ceph health
ceph -s

After running the above script, you will get output like the below:

"health HEALTH_OK"
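Beyond the health line, you can also confirm that all three OSDs came up by parsing the osdmap line of `ceph -s`. A quick sed sketch over a sample line (inlined here for illustration; in practice set `osdmap=$(sudo ceph -s | grep osdmap)`):

```shell
#!/bin/sh
# Sample osdmap line from "ceph -s" on a healthy 3-OSD node.
osdmap='osdmap e16: 3 osds: 3 up, 3 in'

# Extract the number just before "up".
up=$(printf '%s\n' "$osdmap" | sed -n 's/.* \([0-9][0-9]*\) up.*/\1/p')
echo "$up OSDs up"   # -> 3 OSDs up
```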

Say bravo to yourself; the setup completed successfully.

In the next part, we will talk about object storage gateway (RADOS Gateway) setup and configuration, and later we will map it as the OpenStack backend storage.