GitLab – 502 Bad Gateway Error Troubleshooting

Let's dig in:

In a perfect world GitLab would now be running perfectly. Unfortunately, GitLab has surprisingly high memory requirements, and its memory usage spikes on the very first sign-in, so on a 512MB VPS it often chokes right at that point. Since the Ubuntu 12.04 VPS has no swap space, parts of GitLab get terminated when memory runs out, and needless to say GitLab does not run well when parts of it are being unexpectedly killed.
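You can quickly confirm that the droplet has no swap configured; with no swap, the Swap row in the output shows all zeros:

free -m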

The easiest solution is simply to allocate more memory to your VPS, at least for the first sign-in. If you don’t want to do that, another option is to add swap space. DigitalOcean already has a full tutorial on how to do this available here (although I would recommend adding more than just 512MB of swap). The quick fix is to run the following, which creates and enables a 1GB swapfile:

sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k
sudo mkswap /swapfile
sudo swapon /swapfile
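Before making this permanent, you can verify that the swapfile is in use:

sudo swapon -s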

Your swapfile is now running and active, but to make sure it’s activated on each boot we need to edit /etc/fstab:

sudo nano /etc/fstab

Paste the following onto the bottom of the file:

/swapfile       none    swap    sw      0       0 
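If you want to sanity-check the fstab entry without rebooting (this assumes no other swap devices are configured on the droplet), you can cycle swap off and back on, then list it again:

sudo swapoff -a
sudo swapon -a
sudo swapon -s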

Now restart your VPS:

sudo reboot

Wait a minute or two for your VPS to reboot, then try GitLab again. If it doesn’t work the first time, refresh the Bad Gateway page a couple of times, and you should soon see the GitLab login page.

References:

  1. Check out the excellent GitLab installation documentation here.
  2. For more on the 502 Bad Gateway error, see this thread.

Docker – Command Line Cheat Sheet

In an earlier post, Docker for Beginners, we talked about how to get started with Docker.

Here are a few basic commands you will need for day-to-day work with Docker.

Docker Image Operations

1) To build a docker image from a Dockerfile:

      # docker build -t thegeeklinux:1.0 .
-t sets the image name and tag (name:tag); if you don’t give a tag, it defaults to latest
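For reference, here is a minimal Dockerfile that the build command above could consume; the base image and installed package are just placeholders for illustration:

      FROM ubuntu:14.04
      MAINTAINER thegeeklinux@gmail.com
      RUN apt-get update && apt-get install -y curl
      CMD ["bash"]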

2) To list images:
      # docker images

3) To save a docker image to a tar file:
      # docker save -o /tmp/thegeeklinux.tar thegeeklinux:1.0

4) To delete a docker image:
      # docker rmi -f thegeeklinux:1.0
-f forces removal

5) To load a docker image from a tarball:
      # docker load < /tmp/thegeeklinux.tar

6) To tag a docker image:
      # docker tag thegeeklinux:1.0 localhost:5000/thegeeklinux:1.0

NOTE: localhost:5000 should be your private registry’s IP (or hostname) and port.

7) To push an image to a private or public registry server:
      # docker push localhost:5000/thegeeklinux:1.0

NOTE: replace localhost with your private registry’s hostname or IP.
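If you don’t have a private registry yet, you can spin up a throwaway one locally using the official registry image (the port mapping below is just the conventional default):

      # docker run -d -p 5000:5000 --name registry registry:2

After that, the push command above will work against localhost:5000.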

8) To pull a docker image from a private or public registry

a) To pull an image from a private registry server:
         # docker pull localhost:5000/thegeeklinux:1.0

NOTE: replace localhost with your private registry’s hostname or IP.

b) To pull an image from a public registry:
         # docker pull ubuntu

Docker Container Operations

1) To run a docker container:
      # docker run -it thegeeklinux:1.0
-i interactive mode (keeps STDIN open)
-t allocates a pseudo-terminal

      # docker run -d --name thegeeklinux -p 80:80 -h tgl -l "com.thegeeklinux=1.0" -e WORKDIR=/opt/thegeeklinux thegeeklinux:1.0

-d detached mode; runs the container in the background
-e sets an environment variable inside the container
-h sets the container’s hostname
-l attaches a label to the container for identification
-p publishes a container port to the host (host:container)
--name sets the container’s name

2) To run a command inside a docker container:
      # docker exec thegeeklinux ls

3) To log in to a container without SSH:
      # docker exec -it thegeeklinux bash

4) To list running containers (add -a to include stopped ones):
      # docker ps

5) To check container memory and CPU usage:
      # docker stats thegeeklinux

6) To copy content from a container to the host machine, or from the host to a container:
      # docker cp ./thegeeklinux.war tgl:/opt/tomcat/current/webapps/

7) To check container logs:
      # docker logs -f thegeeklinux
-f follows the log output

8) To commit a container to a new image:
      # docker commit -p -a "thegeeklinux@gmail.com" -m "commit message" tgl tgl:2.0
-a author of the commit
-m commit message
-p pauses the container during the commit

9) To export a container’s filesystem to a tar archive:
      # docker export -o /tmp/dockerexport.tar thegeeklinux

NOTE: the export command does not include attached volumes.
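To bring an exported filesystem back in as an image, use docker import; the target name:tag below is only an example:

      # docker import /tmp/dockerexport.tar thegeeklinux:exported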

10) To get container information:
      # docker inspect thegeeklinux

NOTE: the output is in JSON format.

To get the IP address of a container:
       # docker inspect -f '{{.NetworkSettings.IPAddress}}' thegeeklinux

To get the port mapped to the host:
       # docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' thegeeklinux
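On newer Docker versions, a container attached to a user-defined network exposes its IP under .NetworkSettings.Networks instead, so a more general template (an alternative form, not from the list above) is:

       # docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' thegeeklinux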

11) To stop a container:
      # docker stop thegeeklinux

12) To start a container:
      # docker start thegeeklinux

13) To get container processes:
      # docker top thegeeklinux

14) To rename a container:
      # docker rename thegeeklinux tgl

15) To restart a container:
      # docker restart thegeeklinux

16) To delete a container:
      # docker rm -vf thegeeklinux
-v also deletes attached volumes
-f forcibly deletes a running container
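A handy cleanup idiom to go with this (a common pattern, not specific to any container above) is deleting all exited containers in one shot:

      # docker rm $(docker ps -aq -f status=exited)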

Ceph with devstack – Part 1

Today we are going to see how to integrate Ceph with devstack, mapping Ceph as the backend for Nova, Glance, and Cinder.

Ceph is a massively scalable, open source, distributed storage system. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system.


Setup Dev Environment

Install OS-specific prerequisites:

sudo apt-get update
sudo apt-get install -y python-dev libssl-dev libxml2-dev \
                        libmysqlclient-dev libxslt-dev libpq-dev git \
                        libffi-dev gettext build-essential

Exercising the Services Using Devstack

This setup has only been tested on Ubuntu 14.04 (Trusty); if you don’t have one, create a VM on VirtualBox with 4 GB RAM and a 100 GB HDD.

Clone devstack:

# Create a root directory for devstack if needed
sudo mkdir -p /opt/stack
sudo chown $USER /opt/stack

git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

We will run devstack with the minimal local.conf settings required to enable the Ceph plugin along with Nova and Heat, while disabling Tempest and Horizon, which may slow down the other services. Here is the localrc file:

#[[local|localrc]]
########
# MISC #
########
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#SERVICE_TOKEN = <this is generated after running stack.sh>
# Reclone each time
#RECLONE=yes
# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
#################
# PRE-REQUISITE #
#################
ENABLED_SERVICES=rabbit,mysql,key
#########
## CEPH #
#########
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
# DevStack will create a loop-back disk formatted as XFS to store the
# Ceph data.
CEPH_LOOPBACK_DISK_SIZE=10G
# Ceph cluster fsid
CEPH_FSID=$(uuidgen)
# Glance pool, pgs and user
GLANCE_CEPH_USER=glance
GLANCE_CEPH_POOL=images
GLANCE_CEPH_POOL_PG=8
GLANCE_CEPH_POOL_PGP=8
# Nova pool and pgs
NOVA_CEPH_POOL=vms
NOVA_CEPH_POOL_PG=8
NOVA_CEPH_POOL_PGP=8
# Cinder pool, pgs and user
CINDER_CEPH_POOL=volumes
CINDER_CEPH_POOL_PG=8
CINDER_CEPH_POOL_PGP=8
CINDER_CEPH_USER=cinder
CINDER_CEPH_UUID=$(uuidgen)
# Cinder backup pool, pgs and user
CINDER_BAK_CEPH_POOL=backup
CINDER_BAK_CEPH_POOL_PG=8
CINDER_BAK_CEPH_POOL_PGP=8
CINDER_BAK_CEPH_USER=cinder-bak
# How many replicas are to be configured for your Ceph cluster
CEPH_REPLICAS=${CEPH_REPLICAS:-1}
# Connect DevStack to an existing Ceph cluster
REMOTE_CEPH=False
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring
###########################
## GLANCE - IMAGE SERVICE #
###########################
ENABLED_SERVICES+=,g-api,g-reg
##################################
## CINDER - BLOCK DEVICE SERVICE #
##################################
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
CINDER_DRIVER=ceph
CINDER_ENABLED_BACKENDS=ceph
###########################
## NOVA - COMPUTE SERVICE #
###########################
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-net
#DEFAULT_INSTANCE_TYPE=m1.micro
#Enable heat services
ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
#Enable Tempest
#ENABLED_SERVICES+=,tempest

Now run

~/devstack$ ./stack.sh

Devstack will clone from master, and Ceph will be enabled and mapped as the backend for Cinder, Glance, and Nova with a placement group (PG) count of 8 per pool; you can choose your own count in powers of two (for example, 64).

Sit back for a while as devstack clones and builds, and you should see a result like the one below.


=========================
DevStack Component Timing
=========================
Total runtime 2169
run_process 26
apt-get-update 52
pip_install 99
restart_apache_server 5
wait_for_service 20
apt-get 1653
=========================
This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: admin
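To start talking to the new cloud, source devstack’s openrc for the admin user (the path assumes the /opt/stack/devstack clone from above) and try a quick API call:

pandy@malai:~/devstack$ source openrc admin admin
pandy@malai:~/devstack$ openstack service list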


Check the health of Ceph with root permission; you should see “HEALTH_OK”:

pandy@malai:~/devstack$ sudo ceph -s
cluster 6f461e23-8ddd-4668-9786-92d2d305f178
health HEALTH_OK
monmap e1: 1 mons at {malai=10.0.2.15:6789/0}
election epoch 1, quorum 0 malai
osdmap e16: 1 osds: 1 up, 1 in
pgmap v24: 88 pgs, 4 pools, 33091 kB data, 12 objects
194 MB used, 7987 MB / 8182 MB avail
88 active+clean
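You can also confirm that the Glance, Nova, and Cinder pools from local.conf were actually created:

pandy@malai:~/devstack$ sudo ceph osd lspools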

Here you go: Ceph is installed with devstack.


Ceph Single Node Setup – Part I


Ceph’s software libraries provide client applications with direct access to the RADOS object-based storage system, and also provide a foundation for some of Ceph’s advanced features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System.

In this article, we are going to look at a single-node setup of Ceph along with the RADOS Gateway; MDS and CephFS are left out, since they are not used with the OpenStack setup. Part I discusses Ceph installation and configuration on a single node.

Lab Setup

We are going to do the setup on top of VirtualBox, attaching three SATA hard disks for Ceph; at the end of the setup we will have

  • 1 Mon
  • 3 OSDs


Pre-requisites


Create a VM on top of VirtualBox (see here) with a minimum of 2 GB RAM and a 100 GB hard disk, then create 3 SATA disks of a decent size, 25 GB each (ceph-1.vdi, ceph-2.vdi, ceph-3.vdi).


Ceph Installation

Install Ceph repo key

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

Add the Ceph (jewel release) repo to your Ubuntu sources list.

echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list

Update & install ceph-deploy

sudo apt-get update && sudo apt-get install ceph-deploy

Make sure your user account has “sudo” permission; if not, create a passwordless sudo user for ceph and verify the permission:

sudo useradd -m -s /bin/bash ceph

sudo passwd ceph

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph
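To double-check that the rule is in place, you can list the ceph user’s sudo privileges:

sudo -l -U ceph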

Switch to the newly created user

sudo su - ceph

Create an RSA keypair and copy it to the same host; in case of a multi-node setup, you have to copy it to the destination nodes as well:

ssh-keygen
ssh-copy-id ceph@malai
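Confirm that key-based login works before moving on (malai is the hostname used throughout this setup):

ssh ceph@malai hostname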

Below is a shell script which does the complete installation of Ceph on a single node; it can be found on my git page. It performs the actions below.

# A very minimal ceph install script, using ceph-deploy
set -x

RELEASE=${1:-debian-jewel}
# Creating a directory based on timestamp..not unique enough
mkdir -p ~/ceph-deploy/install-$(date +%Y%m%d%H%M%S) && cd $_

#Install ceph key
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#install ceph by pointing release repo to your Ubuntu sources list.
echo deb http://download.ceph.com/debian-jewel/ trusty main | sudo tee /etc/apt/sources.list.d/ceph.list

#Check & remove existing ceph setup
ceph-remove () {
ceph-deploy purge $HOST
ceph-deploy purgedata $HOST
ceph-deploy forgetkeys
}

#Ready to update & install ceph-deploy
sudo apt-get update && sudo apt-get install -y ceph-deploy

#Deploy ceph
HOST=$(hostname -s)
FQDN=$(hostname -f)
ceph-remove
ceph-deploy new $HOST

#Add the lines below to ceph.conf; pool size sets the number of replicas of the data
#Chooseleaf is required to tell Ceph we are only a single node and that it's OK to store replicas of the data on the same physical node
cat <<EOF >> ceph.conf
osd pool default size = 2
osd crush chooseleaf type = 0
EOF
#Time to install ceph
ceph-deploy install $HOST

#Create Monitor
ceph-deploy mon create-initial

#Prepare & activate OSDs on the attached drives /dev/sdb /dev/sdc /dev/sdd
ceph-deploy osd prepare $HOST:sdb $HOST:sdc $HOST:sdd
ceph-deploy osd activate $HOST:/dev/sdb1 $HOST:/dev/sdc1 $HOST:/dev/sdd1

#Redistribute config and keys
ceph-deploy admin $HOST

#Grant read permission on the admin keyring
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

sleep 30

#Here we go, check ceph health
ceph -s

After running the above script, you should get output like the following:

"health HEALTH_OK"
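For a bit more detail than the summary line, ceph osd tree shows how the three OSDs are laid out under the single host:

ceph osd tree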

Say bravo to yourself; the setup completed successfully.

In the next part, we will talk about object storage gateway setup and configuration, and later we will map it as the OpenStack backend storage.