OpenStack – Delete Stuck Cinder Volume

Recently, I was using the Devstack/Ocata version of OpenStack and was trying to attach/detach volumes to an instance. Every once in a while, volumes would stay in an 'in-use' state even after the instance was destroyed.

In fact, even in other releases, I have seen Cinder volumes stuck in the in-use or error state, and sometimes they cannot be deleted.

If the volume is in 'in-use' status, you first have to change it to 'available' before you can issue a delete:

cinder reset-state --state available $VOLUME_ID

cinder delete $VOLUME_ID

If cinder delete doesn't work and you have admin privileges, you can try a force-delete:

cinder force-delete $VOLUME_ID

Maybe that will fix it; maybe it won't. If the volume is still stuck, try going to the database and setting the status of the volume to a detached state:

update volume_attachment set attach_status="detached" where id="<attachment_id>";
update volumes set attach_status="detached" where id="<volume_id>";
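If you do not know the attachment ID yet, here is a minimal lookup sketch (assuming the Cinder database is simply named cinder; the IDs are placeholders):

mysql -u root -p cinder
select id, attach_status from volume_attachment where volume_id="<volume_id>";
select id, status, attach_status from volumes where id="<volume_id>";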
Once I did that, I was able to delete or force-delete any stuck volumes. 🙂

Best way to install OpenStack Ocata (Devstack) – Quick Tips

Here we are going to see how to install devstack from a stable branch, with a local.conf file and some tricks to resolve common devstack errors.

Pre-requisites

  • Ubuntu 16.04 (14.04 has compatibility issues with devstack requirements)
  • 8 GB RAM (minimum)

 

Steps:

  1. Clone devstack

git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata

2. Set permissions

sudo chown -R <username> devstack

sudo chmod 770 devstack

3. cd into devstack, create a local.conf file, and enter the details below

[[local|localrc]]
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

DEFAULT_VOLUME_GROUP_NAME=stack-volumes-default
PIP_UPGRADE=True
RECLONE=False
DEFAULT_INSTANCE_TYPE=m1.tiny

HOST_IP=<Localhost IP>

#Enable SENLIN
enable_plugin senlin https://git.openstack.org/openstack/senlin
#Enable senlin-dashboard
enable_plugin senlin-dashboard https://git.openstack.org/openstack/senlin-dashboard

#Enable HEAT
enable_plugin heat https://git.openstack.org/openstack/heat stable/ocata

#Enable Aodh and ceilometer
CEILOMETER_BACKEND=mongodb
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer stable/ocata
enable_plugin aodh https://git.openstack.org/openstack/aodh stable/ocata

# Enable Gnocchi
enable_plugin gnocchi https://github.com/gnocchixyz/gnocchi stable/4.0
enable_service gnocchi-grafana,gnocchi-api,gnocchi-metricd

 

#Enable LBAAS V2
enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas stable/ocata
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
enable_service q-lbaasv2

#Enable LBAAS V2 Dashboard
enable_plugin neutron-lbaas-dashboard https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/ocata

#Enable Octavia LBAAS v2 Driver
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/ocata
ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api

# Enable Logging

LOGFILE=/opt/stack/logs/stack.sh.log
LOGDAYS=2
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs

4. Create a file dev_set.sh and add the entries below

 

#!/bin/bash
git config --global url."https://".insteadOf git://
export no_proxy=127.0.0.1,<localhost IP>

Save the file and run: source dev_set.sh

 

Note: no_proxy keeps Keystone authentication traffic to the localhost IP from going through a proxy; a few of you might come across this issue.

 

Common devstack errors and tricks to solve them

  1. For Keystone credential errors, run dev_set.sh as mentioned above.
  2. For "Could not satisfy constraints for 'horizon': installation from path or url cannot be constrained to a version", try git reset --hard in /opt/stack/requirements (see the sketch after this list); this occurs when you run ./stack.sh a second or later time.
  3. While enabling Gnocchi with a stable version, make sure uuidgen is installed; if not, run apt-get install uuid-runtime.
  4. To connect to a screen session, run screen -ls to see the screen number, then connect with screen -r <screen number> and restart services if any changes are needed.
  5. For the error "[ERROR] /home/pandy/devstack/stackrc:747 Could not determine host ip address. See local.conf for suggestions on setting HOST_IP", open the stackrc file, add HOST_IP=<localhost_IP> manually, and run again. It sounds crazy, but it works.
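A minimal sketch of the fix for error 2, assuming devstack checked out the requirements repo under /opt/stack:

cd /opt/stack/requirements
git reset --hard          # drop the local constraint changes left by the previous run
cd ~/devstack
./stack.sh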

Now run ./stack.sh and enjoy the devstack installation. Phew!!

 

Tacker Installation on OpenStack

What is Tacker?

Tacker is an official OpenStack project building a Generic VNF Manager (VNFM) and an NFV Orchestrator (NFVO) to deploy and operate Network Services and Virtual Network Functions (VNFs) on an NFV infrastructure platform like OpenStack. It is based on the ETSI MANO Architectural Framework and provides a functional stack to orchestrate Network Services end-to-end using VNFs.

 

High Level Architecture

[Image: ETSI MANO Tacker.JPG]

To know more about the architecture, click Tacker.

Installation on single node setup (Devstack)

1) Pull the devstack repo, either master or any stable release (git clone -b stable/<stable release name>):

Note: Tacker is supported from the OpenStack Kilo release onward.

git clone https://github.com/openstack-dev/devstack

2) A sample local.conf is available at https://raw.githubusercontent.com/openstack/tacker/master/devstack/samples/local.conf.example. Copy it to the devstack root directory and customize it based on your environment settings. Update HOST_IP to the IP address of the VM or host where you are running tacker.

Note: Ensure the local.conf file has the "enable_plugin tacker" line and that it points to master.

3) Run stack.sh

Installation on Multinode setup:

Prerequisites:

  • Hardware: minimum 8GB RAM, Ubuntu (version 14.04)
  • Ensure that OpenStack components Keystone, Glance, Nova, Neutron, Heat and Horizon are installed.
  • Git & Python packages should be installed

sudo apt-get install python-pip git

Steps:

  1. Create a client environment source file

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
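A quick usage sketch, assuming you save the exports above as admin-openrc.sh and have python-openstackclient installed:

source admin-openrc.sh
openstack token issue    # verifies the credentials against Keystone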

 

Ensure an entry for extension drivers exists in /etc/neutron/plugins/ml2/ml2_conf.ini, and restart the Neutron services after the entry below has been added:

[ml2]
extension_drivers = port_security

 

Modify heat's policy.json file under /etc/heat/policy.json to allow users in non-admin projects with 'admin' roles to create flavors:

"resource_types:OS::Nova::Flavor": "role:admin"

Install Tacker server

Before you install and configure Tacker server, you must create a database, service credentials, and API endpoints.

 

  1. To create the database, complete these steps:
    • Use the database access client to connect to the database server as the root user: mysql -u root -p
    • Create the tacker database: create database tacker;
    • Grant proper access to the tacker database:
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
      IDENTIFIED BY 'TACKER_DBPASS';
      GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
      IDENTIFIED BY 'TACKER_DBPASS';
      Replace 'TACKER_DBPASS' with a suitable password.
    • Exit the database access client.
  2. Source the admin credentials to gain access to admin-only CLI commands: source admin-openrc.sh
  3. To create the service credentials, complete these steps:
    • Create the tacker user:
      openstack user create --domain default --password <PASSWORD> tacker
      Replace <PASSWORD> with a suitable password.
    • Add the admin role to the tacker user:
      openstack role add --project services --user tacker admin
      Note: Project_name can be service or services. Verify the project_name under the [keystone_authtoken] section in the /etc/nova/nova.conf file.
    • Create the tacker service:
      openstack service create --name tacker --description "nfv-orchestration" servicevm
    • Create the tacker service API endpoints:
      openstack endpoint create --region RegionOne <Service Type or Service ID> public http://<TACKER_NODE_IP>:8888
      openstack endpoint create --region RegionOne <Service Type or Service ID> admin http://<TACKER_NODE_IP>:8888
      openstack endpoint create --region RegionOne <Service Type or Service ID> internal http://<TACKER_NODE_IP>:8888
  4. Clone the tacker repository:
    git clone -b stable/liberty https://github.com/openstack/tacker
  5. Install all requirements. The requirements.txt file contains the set of python packages required to run Tacker-Server:
    cd tacker
    sudo pip install -r requirements.txt
    Note: If the OpenStack components mentioned in the prerequisites section have been installed, the command below would be sufficient:
    sudo pip install tosca-parser
  6. Install tacker:
    sudo python setup.py install
  7. Create a 'tacker' directory in /var/log:
    sudo mkdir /var/log/tacker
    Note: The above referenced path /var/log is for Ubuntu and may be different for other Operating Systems.
  8. Edit tacker.conf to ensure the below entries:
    Note:

      1. In Ubuntu 14.04, tacker.conf is located at /usr/local/etc/tacker/. The ini sample below is for Ubuntu; the directory paths referred to in it may be different for other Operating Systems.
      2. Project_name can be service or services. Verify the project_name in [keystone_authtoken] section in the /etc/nova/nova.conf file.

    [DEFAULT]
    auth_strategy = keystone
    policy_file = /usr/local/etc/tacker/policy.json
    debug = True
    use_syslog = False
    state_path = /var/lib/tacker
    ...
    [keystone_authtoken]
    project_name = services
    password = <TACKER_SERVICE_USER_PASSWORD>
    auth_url = http://<KEYSTONE_IP>:35357
    identity_uri = http://<KEYSTONE_IP>:5000
    auth_uri = http://<KEYSTONE_IP>:5000
    ...
    [agent]
    root_helper = sudo /usr/local/bin/tacker-rootwrap
    /usr/local/etc/tacker/rootwrap.conf
    ...
    [DATABASE]
    connection = mysql://tacker:<TACKERDB_PASSWORD>@<MYSQL_IP>:3306/tacker?charset=utf8
    ...
    [servicevm_nova]
    password = <NOVA_SERVICE_USER_PASSWORD>
    auth_url = http://<NOVA_IP>:35357
    ...
    [servicevm_heat]
    heat_uri = http://<HEAT_IP>:8004/v1

  9. Populate the Tacker database:
    Note: The below command is for the Ubuntu Operating System.
    /usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head
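A hedged verification sketch for the steps above (the service, endpoints, and DB user are the ones created earlier; adjust credentials to your environment):

openstack service list | grep tacker    # service registered in Keystone
openstack endpoint list | grep 8888     # the three API endpoints exist
mysql -u tacker -p -e 'show databases;' # the tacker DB user can log in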

Install Tacker client

  1. Clone the tacker-client repository:
    cd ~/
    git clone -b stable/liberty https://github.com/openstack/python-tackerclient
  2. Install tacker-client:
    cd python-tackerclient
    sudo python setup.py install

Install Tacker horizon

  1. Clone the tacker-horizon repository:
    cd ~/
    git clone -b stable/liberty https://github.com/openstack/tacker-horizon
  2. Install the horizon module:
    cd tacker-horizon
    sudo python setup.py install
  3. Enable tacker horizon in the dashboard:
    sudo cp openstack_dashboard_extensions/* /usr/share/openstack-dashboard/openstack_dashboard/enabled/
    Note: The destination path above is for Ubuntu 14.04 and may change for other Operating Systems.
  4. Restart the Apache server:
    sudo service apache2 restart

Starting Tacker server

Note: Ensure that ml2_conf.ini has been configured as described in the prerequisites section above.
sudo python /usr/local/bin/tacker-server --config-file /usr/local/etc/tacker/tacker.conf --log-file /var/log/tacker/tacker.log &
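A quick hedged check that the server is up (it listens on port 8888, matching the endpoints created earlier):

curl http://<TACKER_NODE_IP>:8888    # should answer rather than refuse the connection
tail /var/log/tacker/tacker.log      # watch for startup errors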

 

Testing Tacker

Run the following tacker commands to verify whether tacker is working fine:

tacker ext-list
tacker vnf-list
tacker device-list

 

A simple set of vnfd-create, vnf-create and vnf-update commands are shown below.

tacker vnfd-create --name ${VNFD_NAME} --vnfd-file ${VNFD_TOSCA_YAML-FILE}

tacker vnf-create --name vnf-name --vnfd-id ${VNFD_ID}

tacker vnf-update --config "${CONFIG_DATA_YAML}" ${VNF_ID}

If command-line tacker works fine, try out Tacker using Horizon (the NFV entry in the left menu).

Now Tacker is ready, start to play !!

OpenStack user data – SSH access to a CentOS instance without a key

You can use user-data to set a password for a user. When launching an instance, you'll paste this into the user-data:

#cloud-config
ssh_pwauth: True 
disable_root: false 
chpasswd:
  list: |
      user:password 
  expire: false

ssh_pwauth turns on the ability to use password auth. disable_root: false enables the root user. chpasswd sets up a password for the user.

It's not recommended to use this regularly; use it for testing. Otherwise you should be using a user + SSH key.

This will allow you to SSH with a password. It didn’t work on Debian as the options were different in the sshd config, but should work with CentOS.
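A hedged launch sketch, assuming the cloud-config above is saved as user-data.yaml (the image, flavor, and server names are placeholders):

openstack server create --image CentOS-7 --flavor m1.small \
  --user-data user-data.yaml centos-password-test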

Ceph with devstack – part-1

Today we are going to see how to integrate Ceph with devstack and map Ceph as the backend for Nova, Glance, and Cinder.

Ceph is a massively scalable, open source, distributed storage system. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system.

 

Setup Dev Environment

Install OS-specific prerequisites:

sudo apt-get update
sudo apt-get install -y python-dev libssl-dev libxml2-dev \
                        libmysqlclient-dev libxslt-dev libpq-dev git \
                        libffi-dev gettext build-essential

Exercising the Services Using Devstack

This session has only been tested on Ubuntu 14.04 (Trusty); if you don't have one, create a VM on VirtualBox with 4 GB RAM and a 100 GB HDD.

Clone devstack:

# Create a root directory for devstack if needed
sudo mkdir -p /opt/stack
sudo chown $USER /opt/stack

git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack

We will run devstack with the minimal local.conf settings required to enable the ceph plugin along with nova and heat, and with tempest and horizon disabled, since they may slow down the other services. Here is your localrc file:

#[[local|localrc]]
########
# MISC #
########
ADMIN_PASSWORD=admin
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#SERVICE_TOKEN = <this is generated after running stack.sh>
# Reclone each time
#RECLONE=yes
# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
#################
# PRE-REQUISITE #
#################
ENABLED_SERVICES=rabbit,mysql,key
#########
## CEPH #
#########
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
# DevStack will create a loop-back disk formatted as XFS to store the
# Ceph data.
CEPH_LOOPBACK_DISK_SIZE=10G
# Ceph cluster fsid
CEPH_FSID=$(uuidgen)
# Glance pool, pgs and user
GLANCE_CEPH_USER=glance
GLANCE_CEPH_POOL=images
GLANCE_CEPH_POOL_PG=8
GLANCE_CEPH_POOL_PGP=8
# Nova pool and pgs
NOVA_CEPH_POOL=vms
NOVA_CEPH_POOL_PG=8
NOVA_CEPH_POOL_PGP=8
# Cinder pool, pgs and user
CINDER_CEPH_POOL=volumes
CINDER_CEPH_POOL_PG=8
CINDER_CEPH_POOL_PGP=8
CINDER_CEPH_USER=cinder
CINDER_CEPH_UUID=$(uuidgen)
# Cinder backup pool, pgs and user
CINDER_BAK_CEPH_POOL=backup
CINDER_BAK_CEPH_POOL_PG=8
CINDER_BAK_CEPH_POOL_PGP=8
CINDER_BAK_CEPH_USER=cinder-bak
# How many replicas are to be configured for your Ceph cluster
CEPH_REPLICAS=${CEPH_REPLICAS:-1}
# Connect DevStack to an existing Ceph cluster
REMOTE_CEPH=False
REMOTE_CEPH_ADMIN_KEY_PATH=/etc/ceph/ceph.client.admin.keyring
###########################
## GLANCE - IMAGE SERVICE #
###########################
ENABLED_SERVICES+=,g-api,g-reg
##################################
## CINDER - BLOCK DEVICE SERVICE #
##################################
ENABLED_SERVICES+=,cinder,c-api,c-vol,c-sch,c-bak
CINDER_DRIVER=ceph
CINDER_ENABLED_BACKENDS=ceph
###########################
## NOVA - COMPUTE SERVICE #
###########################
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-cond,n-sch,n-net
#DEFAULT_INSTANCE_TYPE=m1.micro
#Enable heat services
ENABLED_SERVICES+=,h-eng,h-api,h-api-cfn,h-api-cw
#Enable Tempest
#ENABLED_SERVICES+=,tempest

Now run

~/devstack$ ./stack.sh

Devstack will clone from master; ceph will be enabled and mapped as the backend for cinder, glance, and nova with a PG pool size of 8. You can set your own pool size in powers of two (for example, 64) as you wish.

Sit back for a while as devstack clones and installs; you should get a result like the one below.

 

=========================
DevStack Component Timing
=========================
Total runtime 2169
run_process 26
apt-get-update 52
pip_install 99
restart_apache_server 5
wait_for_service 20
apt-get 1653
=========================
This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: admin

 

Check the health of ceph with root permission; you should see "HEALTH_OK":

pandy@malai:~/devstack$ sudo ceph -s
cluster 6f461e23-8ddd-4668-9786-92d2d305f178
health HEALTH_OK
monmap e1: 1 mons at {malai=10.0.2.15:6789/0}
election epoch 1, quorum 0 malai
osdmap e16: 1 osds: 1 up, 1 in
pgmap v24: 88 pgs, 4 pools, 33091 kB data, 12 objects
194 MB used, 7987 MB / 8182 MB avail
88 active+clean
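As a hedged follow-up, list the pools the plugin created (the pool names come from the local.conf above):

sudo ceph osd lspools    # expect the images, vms, volumes, and backup pools
sudo rbd ls volumes      # objects in the Cinder pool, once volumes exist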

Here you go, ceph is installed with devstack.

 

OpenStack – Delete Error-State Instances

Many of us know that while deleting a VM, it sometimes gets stuck in the error state. This can happen for many reasons, like issues in the message queue, the DB, or elsewhere.

As a first round of troubleshooting, people will do the steps below.

 

Delete by resetting the state of the VM

nova reset-state --active {uuid-of-instance}

Check its state by nova list or nova show {uuid-of-instance}

Then try to delete it using command

nova delete {uuid-of-instance}

OR doing the force-delete

nova force-delete {uuid-of-instance}

 

But still, a few things may keep your VM in the error state, so I came up with removing the instance and all dependent records from the nova database, automated as below. The script removes the dependent mapped volumes, deletes the instances, and frees your security groups and other resources.

#!/bin/bash
# Purge all nova instances stuck in the "error" state, together with their
# dependent records (security group associations, block device mappings,
# info caches, and fixed IP allocations).

echo "Enter your MySQL user"
read MYSQL_USER

echo "Enter your MySQL user password"
read -s MYSQL_PASSWD   # -s keeps the password off the terminal

echo "Enter your MySQL host"
read MYSQL_HOST

# Delete dependent rows first so the final DELETE on instances succeeds.
mysql -u$MYSQL_USER -p$MYSQL_PASSWD -h$MYSQL_HOST -e 'USE nova; DELETE FROM security_group_instance_association WHERE instance_id IN (SELECT id FROM instances WHERE vm_state = "error");'
mysql -u$MYSQL_USER -p$MYSQL_PASSWD -h$MYSQL_HOST -e 'USE nova; DELETE FROM block_device_mapping WHERE instance_id IN (SELECT id FROM instances WHERE vm_state = "error");'
mysql -u$MYSQL_USER -p$MYSQL_PASSWD -h$MYSQL_HOST -e 'USE nova; DELETE FROM instance_info_caches WHERE instance_id IN (SELECT uuid FROM instances WHERE vm_state = "error");'
mysql -u$MYSQL_USER -p$MYSQL_PASSWD -h$MYSQL_HOST -e 'USE nova; UPDATE fixed_ips SET allocated = 0 WHERE instance_id IN (SELECT id FROM instances WHERE vm_state = "error");'
mysql -u$MYSQL_USER -p$MYSQL_PASSWD -h$MYSQL_HOST -e 'USE nova; DELETE FROM instances WHERE vm_state = "error";'
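Before running the script, a hedged pre-check to see exactly which instances it will touch:

nova list --all-tenants --status ERROR
# or directly in the DB:
mysql -u root -p -e 'USE nova; SELECT uuid, display_name FROM instances WHERE vm_state = "error";'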

 

Here we go !!

 

OpenStack Cinder Backup & Restore

Cinder is the code name for the open source project to develop OpenStack Block Storage, the block-based storage component of the OpenStack platform for cloud computing.

Today we are going to see how to take a backup and restore it, with either Ceph or LVM as the backend storage.

For better lab practice, clone devstack along with swift and cinder backup.

Step 1:  Clone devstack

git clone https://github.com/openstack-dev/devstack.git

Step 2: Clone a localrc that contains cinder backup and swift settings

git clone https://github.com/maestropandy/openstack_localrc.git

Step 3: Copy the localrc from the cloned repo to devstack

cp localrc.txt devstack/localrc

Step 4: Deploy devstack as non-root user

./stack.sh

Step 5: Source with admin tenant

source openrc admin admin

Step 6: Check the installed service list; you should see cinder and swift

Step 7: Run cinder service-list

 

Step 8: Create a volume of 1 GB

cinder create --display_name pandi 1

Step 9: Create a backup

cinder backup-create b7223f13-fd5c-462b-9318-0c47b2a306f1

Step 10: See the created backup

cinder backup-list

 

Step 11: Listing the swift container volumebackups shows the backups created by cinder; restore the backup volume with the commands below:

swift list volumebackups | grep 2014c866-5939-494d-ba26-2c78acfd0230

cinder backup-restore 2014c866-5939-494d-ba26-2c78acfd0230
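A hedged variant: restore into a specific existing volume instead of letting cinder create a new one (flag name as in clients of that era):

cinder backup-restore --volume-id <volume-id> 2014c866-5939-494d-ba26-2c78acfd0230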

 

See the output of cinder list:

| ID | Status | Migration Status | Name | Size | Volume Type | Bootable | Multiattach | Attached to |

| b7223f13-fd5c-462b-9318-0c47b2a306f1 | available | – | pandy | 1 | lvmdriver-1 | false | False | |
| fc1c6f25-bde7-4f4b-aca8-05ad70fa1c1c | available | – | restore_backup_2014c866-5939-494d-ba26-2c78acfd0230 | 1 | lvmdriver-1 | false | False | |

We successfully created a backup and restored it in cinder, as shown above.

Here are the cinder.conf changes under /etc/cinder:

backup_swift_url = http://10.0.2.15:8080/v1/AUTH_

default_volume_type = lvmdriver-1

enabled_backends = lvmdriver-1

backup_driver = cinder.backup.drivers.swift

 

 

backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_auth = per_user
backup_swift_auth_version = 1
backup_swift_user = <None>
backup_swift_key = <None>
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib
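After editing cinder.conf, restart the cinder backup service so the settings take effect; a hedged sketch for the two common setups:

# devstack: reconnect to the stack screen session and restart the c-bak window
screen -r stack
# packaged install:
sudo service cinder-backup restart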

 

Note: If you are on Juno, use the URL below; this was fixed in later releases.

backup_swift_url = http://localhost:8080/v1/AUTH_

TryStack – Play Around with OpenStack

Welcome, In this article we are going to see how to play with Trystack – The Easiest Way To Try Out OpenStack.

OpenStack is an open-source cloud computing platform, primarily used for deploying an infrastructure-as-a-service (IaaS) solution like Amazon Web Services (AWS). In other words, you can make your own AWS by using OpenStack. If you want to try out OpenStack, TryStack is the easiest and free way to do it.


In order to try OpenStack on TryStack, you must register by joining the TryStack Facebook Group. Acceptance into the group takes a couple of days because it is approved manually. After you have been accepted into the group, you can log in to TryStack.


Overview: What will we do?

In this post, I will show you how to run an OpenStack instance. The instance will be accessible through the internet (it will have a public IP address). The final topology will look like this:

[Image: final network topology]

As you see from the image above, the instance will be connected to a local network, and the local network will be connected to the internet.

 

Step 1: Create Network

Network? Yes, the network here is our own local network, so your instances will not be mixed up with the others. You can imagine this as your own LAN (Local Area Network) in the cloud.

  1. Go to Network > Networks and then click Create Network.
  2. In Network tab, fill Network Name for example internal and then click Next.
  3. In Subnet tab,
    1. Fill Network Address with an appropriate CIDR, for example 192.168.1.0/24. Use a private network CIDR block as the best practice.
    2. Select IP Version with appropriate IP version, in this case IPv4.
    3. Click Next.
  4. In Subnet Details tab, fill DNS Name Servers with 8.8.8.8 (Google DNS) and then click Create. (A CLI equivalent is sketched after this list.)
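For reference, a hedged CLI equivalent of the steps above (classic neutron client; names as in the example):

neutron net-create internal
neutron subnet-create internal 192.168.1.0/24 --name internal-subnet --dns-nameserver 8.8.8.8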

Step 2: Create Instance

Now, we will create an instance. The instance is a virtual machine in the cloud, like AWS EC2. You need the instance to connect to the network that we just created in the previous step.

  1. Go to Compute > Instances and then click Launch Instance.
  2. In Details tab,
    1. Fill Instance Name, for example Ubuntu 1.
    2. Select Flavor, for example m1.medium.
    3. Fill Instance Count with 1.
    4. Select Instance Boot Source with Boot from Image.
    5. Select Image Name with Ubuntu 14.04 amd64 (243.7 MB) if you want to install Ubuntu 14.04 in your virtual machine.
  3. In Access & Security tab,
    1. Click [+] button of Key Pair to import key pair. This key pair is a public and private key that we will use to connect to the instance from our machine.
    2. In Import Key Pair dialog,
      1. Fill Key Pair Name with your machine name (for example Edward-Key).
      2. Fill Public Key with your SSH public key (usually is in ~/.ssh/id_rsa.pub). See description in Import Key Pair dialog box for more information. If you are using Windows, you can use Puttygen to generate key pair.
      3. Click Import key pair.
    3. In Security Groups, mark/check default.
  4. In Networking tab,
    1. In Selected Networks, select the network that was created in Step 1, for example internal.
  5. Click Launch.
  6. If you want to create multiple instances, you can repeat steps 1-5. I created one more instance with instance name Ubuntu 2.

Step 3: Create Router

I guess you already know what a router is. In step 1, we created our network, but it is isolated. It doesn't connect to the internet. To give our network an internet connection, we need a router running as the gateway to the internet.

  1. Go to Network > Routers and then click Create Router.
  2. Fill Router Name for example router1 and then click Create router.
  3. Click your router name link, for example router1, to open the Router Details page.
  4. Click Set Gateway button in upper right:
    1. Select External networks with external.
    2. Then click OK.
  5. Click Add Interface button.
    1. Select Subnet with the network that you created in Step 1.
    2. Click Add interface.
  6. Go to Network > Network Topology. You will see the network topology. In the example, there are two networks, external and internal, bridged by a router. There are instances joined to the internal network.

Step 4: Configure Floating IP Address

A floating IP address is a public IP address. It makes your instance accessible from the internet. When you launch your instance, it will have a private network IP but no public IP. In OpenStack, public IPs are collected in a pool and managed by the admin (in our case, TryStack). You need to request a public (floating) IP address to be assigned to your instance. (A CLI equivalent is sketched after the list below.)

  1. Go to Compute > Instance.
  2. In one of your instances, click More > Associate Floating IP.
  3. In IP Address, click Plus [+].
  4. Select Pool to external and then click Allocate IP.
  5. Click Associate.
  6. Now you will get a public IP, e.g. 8.21.28.120, for your instance.
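A hedged CLI equivalent (nova client commands of that era; the instance name and IP are placeholders):

nova floating-ip-create external
nova floating-ip-associate <instance-name> <floating-ip>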

Step 5: Configure Access & Security

OpenStack has a feature like a firewall that can whitelist/blacklist your incoming and outgoing connections. It is called a Security Group.

  1. Go to Compute > Access & Security and then open Security Groups tab.
  2. In default row, click Manage Rules.
  3. Click Add Rule, choose the ALL ICMP rule to enable ping into your instance, and then click Add.
  4. Click Add Rule, choose HTTP rule to open HTTP port (port 80), and then click Add.
  5. Click Add Rule, choose SSH rule to open SSH port (port 22), and then click Add.
  6. You can open other ports by creating new rules (a CLI sketch follows this list).
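A hedged CLI equivalent for the three rules above (nova client syntax):

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 80 80 0.0.0.0/0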

Step 6: SSH to Your Instance

Now you can SSH to your instances using the floating IP address that you got in step 4. If you are using the Ubuntu image, the SSH user will be ubuntu.
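For example, with the floating IP from step 4 and the key pair imported in step 2:

ssh ubuntu@8.21.28.120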

That's all, you can now play around!! Enjoy!! Cheers!!