OpenStack – Command Line Cheat Sheet – 1

Monitor OpenStack Service Logs

Here are some quick and dirty ways to watch the necessary logs on the controller and compute nodes.

Ubuntu

Controller logs:

tail -f /var/log/{ceilometer,cinder,glance,keystone,mysql,neutron,nova,openvswitch,rabbitmq}/*.log /var/log/syslog

Compute logs:

tail -f /var/log/{ceilometer,neutron,nova,openvswitch}/*.log /var/log/syslog

CentOS/RHEL

Controller logs:

tail -f /var/log/{ceilometer,cinder,glance,keystone,mysql,neutron,nova,openvswitch,rabbitmq}/*.log /var/log/messages

Compute logs:

tail -f /var/log/{ceilometer,neutron,nova,openvswitch}/*.log /var/log/messages

Keystone

See Status of Keystone Services

keystone service-list

List All Keystone Endpoints

keystone endpoint-list

Glance

List Current Glance Images

glance image-list

Upload Images to Glance

glance image-create --name <IMAGE-NAME> --is-public <true OR false> --container-format <CONTAINER-FORMAT> --disk-format <DISK-FORMAT> --copy-from <URI>

Example 1: Upload the cirros-0.3.2-x86_64 OpenStack cloud image:

glance image-create --name cirros-0.3.2-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

Example 2: Upload the ubuntu-server-12.04 OpenStack cloud image:

glance image-create --name ubuntu-server-12.04 --is-public true --container-format bare --disk-format qcow2 --copy-from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Example 3: Upload the centos-6.5 OpenStack cloud image:

glance image-create --name centos-6.5-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://public.thornelabs.net/centos-6.5-20140117.0.x86_64.qcow2

Nova

See Status of Nova Services

nova service-list

List Current Nova Instances

nova list

Boot an Instance

Boot an instance assigned to a particular Neutron Network:

nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic net-id=<NET-ID> --availability-zone <AVAILABILITY-ZONE-NAME>

Boot an instance assigned to a particular Neutron Port:

nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic port-id=<PORT-ID> --availability-zone <AVAILABILITY-ZONE-NAME>

Create a Flavor

nova flavor-create <FLAVOR-NAME> <FLAVOR-ID> <RAM-IN-MB> <ROOT-DISK-IN-GB> <VCPU>

For example, create a new flavor called m1.custom with an ID of 6, 512 MB of RAM, 5 GB of root disk space, and 1 vCPU:

nova flavor-create m1.custom 6 512 5 1

Create Nova Security Group

This command is only used if you are using nova-network.

nova secgroup-create <NAME> <DESCRIPTION>

Add Rules to Nova Security Group

This command is only used if you are using nova-network.

nova secgroup-add-rule <NAME> <PROTOCOL> <BEGINNING-PORT> <ENDING-PORT> <SOURCE-SUBNET>

Example 1: Add a rule to the default Nova Security Group to allow SSH access to instances:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Example 2: Add a rule to the default Nova Security Group to allow ICMP communication to instances:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Apply Nova Security Group to Instance

This command is only used if you are using nova-network.

nova add-secgroup <NOVA-ID> <SECURITY-GROUP-ID>

Create Nova Key SSH Pair

nova keypair-add --pub_key <SSH-PUBLIC-KEY-FILE-NAME> <NAME-OF-KEY>
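
For example, assuming your public key sits at the default ~/.ssh/id_rsa.pub path (the path and the key name my-key are illustrative):

```
nova keypair-add --pub_key ~/.ssh/id_rsa.pub my-key
```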

Create Nova Floating IP Pool

nova-manage floating create <SUBNET-NAME> <NAME-OF-POOL>

Create Host Aggregate With Availability Zone

nova aggregate-create <HOST-AGG-NAME> <AVAIL-ZONE-NAME>

Add Compute Host to Host Aggregate

nova aggregate-add-host <HOST-AGG-ID> <COMPUTE-HOST-NAME>

Live Migrate an Instance

If your compute hosts use shared storage:

nova live-migration <INSTANCE-ID> <COMPUTE-HOST-ID>

If your compute hosts do not use shared storage:

nova live-migration --block-migrate <INSTANCE-ID> <COMPUTE-HOST-ID>

Attach Cinder Volume to Instance

Before running this command, you will need to have already created a Cinder Volume.

nova volume-attach <INSTANCE-ID> <CINDER-VOLUME-ID> <DEVICE (use auto)>

Create and Boot an Instance from a Cinder Volume

Before running this command, you will need to have already created a Cinder Volume from a Glance Image.

nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-VOLUME-ID>:::0 <INSTANCE-NAME>
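
A sketch of the full workflow, assuming a 10 GB bootable volume built from a Glance image (the size and volume name are illustrative):

```
# 1. Create a bootable volume from a Glance image
cinder create --image-id <GLANCE-IMAGE-ID> --display-name boot-vol 10

# 2. Once the volume reaches the "available" state, boot from it
nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-VOLUME-ID>:::0 <INSTANCE-NAME>
```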

Create and Boot an Instance from a Cinder Volume Snapshot

Before running this command, you must have already created a Cinder Volume Snapshot:

nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-SNAPSHOT-ID>:snap::0 <INSTANCE-NAME>

Reset the State of an Instance

If an instance gets stuck in a delete state, the instance state can be reset then deleted:

nova reset-state <INSTANCE-ID>

nova delete <INSTANCE-ID>

You can also use the --active command-line switch to force an instance back into an active state:

nova reset-state --active <INSTANCE-ID>

Get Direct URL to Instance Console Using novnc

nova get-vnc-console <INSTANCE-ID> novnc

Get Direct URL to Instance Console Using xvpvnc

nova get-vnc-console <INSTANCE-ID> xvpvnc

Set OpenStack Project Nova Quota

The following command will set an unlimited quota for a particular OpenStack Project:

nova quota-update --instances -1 --cores -1 --ram -1 --floating-ips -1 --fixed-ips -1 --metadata-items -1 --injected-files -1 --injected-file-content-bytes -1 --injected-file-path-bytes -1 --key-pairs -1 --security-groups -1 --security-group-rules -1 --server-groups -1 --server-group-members -1 <PROJECT ID>

Cinder

See Status of Cinder Services

cinder service-list

List Current Cinder Volumes

cinder list

Create Cinder Volume

cinder create --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>

Create Cinder Volume from Glance Image

cinder create --image-id <GLANCE-IMAGE-ID> --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>

Create Snapshot of Cinder Volume

cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID>

If the Cinder Volume is not available, i.e. it is currently attached to an instance, you must pass the --force flag:

cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID> --force True

Neutron

See Status of Neutron Services

neutron agent-list

List Current Neutron Networks

neutron net-list

List Current Neutron Subnets

neutron subnet-list

Rename Neutron Network

neutron net-update <CURRENT-NET-NAME> --name <NEW-NET-NAME>

Rename Neutron Subnet

neutron subnet-update <CURRENT-SUBNET-NAME> --name <NEW-SUBNET-NAME>

Create Neutron Security Group

neutron security-group-create <SEC-GROUP-NAME>

Add Rules to Neutron Security Group

neutron security-group-rule-create --direction <ingress OR egress> --ethertype <IPv4 or IPv6> --protocol <PROTOCOL> --port-range-min <PORT-NUMBER> --port-range-max <PORT-NUMBER> <SEC-GROUP-NAME>

Example 1: Add a rule to the default Neutron Security Group to allow SSH access to instances:

neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 default

Example 2: Add a rule to the default Neutron Security Group to allow ICMP communication to instances:

neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp default

Create a Neutron Tenant Network

neutron net-create <NET-NAME>

neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>
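
For example, a tenant network named net1 with a 192.168.2.0/24 subnet (the names and CIDR are illustrative):

```
neutron net-create net1
neutron subnet-create --name subnet1 net1 192.168.2.0/24
```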

Create a Neutron Provider Network

neutron net-create <NET-NAME> --provider:physical_network=<LABEL-PHYSICAL-INTERFACE> --provider:network_type=<flat or vlan> --shared --router:external=True

neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>  --gateway <GATEWAY-IP> --allocation-pool start=<STARTING-IP>,end=<ENDING-IP> --dns-nameservers list=true <DNS-IP-1 DNS-IP-2>

Create a Neutron Router

neutron router-create <ROUTER-NAME>

Set Default Gateway on a Neutron Router

neutron router-gateway-set <ROUTER-NAME> <NET-NAME>

Attach a Tenant Network to a Neutron Router

neutron router-interface-add <ROUTER-NAME> <SUBNET-NAME>

Create a Neutron Floating IP Pool

If you need N floating IP addresses, run this command N times:

neutron floatingip-create <NET-NAME>

Assign a Neutron Floating IP Address to an Instance

neutron floatingip-associate <FLOATING-IP-ID> <NEUTRON-PORT-ID>

Create a Neutron Port with a Fixed IP Address

neutron port-create <NET-NAME> --fixed-ip ip_address=<IP-ADDRESS>

Set OpenStack Project Neutron Quota

The following command will allow an unlimited number of Neutron Ports to be created within a particular OpenStack Project:

neutron quota-update --tenant-id=<PROJECT ID> --port -1

References

Appendix A. OpenStack command-line interface cheat sheet

Openstack – Show usage statistics for hosts and instances

Hi All,

Here we are going to see how to calculate resource usage for individual tenants and instances.

Show host usage statistics

The following examples show the host usage statistics for a host called devstack.

  • List the hosts and the nova-related services that run on them:

    $ nova host-list
    +-----------+-------------+----------+
    | host_name | service     | zone     |
    +-----------+-------------+----------+
    | devstack  | conductor   | internal |
    | devstack  | compute     | nova     |
    | devstack  | cert        | internal |
    | devstack  | network     | internal |
    | devstack  | scheduler   | internal |
    | devstack  | consoleauth | internal |
    +-----------+-------------+----------+
  • Get a summary of resource usage of all of the instances running on the host:

    $ nova host-describe devstack
    +----------+----------------------------------+-----+-----------+---------+
    | HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
    +----------+----------------------------------+-----+-----------+---------+
    | devstack | (total)                          | 2   | 4003      | 157     |
    | devstack | (used_now)                       | 3   | 5120      | 40      |
    | devstack | (used_max)                       | 3   | 4608      | 40      |
    | devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
    | devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
    +----------+----------------------------------+-----+-----------+---------+

    The cpu column shows the sum of the virtual CPUs for instances running on the host.

    The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the host.

    The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the host.

    The row that has the value used_now in the PROJECT column shows the sum of the resources allocated to the instances that run on the host, plus the resources allocated to the virtual machine of the host itself.

    The row that has the value used_max in the PROJECT column shows the sum of the resources allocated to the instances that run on the host.

    Note

    These values are computed by using information about the flavors of the instances that run on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host.

Show instance usage statistics

  • Get CPU, memory, I/O, and network statistics for an instance.

    1. List instances:

      $ nova list
      +----------+----------------------+--------+------------+-------------+------------------+
      | ID       | Name                 | Status | Task State | Power State | Networks         |
      +----------+----------------------+--------+------------+-------------+------------------+
      | 84c6e... | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
      | 8a995... | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
      +----------+----------------------+--------+------------+-------------+------------------+
      
    2. Get diagnostic statistics:

      $ nova diagnostics myCirrosServer
      +------------------+----------------+
      | Property         | Value          |
      +------------------+----------------+
      | vnet1_rx         | 1210744        |
      | cpu0_time        | 19624610000000 |
      | vda_read         | 0              |
      | vda_write        | 0              |
      | vda_write_req    | 0              |
      | vnet1_tx         | 863734         |
      | vnet1_tx_errors  | 0              |
      | vnet1_rx_drop    | 0              |
      | vnet1_tx_packets | 3855           |
      | vnet1_tx_drop    | 0              |
      | vnet1_rx_errors  | 0              |
      | memory           | 2097152        |
      | vnet1_rx_packets | 5485           |
      | vda_read_req     | 0              |
      | vda_errors       | -1             |
      +------------------+----------------+
      
  • Get summary statistics for each tenant:

    $ nova usage-list
    Usage from 2013-06-25 to 2013-07-24:
    +----------------------------------+-----------+--------------+-----------+---------------+
    | Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
    +----------------------------------+-----------+--------------+-----------+---------------+
    | b70d90d65e464582b6b2161cf3603ced | 1         | 344064.44    | 672.00    | 0.00          |
    | 66265572db174a7aa66eba661f58eb9e | 3         | 671626.76    | 327.94    | 6558.86       |
    +----------------------------------+-----------+--------------+-----------+---------------+

Openstack – Create and Manage Instances via CLI

SUMMARY

Instances are virtual machines that run inside the cloud. You can launch an instance from the following sources:

  • Images uploaded to the OpenStack Glance Image service (ephemeral instance).
  • An image that you have copied to a persistent volume (persistent instance).

 

GATHER DETAILS FOR INSTANCE LAUNCH

Before you can launch an instance, gather the following parameters:

  • The instance source can be an image, snapshot, or block storage volume that contains an image or snapshot.
  • A name for your instance.
  • The flavor for your instance, which defines the compute, memory, and storage capacity of nova computing instances. A flavor is an available hardware configuration for a server. It defines the size of a virtual server that can be launched.
  • Any user data files. A user data file is a special key in the metadata service that holds a file that cloud-aware applications in the guest instance can access. For example, one application that uses user data is the cloud-init system, which is an open-source package from Ubuntu that is available on various Linux distributions and that handles early initialization of a cloud instance.
  • Access and security credentials, which include one or both of the following credentials:
  • A key pair for your instance, which are SSH credentials that are injected into images when they are launched. For the key pair to be successfully injected, the image must contain the cloud-init package. Create at least one key pair for each project. If you already have generated a key pair with an external tool, you can import it into OpenStack. You can use the key pair for multiple instances that belong to that project.
  • A security group that defines which incoming network traffic is forwarded to instances. Security groups hold a set of firewall policies, known as security group rules.
  • If needed, you can assign a floating (public) IP address to a running instance.
  • You can also attach a block storage device, or volume, for persistent storage.

You can gather these parameters as follows:

 

Before you begin, source the openrc file. Then proceed as follows:

List the available flavors:
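
With the legacy nova CLI used throughout this cheat sheet, that is:

```
$ nova flavor-list
```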


Make note of the flavor ID and copy the variable to paste into the nova boot command.

List the available images:
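
Either of these typically works, depending on which clients are installed:

```
$ nova image-list
$ glance image-list
```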


Make note of the image ID and copy the variable to paste into the nova boot command.

List the available security groups:
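
Depending on whether the deployment uses nova-network or Neutron security groups:

```
$ nova secgroup-list
$ neutron security-group-list
```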


Make note of the security group ID and copy the variable to paste into the nova boot command.

List the available key pairs, and note the key pair name that you use for SSH access:
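
With the nova CLI:

```
$ nova keypair-list
```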


Make note of the keypair name and copy the variable to paste into the nova boot command.

List the available networks, and note the network name that you will use for the instance:
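
With the neutron CLI:

```
$ neutron net-list
```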


Make note of the network ID and copy the variable to paste into the nova boot command.

LAUNCH INSTANCE

After you gather required parameters, run the following command to launch an instance. Specify the server name, flavor ID, and image ID:

For example, using the parameters above we executed the command as follows:
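
The exact values vary per deployment; with the IDs gathered above substituted in, the command takes this shape (the instance name myInstance is illustrative):

```
$ nova boot myInstance --flavor <FLAVOR-ID> --image <IMAGE-ID> --security-groups <SEC-GROUP-NAME> --key-name <KEY-NAME> --nic net-id=<NET-ID>
```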

A status of BUILD indicates that the instance has started, but is not yet online.

A status of ACTIVE indicates that the instance is active.

Once the nova boot command has been executed you can view your instances using the following command:
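
```
$ nova list
```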

And to see the details of the instance:
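
```
$ nova show <INSTANCE-ID>
```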

Lastly, if you wish to delete the instance you can execute the following command:
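
```
$ nova delete <INSTANCE-ID>
```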

Openstack – creating volume stuck with “creating” status

Hi all,

 

Let me explain a little bit more on the workflow of creating a volume:

1) User sends request to Cinder API service;

2) API creates a DB entry for the volume and marks its status to ‘creating’ (https://github.com/openstack/cinder/blob/stable/havana/cinder/volume/flows/create_volume/__init__.py#L545) and sends a RPC message to scheduler;

3) scheduler picks up the message and makes placement decision and if a back-end is available, it sends the request via RPC to volume service;

4) volume service picks up the message to perform the real job creating a volume for user.

There are multiple cases in which a volume’s status can be stuck in ‘creating’:

a) something went wrong while the RPC message was being processed by the scheduler, e.g. the scheduler service is down (see bug https://review.openstack.org/#/c/64014/ – the message is lost if the scheduler service goes down while processing it);

b) something went wrong AFTER the back-end was chosen: the scheduler successfully sent the message to the target back-end, but the message was never picked up by the target volume service, or an unhandled exception occurred while the volume service was handling the request.

If the volume creation made it through the API and scheduler services but is stuck at the volume service, restart the Cinder services:

service openstack-cinder-api restart
service openstack-cinder-backup restart
service openstack-cinder-scheduler restart
service openstack-cinder-volume restart

Thanks !!

Openstack – Cannot connect to instance

SUMMARY

An admin or user has successfully created an instance; however, they are unable to gain access to it via SSH or RDP.

 

HOW TO DETERMINE ROOT CAUSE AND SOLVE THE PROBLEM

Generally, this problem results from a misconfiguration of the security group associated with the instance, or from trying to access the instance remotely without a floating IP assigned. First, verify that a floating IP has been assigned to the instance if it is to be accessed remotely. You can check this on the Instances page in Horizon. If you do not see a floating IP assigned, add one by opening the Actions drop-down menu and selecting "Associate Floating IP".

Once a floating IP has been verified, the next step is to verify that the security group associated with the instance(s) contains the following security group rules:

If the instance is Linux based, you will want to make sure port 22 is open for SSH. If the instance is Windows based, you will want to open port 3389 for RDP. For example:
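
A minimal sketch of those rules with the neutron CLI, assuming the instance uses the default security group:

```
# SSH access for Linux guests
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 default

# RDP access for Windows guests
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 3389 --port-range-max 3389 default
```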

If these two rules are open and the connectivity issue persists, verify whether the instance requires a key pair for access. You can check this on the Instances page by seeing whether the Key Pair field is populated. For example:

If the Key Pair field is populated (as it is in this case), you will need to pass the private key to SSH when connecting to the instance:

 

root@pandy-dev: ~# ssh -i test.pem root@<server IP/Hostname>

Note that the test.pem file is named exactly like the populated field in the example image above. You must use the matching key pair; otherwise you will encounter authentication issues with the instance.
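
Also note that ssh refuses identity files with permissive modes, so it is worth locking the key file down first; a quick sketch (test.pem here is an empty stand-in for the real key file):

```shell
touch test.pem           # stand-in for the downloaded private key file
chmod 400 test.pem       # owner read-only, as ssh requires for identity files
stat -c %a test.pem      # shows 400 on GNU stat
```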

If the issue still persists at this point, a number of things might be causing it. It would be best to submit a support ticket and provide the output of the following from all of your controller nodes:

#pcs status

#nova service-list

#neutron agent-list

#rabbitmqctl status

#nova show <INSTANCE ID>

#nova console-log <INSTANCE ID>

#rabbitmqctl list_queues | grep -v "0$"

#/var/log/nova-all.log

#/var/log/neutron-all.log

#/var/log/rabbitmq/rabbit@node-x.log

Openstack – CREATE A NETWORK AND SUBNET VIA THE CLI

SUMMARY

The OpenStack Networking service provides a scalable system for managing the network connectivity within an OpenStack cloud deployment. It can easily and quickly react to changing network needs (for example, creating and assigning new IP addresses).

Networking in OpenStack is complex. This section provides the basic instructions for creating a network and a router. For detailed information about managing networks, refer to the OpenStack Cloud Administrator Guide.

CREATE A NETWORK AND SUBNET VIA THE CLI

  1. Create Network:
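
Matching the example values discussed in the subnet section below, the network is created with:

```
$ neutron net-create net1
```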

  2. Create a Subnet:
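
Matching the positional and optional parameters described below:

```
$ neutron subnet-create net1 192.168.2.0/24 --name subnet1
```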

The subnet-create command has the following positional and optional parameters:

  • The name or ID of the network to which the subnet belongs.
  • In this example, net1 is a positional argument that specifies the network name.
  • The CIDR of the subnet.
  • In this example, 192.168.2.0/24 is a positional argument that specifies the CIDR.
  • The subnet name, which is optional.
  • In this example, --name subnet1 specifies the name of the subnet.

For information and examples on more advanced use of neutron’s subnet subcommand, see the Cloud Administrator Guide.

CREATE A ROUTER VIA THE CLI

  1. Create a Router:
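
The router name router1 here is illustrative:

```
$ neutron router-create router1
```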

  2. Link the router to the external provider network:
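
Using the illustrative router name router1 and the name of your external provider network:

```
$ neutron router-gateway-set router1 <EXTERNAL-NET-NAME>
```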

 

Link the router to the subnet:
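
Again with illustrative names (router1, subnet1):

```
$ neutron router-interface-add router1 subnet1
```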

Install Packages to Run Scripts from Cygwin on VirtualBox

Hi All,

 

Today we are going to talk about the packages required to run a shell script (or any script) against VirtualBox via Cygwin.

If we want to create a new VM in VirtualBox, we can provision it via the Oracle VirtualBox UI, or create it via VirtualBox shell scripting. Assume you have already prepared a VirtualBox script (how to prepare one will be covered in a later post).

There are a few packages required to run the script:

  • expect, openssh, and procps 

Let's see how to install expect, openssh/ssh, and procps.

Installation of Cygwin, expect and ssh
1. Download Cygwin from http://www.cygwin.com/install.html
2. Run the setup.exe file and select the install-from-Internet option
3. Select the desired root directory and local package directory
4. Select your Internet connection type as Direct Connection
5. From the download sites list, choose an available download site
6. Select the packages expect, openssh, and procps to install
7. To install expect and procps, look under the TCL category
8. To install ssh, search by the package name openssh and look under the Net category
9. Click Install
10. Once it is installed, enter the expect command at the prompt; it should display expect1.1

 

Automating login to a headend using the expect tool installed in Cygwin
1. Create a file named sshlogin.exp and give it execute permissions
2. Add the following code snippet to the file:

#!/usr/bin/expect

# Usage: ./sshlogin.exp <ip> <user> <password>
set timeout 20

set ip [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]

spawn ssh "$user@$ip"
expect "Password:"
send "$password\r"

send "cd /export/home/dncsop/Automation_Resources/\r"
send "ls\r"
interact

exit 0

3. Execute the script as: ./sshlogin.exp <ip-address> <username> <password>
e.g.: ./sshlogin.exp 10.78.203.115 root password

 

Now that the packages are installed, to run the VirtualBox script (Desktop/Virtualbox/launch.sh):

Open Cygwin and run

cd /cygdrive/c/Users/{name}/Desktop/virtualbox

sh launch.sh

Now procps will check the free resources (RAM/disk utilization) as specified in the VirtualBox script, and the script will spin up VM instances inside Oracle VirtualBox.

 

Thanks for reading !!