OpenStack – Command Line Cheat Sheet – 1

Monitor OpenStack Service Logs

Here are some quick and dirty ways to watch the necessary logs on the controller and compute nodes.

Ubuntu

Controller logs:

tail -f /var/log/{ceilometer,cinder,glance,keystone,mysql,neutron,nova,openvswitch,rabbitmq}/*.log /var/log/syslog

Compute logs:

tail -f /var/log/{ceilometer,neutron,nova,openvswitch}/*.log /var/log/syslog

CentOS/RHEL

Controller logs:

tail -f /var/log/{ceilometer,cinder,glance,keystone,mysql,neutron,nova,openvswitch,rabbitmq}/*.log /var/log/messages

Compute logs:

tail -f /var/log/{ceilometer,neutron,nova,openvswitch}/*.log /var/log/messages
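The four tail commands above differ only in the service list and the syslog path, so they can be generated from a small helper. This function is an illustration (not part of the original post); it only prints the command, which you can then run on the matching node:

```shell
# Build the log-tailing command for a given node role. The service lists
# mirror the commands above; adjust them to what is installed on your nodes.
logtail_cmd() {
  local role=$1 services syslog=/var/log/syslog   # use /var/log/messages on CentOS/RHEL
  case $role in
    controller) services="ceilometer,cinder,glance,keystone,mysql,neutron,nova,openvswitch,rabbitmq" ;;
    compute)    services="ceilometer,neutron,nova,openvswitch" ;;
  esac
  echo "tail -f /var/log/{$services}/*.log $syslog"
}

logtail_cmd compute
```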

Keystone

See Status of Keystone Services

keystone service-list

List All Keystone Endpoints

keystone endpoint-list

Glance

List Current Glance Images

glance image-list

Upload Images to Glance

glance image-create --name <IMAGE-NAME> --is-public <true OR false> --container-format <CONTAINER-FORMAT> --disk-format <DISK-FORMAT> --copy-from <URI>

Example 1: Upload the cirros-0.3.2-x86_64 OpenStack cloud image:

glance image-create --name cirros-0.3.2-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

Example 2: Upload the ubuntu-server-12.04 OpenStack cloud image:

glance image-create --name ubuntu-server-12.04 --is-public true --container-format bare --disk-format qcow2 --copy-from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Example 3: Upload the centos-6.5 OpenStack cloud image:

glance image-create --name centos-6.5-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://public.thornelabs.net/centos-6.5-20140117.0.x86_64.qcow2
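When several images need uploading, the three examples above can be driven from a loop. This is a sketch, not from the original post: the run() wrapper only prints each command so the loop is safe to try anywhere; drop the echo to execute against a real Glance endpoint.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# name/URL pairs taken from the examples above.
while read -r name url; do
  run glance image-create --name "$name" --is-public true \
      --container-format bare --disk-format qcow2 --copy-from "$url"
done <<'EOF'
cirros-0.3.2-x86_64 http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
ubuntu-server-12.04 http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
EOF
```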

Nova

See Status of Nova Services

nova service-list

List Current Nova Instances

nova list

Boot an Instance

Boot an instance assigned to a particular Neutron Network:

nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic net-id=<NET-ID> --availability-zone <AVAILABILITY-ZONE-NAME>

Boot an instance assigned to a particular Neutron Port:

nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic port-id=<PORT-ID> --availability-zone <AVAILABILITY-ZONE-NAME>

Create a Flavor

nova flavor-create <FLAVOR-NAME> <FLAVOR-ID> <RAM-IN-MB> <ROOT-DISK-IN-GB> <VCPU>

For example, create a new flavor called m1.custom with an ID of 6, 512 MB of RAM, 5 GB of root disk space, and 1 vCPU:

nova flavor-create m1.custom 6 512 5 1

Create Nova Security Group

This command is only used if you are using nova-network.

nova secgroup-create <NAME> <DESCRIPTION>

Add Rules to Nova Security Group

These commands are only used if you are using nova-network.

nova secgroup-add-rule <NAME> <PROTOCOL> <BEGINNING-PORT> <ENDING-PORT> <SOURCE-SUBNET>

Example 1: Add a rule to the default Nova Security Group to allow SSH access to instances:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Example 2: Add a rule to the default Nova Security Group Rule to allow ICMP communication to instances:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Apply Nova Security Group to Instance

This command is only used if you are using nova-network.

nova add-secgroup <NOVA-ID> <SECURITY-GROUP-ID>

Create Nova Key SSH Pair

nova keypair-add --pub_key <SSH-PUBLIC-KEY-FILE-NAME> <NAME-OF-KEY>

Create Nova Floating IP Pool

nova-manage floating create <SUBNET-NAME> <NAME-OF-POOL>

Create Host Aggregate With Availability Zone

nova aggregate-create <HOST-AGG-NAME> <AVAIL-ZONE-NAME>

Add Compute Host to Host Aggregate

nova aggregate-add-host <HOST-AGG-ID> <COMPUTE-HOST-NAME>

Live Migrate an Instance

If your compute hosts use shared storage:

nova live-migration <INSTANCE-ID> <COMPUTE-HOST-ID>

If your compute hosts do not use shared storage:

nova live-migration --block-migrate <INSTANCE-ID> <COMPUTE-HOST-ID>

Attach Cinder Volume to Instance

Before running this command, you will need to have already created a Cinder Volume.

nova volume-attach <INSTANCE-ID> <CINDER-VOLUME-ID> <DEVICE (use auto)>

Create and Boot an Instance from a Cinder Volume

Before running this command, you will need to have already created a Cinder Volume from a Glance Image.

nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-VOLUME-ID>:::0 <INSTANCE-NAME>

Create and Boot an Instance from a Cinder Volume Snapshot

Before running this command, you must have already created a Cinder Volume Snapshot:

nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-SNAPSHOT-ID>:snap::0 <INSTANCE-NAME>

Reset the State of an Instance

If an instance gets stuck in a delete state, the instance state can be reset then deleted:

nova reset-state <INSTANCE-ID>

nova delete <INSTANCE-ID>

You can also use the --active command-line switch to force an instance back into an active state:

nova reset-state --active <INSTANCE-ID>

Get Direct URL to Instance Console Using novnc

nova get-vnc-console <INSTANCE-ID> novnc

Get Direct URL to Instance Console Using xvpvnc

nova get-vnc-console <INSTANCE-ID> xvpvnc

Set OpenStack Project Nova Quota

The following command will set an unlimited quota for a particular OpenStack Project:

nova quota-update --instances -1 --cores -1 --ram -1 --floating-ips -1 --fixed-ips -1 --metadata-items -1 --injected-files -1 --injected-file-content-bytes -1 --injected-file-path-bytes -1 --key-pairs -1 --security-groups -1 --security-group-rules -1 --server-groups -1 --server-group-members -1 <PROJECT ID>

Cinder

See Status of Cinder Services

cinder service-list

List Current Cinder Volumes

cinder list

Create Cinder Volume

cinder create --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>

Create Cinder Volume from Glance Image

cinder create --image-id <GLANCE-IMAGE-ID> --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>

Create Snapshot of Cinder Volume

cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID>

If the Cinder Volume is not available, i.e. it is currently attached to an instance, you must pass the force flag:

cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID> --force True

Neutron

See Status of Neutron Services

neutron agent-list

List Current Neutron Networks

neutron net-list

List Current Neutron Subnets

neutron subnet-list

Rename Neutron Network

neutron net-update <CURRENT-NET-NAME> --name <NEW-NET-NAME>

Rename Neutron Subnet

neutron subnet-update <CURRENT-SUBNET-NAME> --name <NEW-SUBNET-NAME>

Create Neutron Security Group

neutron security-group-create <SEC-GROUP-NAME>

Add Rules to Neutron Security Group

neutron security-group-rule-create --direction <ingress OR egress> --ethertype <IPv4 or IPv6> --protocol <PROTOCOL> --port-range-min <PORT-NUMBER> --port-range-max <PORT-NUMBER> <SEC-GROUP-NAME>

Example 1: Add a rule to the default Neutron Security Group to allow SSH access to instances:

neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 default

Example 2: Add a rule to the default Neutron Security Group to allow ICMP communication to instances:

neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp default

Create a Neutron Tenant Network

neutron net-create <NET-NAME>

neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>

Create a Neutron Provider Network

neutron net-create <NET-NAME> --provider:physical_network=<LABEL-PHYSICAL-INTERFACE> --provider:network_type=<flat or vlan> --shared --router:external=True

neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>  --gateway <GATEWAY-IP> --allocation-pool start=<STARTING-IP>,end=<ENDING-IP> --dns-nameservers list=true <DNS-IP-1 DNS-IP-2>

Create a Neutron Router

neutron router-create <ROUTER-NAME>

Set Default Gateway on a Neutron Router

neutron router-gateway-set <ROUTER-NAME> <NET-NAME>

Attach a Tenant Network to a Neutron Router

neutron router-interface-add <ROUTER-NAME> <SUBNET-NAME>

Create a Neutron Floating IP Pool

If you need N floating IP addresses, run this command N times:

neutron floatingip-create <NET-NAME>

Assign a Neutron Floating IP Address to an Instance

neutron floatingip-associate <FLOATING-IP-ID> <NEUTRON-PORT-ID>

Create a Neutron Port with a Fixed IP Address

neutron port-create <NET-NAME> --fixed-ip ip_address=<IP-ADDRESS>

Set OpenStack Project Neutron Quota

The following command will allow an unlimited number of Neutron Ports to be created within a particular OpenStack Project:

neutron quota-update --tenant-id=<PROJECT ID> --port -1

References

Appendix A. OpenStack command-line interface cheat sheet

OpenStack – Show usage statistics for hosts and instances

Hi All,

Here we are going to see how to calculate resource usage for individual tenants and instances.

Show host usage statistics

The following examples show the host usage statistics for a host called devstack.

  • List the hosts and the nova-related services that run on them:

    $ nova host-list
    +-----------+-------------+----------+
    | host_name | service     | zone     |
    +-----------+-------------+----------+
    | devstack  | conductor   | internal |
    | devstack  | compute     | nova     |
    | devstack  | cert        | internal |
    | devstack  | network     | internal |
    | devstack  | scheduler   | internal |
    | devstack  | consoleauth | internal |
    +-----------+-------------+----------+
  • Get a summary of resource usage of all of the instances running on the host:

    $ nova host-describe devstack
    +----------+----------------------------------+-----+-----------+---------+
    | HOST     | PROJECT                          | cpu | memory_mb | disk_gb |
    +----------+----------------------------------+-----+-----------+---------+
    | devstack | (total)                          | 2   | 4003      | 157     |
    | devstack | (used_now)                       | 3   | 5120      | 40      |
    | devstack | (used_max)                       | 3   | 4608      | 40      |
    | devstack | b70d90d65e464582b6b2161cf3603ced | 1   | 512       | 0       |
    | devstack | 66265572db174a7aa66eba661f58eb9e | 2   | 4096      | 40      |
    +----------+----------------------------------+-----+-----------+---------+

    The cpu column shows the sum of the virtual CPUs for instances running on the host.

    The memory_mb column shows the sum of the memory (in MB) allocated to the instances that run on the host.

    The disk_gb column shows the sum of the root and ephemeral disk sizes (in GB) of the instances that run on the host.

    The row that has the value used_now in the PROJECT column shows the sum of the resources allocated to the instances that run on the host, plus the resources allocated to the virtual machine of the host itself.

    The row that has the value used_max in the PROJECT column shows the sum of the resources allocated to the instances that run on the host.

    Note

    These values are computed by using information about the flavors of the instances that run on the hosts. This command does not query the CPU usage, memory usage, or hard disk usage of the physical host.
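This accounting can be checked against the sample table above. A short Python sketch, with the numbers copied from the table; the 512 MB host reservation is inferred from the difference between the used_now and used_max rows:

```python
# Flavor-derived resources of the two instances in the nova host-describe
# sample: (vcpus, memory_mb, root+ephemeral disk_gb).
instances = [(1, 512, 0), (2, 4096, 40)]

# used_max is simply the column-wise sum of the instance flavors.
used_max = tuple(sum(col) for col in zip(*instances))
print(used_max)  # (3, 4608, 40) -- matches the used_max row

# used_now additionally counts resources reserved for the host itself
# (here 512 MB of memory, as the table implies).
host_reserved_mb = 512
used_now_mb = used_max[1] + host_reserved_mb
print(used_now_mb)  # 5120 -- matches the used_now memory column
```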

Show instance usage statistics

  • Get CPU, memory, I/O, and network statistics for an instance.

    1. List instances:

      $ nova list
      +----------+----------------------+--------+------------+-------------+------------------+
      | ID       | Name                 | Status | Task State | Power State | Networks         |
      +----------+----------------------+--------+------------+-------------+------------------+
      | 84c6e... | myCirrosServer       | ACTIVE | None       | Running     | private=10.0.0.3 |
      | 8a995... | myInstanceFromVolume | ACTIVE | None       | Running     | private=10.0.0.4 |
      +----------+----------------------+--------+------------+-------------+------------------+
      
    2. Get diagnostic statistics:

      $ nova diagnostics myCirrosServer
      +------------------+----------------+
      | Property         | Value          |
      +------------------+----------------+
      | vnet1_rx         | 1210744        |
      | cpu0_time        | 19624610000000 |
      | vda_read         | 0              |
      | vda_write        | 0              |
      | vda_write_req    | 0              |
      | vnet1_tx         | 863734         |
      | vnet1_tx_errors  | 0              |
      | vnet1_rx_drop    | 0              |
      | vnet1_tx_packets | 3855           |
      | vnet1_tx_drop    | 0              |
      | vnet1_rx_errors  | 0              |
      | memory           | 2097152        |
      | vnet1_rx_packets | 5485           |
      | vda_read_req     | 0              |
      | vda_errors       | -1             |
      +------------------+----------------+
      
  • Get summary statistics for each tenant:

    $ nova usage-list
    Usage from 2013-06-25 to 2013-07-24:
    +----------------------------------+-----------+--------------+-----------+---------------+
    | Tenant ID                        | Instances | RAM MB-Hours | CPU Hours | Disk GB-Hours |
    +----------------------------------+-----------+--------------+-----------+---------------+
    | b70d90d65e464582b6b2161cf3603ced | 1         | 344064.44    | 672.00    | 0.00          |
    | 66265572db174a7aa66eba661f58eb9e | 3         | 671626.76    | 327.94    | 6558.86       |
    +----------------------------------+-----------+--------------+-----------+---------------+
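These columns are flavor size multiplied by instance uptime. A quick sanity check in Python against the first tenant row, assuming a single 512 MB, 1-vCPU instance running for the full ~672-hour window:

```python
ram_mb, vcpus, hours = 512, 1, 672.0

print(ram_mb * hours)  # 344064.0 -- close to the 344064.44 RAM MB-Hours reported
print(vcpus * hours)   # 672.0    -- matches the CPU Hours column
```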

OpenStack – Create and Manage Instances via CLI

SUMMARY

Instances are virtual machines that run inside the cloud. You can launch an instance from the following sources:

  • Images uploaded to the OpenStack Glance Image service (ephemeral instance).
  • An image that you have copied to a persistent volume (persistent instance).

 

GATHER DETAILS FOR INSTANCE LAUNCH

Before you can launch an instance, gather the following parameters:

  • The instance source can be an image, snapshot, or block storage volume that contains an image or snapshot.
  • A name for your instance.
  • The flavor for your instance, which defines the compute, memory, and storage capacity of nova computing instances. A flavor is an available hardware configuration for a server. It defines the size of a virtual server that can be launched.
  • Any user data files. A user data file is a special key in the metadata service that holds a file that cloud-aware applications in the guest instance can access. For example, one application that uses user data is the cloud-init system, which is an open-source package from Ubuntu that is available on various Linux distributions and that handles early initialization of a cloud instance.
  • Access and security credentials, which include one or both of the following credentials:
  • A key pair for your instance, which are SSH credentials that are injected into images when they are launched. For the key pair to be successfully injected, the image must contain the cloud-init package. Create at least one key pair for each project. If you already have generated a key pair with an external tool, you can import it into OpenStack. You can use the key pair for multiple instances that belong to that project.
  • A security group that defines which incoming network traffic is forwarded to instances. Security groups hold a set of firewall policies, known as security group rules.
  • If needed, you can assign a floating (public) IP address to a running instance.
  • You can also attach a block storage device, or volume, for persistent storage.

You can gather these parameters as follows:

 

Before you begin, source the openrc file. Then proceed as follows:

List the available flavors:


Make note of the flavor ID and copy the variable to paste into the nova boot command.

List the available images:


Make note of the image ID and copy the variable to paste into the nova boot command.

List the available security groups:


Make note of the security group ID and copy the variable to paste into the nova boot command.

List the available key pairs, and note the key pair name that you use for SSH access:


Make note of the keypair name and copy the variable to paste into the nova boot command.

List the available networks, and note the network name that you will use for the instance:


Make note of the network ID and copy the variable to paste into the nova boot command.
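The listing commands themselves did not survive in the original post. On the nova/neutron CLI of the era this cheat sheet targets they would plausibly be the following; the run() wrapper only prints each command so the sketch is safe to execute anywhere (drop the echo to run them for real):

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run nova flavor-list        # flavors
run nova image-list         # images (or: glance image-list)
run nova secgroup-list      # security groups
run nova keypair-list       # key pairs
run neutron net-list        # networks
```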

LAUNCH INSTANCE

After you gather required parameters, run the following command to launch an instance. Specify the server name, flavor ID, and image ID:

For example, using the parameters above we executed the command as follows:

A status of BUILD indicates that the instance has started, but is not yet online.

A status of ACTIVE indicates that the instance is active.

Once the nova boot command has been executed, you can view your instances using the following command:

And to see the details of the instance:

Lastly, if you wish to delete the instance you can execute the following command:
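The example commands for this section were also lost from the original post. A sketch of the full lifecycle using the same dry-run pattern; the instance name myFirstInstance, the flavor ID, and the angle-bracket placeholders are illustrative assumptions:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# Boot an instance (substitute the IDs gathered above).
run nova boot myFirstInstance --flavor 1 --image "<GLANCE-IMAGE-ID>" \
    --security-groups default --key-name mykey --nic net-id="<NET-ID>"

run nova list                      # watch the status go from BUILD to ACTIVE
run nova show myFirstInstance      # full details of the instance
run nova delete myFirstInstance    # remove it when finished
```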

OpenStack – Creating a volume stuck in "creating" status

Hi all,

 

Let me explain the workflow of creating a volume in a little more detail:

1) The user sends a request to the Cinder API service;

2) The API creates a DB entry for the volume, marks its status as 'creating' (https://github.com/openstack/cinder/blob/stable/havana/cinder/volume/flows/create_volume/__init__.py#L545) and sends an RPC message to the scheduler;

3) The scheduler picks up the message, makes a placement decision and, if a back-end is available, sends the request via RPC to the volume service;

4) The volume service picks up the message and performs the real job of creating the volume for the user.

There are multiple cases in which a volume's status can get stuck in 'creating':

a) something went wrong while the RPC message was being processed by the scheduler, e.g. the scheduler service was down (see https://review.openstack.org/#/c/64014/ – the message is lost if the scheduler service goes down while processing it);

b) something went wrong AFTER the back-end was chosen: the scheduler successfully sent the message to the target back-end, but the message was never picked up by the target volume service, or an unhandled exception occurred while the volume service was handling the request.

If the Cinder volume creation made it past the API and scheduler stages and got stuck in the volume service, restart the services:

 

Cinder service restart commands:

    service openstack-cinder-api restart
    service openstack-cinder-backup restart
    service openstack-cinder-scheduler restart
    service openstack-cinder-volume restart
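If the volume is still stuck after the restarts, the usual way out on Havana-era cinderclient is to force its state and then delete it. This is a sketch, not from the original post; the run() wrapper only prints the commands, and the volume ID is a placeholder:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

VOLUME_ID="<STUCK-VOLUME-ID>"   # placeholder: substitute the real ID from `cinder list`
run cinder reset-state --state error "$VOLUME_ID"
run cinder delete "$VOLUME_ID"
```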

Thanks !!

OpenStack – Cannot connect to instance

SUMMARY

Admin/User has successfully created an instance; however, they are unable to gain access to it via SSH or RDP.

 

HOW TO DETERMINE ROOT CAUSE AND SOLVE THE PROBLEM

Generally, this problem is the result of a misconfigured security group associated with the instance the customer launched, or of the customer trying to access the instance remotely without a floating IP assigned. The first thing to do is verify that a floating IP has been assigned if the instance is to be accessed remotely. You can verify this on the Instances page in Horizon. If you do not see a floating IP assigned, add one by opening the Actions column drop-down menu and selecting "Associate Floating IP".

Once a floating IP has been verified, the next step is to verify that the security group associated with the instance(s) contains the following rules:

Additionally, if the instance is Linux based you will want to make sure port 22 is open for SSH; if the instance is Windows based, open port 3389 for RDP. For example:
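The rule listing this section referred to did not survive. Using the Neutron commands shown earlier in this cheat sheet, opening SSH, RDP, and ICMP on the default group would look like this (the run() wrapper only prints the commands; drop the echo to apply them):

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# SSH (Linux guests)
run neutron security-group-rule-create --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 22 --port-range-max 22 default
# RDP (Windows guests)
run neutron security-group-rule-create --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 3389 --port-range-max 3389 default
# ICMP, so ping works while debugging
run neutron security-group-rule-create --direction ingress --ethertype IPv4 \
    --protocol icmp default
```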

If these two rules are open and the connectivity issue persists, the next step is to verify whether the instance requires a key pair for access. You can check this by going to the Instances page and seeing whether the Key Pair field is populated. For example:

If the Key Pair field is populated (as it is in this case), you will need to access your instance by passing the matching private key to SSH with the -i flag:

 

root@pandy-dev:~# ssh -i test.pem root@<server IP/Hostname>

Note that the test.pem file is named exactly like the populated field in the example image above. You must use the same key pair; otherwise you will encounter authentication issues with the instance.

If the issue still persists, a number of things might be causing it. At this point it would be best to submit a support ticket and provide the output of the following commands and logs from all of your controller nodes:

#pcs status

#nova service-list

#neutron agent-list

#rabbitmqctl status

#nova show <INSTANCE ID>

#nova console-log <INSTANCE ID>

#rabbitmqctl list_queues | grep -v "0$"

#/var/log/nova-all.log

#/var/log/neutron-all.log

#/var/log/rabbitmq/rabbit@node-x.log

OpenStack – CREATE A NETWORK AND SUBNET VIA THE CLI

SUMMARY

The OpenStack Networking service provides a scalable system for managing the network connectivity within an OpenStack cloud deployment. It can easily and quickly react to changing network needs (for example, creating and assigning new IP addresses).

Networking in OpenStack is complex. This section provides the basic instructions for creating a network and a router. For detailed information about managing networks, refer to the OpenStack Cloud Administrator Guide.

CREATE A NETWORK AND SUBNET VIA THE CLI

  1. Create Network:

  2. Create a Subnet:

The subnet-create command has the following positional and optional parameters:

  • The name or ID of the network to which the subnet belongs.
  • In this example, net1 is a positional argument that specifies the network name.
  • The CIDR of the subnet.
  • In this example, 192.168.2.0/24 is a positional argument that specifies the CIDR.
  • The subnet name, which is optional.
  • In this example, --name subnet1 specifies the name of the subnet.
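The commands themselves were dropped from the original post; reconstructed here from the parameter description above (net1, 192.168.2.0/24, subnet1). The run() wrapper only prints the commands; drop the echo to run them against a real cloud:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run neutron net-create net1
run neutron subnet-create --name subnet1 net1 192.168.2.0/24
```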

For information and examples on more advanced use of neutron’s subnet subcommand, see the Cloud Administrator Guide.

CREATE A ROUTER VIA THE CLI

  1. Create a Router:

  2. Link the router to the external provider network:

 

  3. Link the router to the subnet:
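Likewise the router commands are missing from the original; a sketch based on the neutron router commands shown earlier in this cheat sheet. The router name router1 and the external network name are illustrative placeholders, and run() only prints each command:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run neutron router-create router1
run neutron router-gateway-set router1 "<EXTERNAL-NET-NAME>"
run neutron router-interface-add router1 subnet1
```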

Install Packages to run script from cygwin on virtual box

Hi All,

 

Today we are going to talk about the packages required to run a shell script (or any script) on VirtualBox via Cygwin.

If you want to create a new VM in VirtualBox, you can provision it via the Oracle VirtualBox UI, or you can create it via VirtualBox shell scripting. Assume you have already prepared a VirtualBox script (how to prepare one will be covered in a future post).

There are a few packages required to run the script:

  • expect, openssh, and procps 

Let's see how to install expect, openssh/ssh, and procps.

Installation of Cygwin, expect and SSH
1. Download Cygwin from http://www.cygwin.com/install.html
2. Run the setup.exe file and select the "Install from Internet" option
3. Select the desired root directory and local package directory
4. Select your Internet connection type (Direct Connection)
5. Choose an available download site from the list
6. Select the expect, openssh, and procps packages to install
7. expect and procps are found under the TCL category
8. For ssh, search by the package name openssh under the Net category
9. Click Install
10. Once installation is complete, enter the expect command at the prompt; it should display expect1.1

 

Automating login to a headend using the expect tool installed in Cygwin
1. Create a file named sshlogin.exp and give it execute permissions
2. Update the file with the given code snippet

#!/usr/bin/expect

set timeout 20

# positional arguments: IP address, user name, password
set ip [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]

# spawn the ssh session and answer the password prompt
spawn ssh "$user@$ip"
expect "Password:"
send "$password\r"

# run a couple of commands, then hand control back to the user
send "cd /export/home/dncsop/Automation_Resources/\n"
send "ls\n"
interact

exit 0

3. Execute the script as ./sshlogin.exp ipaddress username passwd
e.g.: ./sshlogin.exp 10.78.203.115 root password

 

Now that the packages are installed, it is time to run the VirtualBox script (Desktop/Virtualbox/launch.sh).

Open Cygwin and run

cd /cygdrive/c/Users/{name}/Desktop/virtualbox

sh launch.sh

Now the procps package will check the free resources (RAM/disk usage) specified in the VirtualBox script, and the VM instances will be spun up inside Oracle VirtualBox.
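The free-space check attributed to procps above can be sketched concretely. This is an illustration, not code from the post: it parses a captured sample of `free -m` output (a live script would pipe `free -m` directly), and the 1536 MB threshold is an assumed per-VM requirement:

```shell
# Parse the "free" column of a sample `free -m` output. The sample is
# embedded in a heredoc so this sketch is self-contained and runnable.
free_mb=$(awk '/^Mem:/ {print $4}' <<'EOF'
              total        used        free
Mem:           7977        3210        4767
EOF
)

required_mb=1536   # assumed RAM needed for one VM
if [ "$free_mb" -ge "$required_mb" ]; then
  echo "enough RAM free (${free_mb} MB) to launch the VM"
fi
```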

 

Thanks for reading !!

Installing Mirantis Fuel 8.0 on VirtualBox

Hi All,

Today we are going to see how to install Mirantis Fuel 8.0 on a laptop.

 

Introduction

You can install Fuel on VirtualBox and use that to deploy a Mirantis OpenStack environment for demonstration and evaluation purposes. Mirantis provides scripts that create and configure all the VMs required for a test environment, including the Master node and Slave nodes.

Here we are going to discuss how to run Fuel and Mirantis OpenStack on VirtualBox.

Prerequisites

Running Fuel and Mirantis OpenStack on VirtualBox has a number of prerequisites and dependencies. Before proceeding with the deployment steps, please verify that you meet these requirements:

  1. Run VirtualBox on a stable host system; a 64-bit host OS is recommended. The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04, Ubuntu 12.10, Ubuntu 14.04, Fedora 19, OpenSUSE 12.2/12.3, Windows 7 x64 + Cygwin_x64, and Windows 8 x64 + Cygwin_x64.
  2. Download and install VirtualBox
  3. Download and install VirtualBox extensions.
  4. Download Mirantis VirtualBox scripts from the Downloads tab.
  5. Download the Mirantis OpenStack ISO.

If you want to run these scripts on Windows directly, you should also:

  1. Download and install Cygwin for 64-bit version of Windows.

  2. Select expect, openssh, and procps packages to install.

    To do this, search by the names of the packages required in the Select Packages dialog of the Cygwin install wizard:

    _images/procps.png

Hardware Recommendations: 8 GB+ of RAM

  • Supports 4 VMs for a Multi-node OpenStack installation (1 Master node, 1 Controller node, 1 Compute node, 1 Cinder node). The size of each VM should be reduced to 1536 MB RAM. For a dedicated Cinder node, 768 MB of RAM is enough.

or

  • Supports 5 VMs for a Multi-node with HA OpenStack installation (1 Master node, 3 combined Controller + Cinder nodes, 1 Compute node). The size of each VM should be reduced to 1280 MB RAM. This is less than the recommended amount of RAM per node for HA configurations (2048+ MB per controller) and may lead to unwanted issues.

 

Installing Using Automated Scripts

  1. Extract Mirantis VirtualBox scripts. The package should include the following:

    iso

    The directory containing the ISO image used to install Fuel. You should download the ISO from the portal to this directory or copy it into this directory after it is downloaded. If this directory contains more than one ISO file, the installation script uses the most recent one.

    config.sh

    Configuration file that allows you to specify parameters that automate the Fuel installation. For example, you can select how many virtual nodes to launch, as well as how much memory, disk, and processing to allocate for each.

    launch.sh

    This is the script you run to install Fuel. It uses the ISO image from the iso directory, creates a VM, mounts the image, and automatically installs the Fuel Master node. After installing the Master node, the script creates Slave nodes for OpenStack and boots them via PXE from the Master node. When Fuel is installed, the script gives you the IP address to use to access the Web-based UI for Fuel. Use this address to deploy your OpenStack environment.

  2. Add Mirantis OpenStack ISO to the extracted VirtualBox iso folder.

  3. Run the launch.sh script to install Fuel.

    For the Windows users:

    • Navigate to directory with the launch.sh file in Cygwin prompt, for example: cd /cygdrive/c/Users/{name}/Desktop/virtualbox

    • Use the sh {shell script} command to run a shell script in Cygwin:

      sh launch.sh
      

    The Fuel installation is complete when the VirtualBox fuel-master node shows the following details about your environment:

    _images/fuel_master_install.png

  4. See the Launch Wizard to Create New Environment for the instructions on how to log in to the Fuel UI and set up your first environment.

Manual Installation

Note

The following steps are suitable only for setting up a vanilla OpenStack environment for evaluation purposes.

If you cannot or would rather not run our helper scripts, you can still run Fuel on VirtualBox by following these steps.

Deploying the Master Node Manually

First, create the Master node VM.

  1. Configure the host-only interface vboxnet0 in VirtualBox by going to File -> Preferences -> Network, then on the Host-only Networks tab click the screwdriver icon:

    • IP address: 10.20.0.1
    • Network mask: 255.255.255.0
    • DHCP Server: disabled

    _images/host-only-networks-preferences.png _images/host-only-networks-details.png

  2. Create a VM for the Fuel Master node with the following parameters:

    • OS Type: Linux
    • Version: Ubuntu (64bit)
    • RAM: 1536+ MB (2048+ MB recommended)
    • HDD: 50 GB with dynamic disk expansion
  3. Modify your VM settings:

    • Network: Attach Adapter 1 to Host-only adapter vboxnet0
  4. Power on the VM in order to start the installation. Choose your Fuel ISO when prompted to select start-up disk.

  5. Wait for the Welcome message with all information needed to login into the UI of Fuel.

Adding Slave Nodes Manually

Configure the host-only interfaces.

  1. In the VirtualBox main window, go to File -> Preferences -> Network. On the Host-only Networks tab, click the screwdriver icon.

    • Create network vboxnet1:

      • IP address: 172.16.0.254
      • Network mask: 255.255.255.0
      • DHCP Server: disabled

      _images/vboxnet1.png

    • Create network vboxnet2:

      • IP address: 172.16.1.1
      • Network mask: 255.255.255.0
      • DHCP Server: disabled

      _images/vboxnet2.png

Next, create Slave nodes where OpenStack needs to be installed.

  1. Create 3 or 4 additional VMs with the following parameters:

    • OS Type: Linux, Version: Ubuntu (64bit)
    • RAM: 1536+ MB (2048+ MB recommended)
    • HDD: 50+ GB, with dynamic disk expansion
    • Network 1: host-only interface vboxnet0, Intel PRO/1000 MT desktop driver
  2. Set Network as first in the boot order:

    _images/vbox-image1.png

  3. Configure two or more network adapters on each VM (in order to use single network adapter for each VM you should choose Use VLAN Tagging later in the Fuel UI):

    _images/vbox-image2.png

  4. Expand the Advanced section and set the following options:

    • Set Promiscuous mode to Allow All
    • Set Adapter Type to Intel PRO/1000 MT Desktop
    • Check Cable connected

Fuel Node Setup

  1. Boot Fuel Master Server from the ISO image as a virtual DVD (click here to download ISO image).
  2. Choose option 1. and press the TAB button to edit default options:
  3. a. Remove the default gateway (10.20.0.1).
    b. Change the DNS to 10.20.0.2 (the Fuel VM IP).
    c. Add the following command to the end: “showmenu=yes”
    The tuned boot parameters should look like this:

    Note: Do not try to change eth0 to another interface or the deployment might fail.
    1. Fuel VM will reboot itself after the initial installation is completed and the Fuel menu will appear.
      Note: Ensure that the VM will start from the Local Disk and not CD-ROM. Otherwise you will restart the installation from beginning.
    2. Begin the network setup:
      1. Configure eth0 as the PXE (Admin) network interface.
        Ensure the default Gateway entry is empty for the interface. The network is enclosed within the switch and has no routing outside.
        Select Apply.
      2. Configure eth1 as the Public network interface.
        The interface is routable to the LAN/Internet and will be used to access the server. Configure a static IP address, netmask, and default gateway on the public network interface. Here eth1 is our vboxnet1; use the settings below:

        • IP address: 172.16.0.254
        • Network mask: 255.255.255.0

        Select Apply.

    1. Set the PXE Setup.
      The PXE network is enclosed within the switch. Use the default settings.
      Press the Check button to ensure no errors are found.
    2. Set the Time Sync.
      a. Choose the Time Sync option on the left-hand Menu.
      b. Configure the NTP server entries suitable for your infrastructure.
      c. Press Check to verify settings.
    3. Proceed with the installation.
      Navigate to Quit Setup and select Save and Quit.

      Once the Fuel installation is done, you will see Fuel access details both for SSH and HTTP.
    4. Configure the Fuel Master VM SSH server to allow connections from the Public network. By default, Fuel accepts SSH connections from the Admin (PXE) network only.

      Follow the steps below to allow connections from the Public network:

      1. Open the Fuel Master VM console from the VirtualBox window
      2. Edit sshd_config:

        # vi /etc/ssh/sshd_config

      3. Find and comment this line:

        ListenAddress 10.20.0.2

      4. Restart sshd:

        # service sshd restart
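
The three steps above boil down to commenting one line and restarting sshd; a one-shot sketch (back up sshd_config first, and note that newer releases may use systemctl restart sshd instead of service):

```shell
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Comment out the ListenAddress restriction so sshd listens on all interfaces
sed -i 's/^ListenAddress 10.20.0.2/#ListenAddress 10.20.0.2/' /etc/ssh/sshd_config

service sshd restart
```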

    5. Access Fuel using one of the following (to access it from your desktop, set up port forwarding to port 8000, or any port you prefer):
    • Web UI: http://172.16.0.254:8000 (user/password: admin/admin)
    • SSH: connect to 10.7.208.54 (user/password: root/r00tme)
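
If your desktop cannot reach the host-only network directly, an SSH local port forward through the VirtualBox host machine is one way to reach the Fuel web UI (a sketch; user@virtualbox-host is a placeholder for your own host):

```shell
# Forward local port 8000 to the Fuel web UI on the host-only network,
# then browse to http://localhost:8000 on your desktop
ssh -L 8000:172.16.0.254:8000 user@virtualbox-host
```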

 

Here we go !!

Chocolatey – Windows Package Installer & Manager

Hi All,

Let’s get Chocolatey !!

Today I am going to explain "Chocolatey NuGet", a machine package manager, somewhat like apt-get, but built with Windows in mind.

Installation

Via Command Prompt (administrator)

C:\> @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

Via Powershell

PS:\> iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

This downloads and installs Chocolatey on your machine.

 

Installing Programs:

We can install any number of packages on Windows via Chocolatey. Once you've got Chocolatey up and running, it's time to start installing programs. Open an administrative command prompt again and type cinst [program name]:

cinst <package name>

Example: To install the VLC player, type the command below with "-y", which auto-confirms the installation prompt:

cinst vlc -y

 

You can search for programs/packages by keyword:

choco search <keyword>

 

Multiple installs

There are two ways to install multiple programs in one sitting with Chocolatey. The first is to pass multiple package names on the command line. If you wanted to install VLC, GIMP, and Firefox you'd type:

cinst vlc gimp firefox
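
The second way is a packages.config manifest: list the packages in a small XML file and point cinst at it (a sketch; packages.config is the file name Chocolatey expects, and the package ids are the same ones you would pass on the command line):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="vlc" />
  <package id="gimp" />
  <package id="firefox" />
</packages>
```

Then run cinst packages.config -y from the directory containing the file.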

 

Updating:

Updating installed programs via Chocolatey is simple too. Type cup [program name] into an administrative command prompt. To update VLC, for example, type:

cup vlc

You can also update all your programs by typing cup all. If your package uses an alternative source other than the main Chocolatey package feed, you can type:

cup [package name] --source <feed URL>

Uninstalling

 

Uninstalling a package is a little different. Going back to our example, you'd type the following to uninstall VLC:

choco uninstall vlc

I will update a few more commands after my tests…

Install OpenStack (Kilo) VirtualBox

Hi All,

Today we are going to set up OpenStack on a single node; DevStack will do the needful.

Prerequisites for Install:

Create an Ubuntu VM in Oracle VirtualBox

 

  1. Open VirtualBox and select New. A new window will appear.
  2. Choose your guest OS and architecture (32- vs. 64-bit; e.g., select Ubuntu 64-bit).
  3. Set your base memory (RAM).
  4. Click Next until it shows the VM storage size. Allocate as much space as you need depending on your hard disk, and finish the wizard by clicking the Create button.
  5. In the VirtualBox main window, select Start and pick your media source. In this case, select the Ubuntu .iso on your desktop.
  6. Finish the installation as a normal install.
  7. Remove the installation .iso from the virtual optical disk drive before restarting the VM.
  8. Install Guest Additions.

Follow this guide:

Open VirtualBox and click the New button.

The Setup Wizard will appear; click Next.

Enter your virtual machine name, choose your guest OS and architecture (32- vs. 64-bit) from the dropdown menu, and click Next.

A 64-bit guest needs the CPU virtualization technology (Intel VT-x / AMD-V) to be enabled in the BIOS.

Enter the amount of memory (RAM) to reserve for your virtual machine and click Next.

Leave enough memory for the host OS.

Tick Startup Disk, choose Create New Hard Disk, and click Next.

Choose the type of file that you want to use for the virtual disk and click Next.

Choose your storage details and click Next.

Enter the size of your virtual disk (in MB) and click Next.

A dynamically growing virtual disk will only use the amount of physical hard drive space it needs. It is better to be rather generous to avoid running out of guest hard drive space.

You will see a summary of your input. Click the Create button to continue.

The "New Virtual Machine Wizard" will close and return you to the VirtualBox Manager. Select your virtual machine and click the Start button.

The "First Run Wizard" will appear; click Next.

Click the folder icon and browse to your Ubuntu iso directory.

Select your Ubuntu iso file and click Next.

In the "Summary" box, click the Start button.

The Ubuntu boot screen will appear when the VM starts; finish the installation as a normal install.

After a successful installation, remove the installation .iso image from the virtual optical drive before rebooting. This can be done from the "Devices" menu or by removing the .iso from the VM settings.

Now that the Ubuntu VM is created, open a terminal in Ubuntu and run the commands below:

 

sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo reboot
sudo apt-get install git
git clone -b stable/kilo https://github.com/openstack-dev/devstack.git

This downloads the Kilo version of OpenStack. Then run the commands below in the terminal:

  • cd devstack
  • wget https://dl.dropboxusercontent.com/u/44260569/local.conf
  • ./stack.sh
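
The local.conf fetched above is just a DevStack configuration file; if the link is unavailable, a minimal single-node local.conf looks roughly like this (the passwords and HOST_IP are placeholders you should change for your VM):

```ini
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# IP of the Ubuntu VM's primary network interface (assumption: VirtualBox NAT default)
HOST_IP=10.0.2.15
```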

DevStack will now install; once it finishes, you can play around with OpenStack.

For a YouTube video walkthrough, see the Openstack link. Thanks Saju :)

Reference: Openstack

Here we go, you are now playing with OpenStack!