Mounting Windows Folder to Linux

To mount a folder shared from Windows on Linux:

=============================

On Windows:

1- Create a folder on Windows.

2- Share this folder with the appropriate permissions.

3- Turn on network discovery.

On Linux:

1- apt-get install cifs-utils

2- mkdir /mount-point

3- mount.cifs //ip-server/share-folder /mount-point -o user=admin

4- Enter the password for the user admin.

5- To make the mount persistent across reboots, add the following line to /etc/fstab:

6- //windows/share-folder /mount-point cifs user=admin,password=admin,_netdev 0 0
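
A quick way to verify the mount and test the new fstab entry without rebooting (a minimal sketch, assuming the share is mounted at /mount-point):

# list mounted CIFS shares and check free space
mount -t cifs
df -h /mount-point

# unmount, then re-mount everything from /etc/fstab to test the new entry
umount /mount-point
mount -a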

#Good luck

Creation of Windows Images – Openstack

If you want to build a Windows image for use in your OpenStack environment, you can follow the example in the official documentation, or you can grab a Windows 2012r2 evaluation pre-built image from the nice folks at CloudBase.

The CloudBase-provided image is built using a set of scripts and configuration files that CloudBase has made available on GitHub.

The CloudBase repository is an excellent source of information, but I wanted to understand the process myself. This post describes the process I went through to establish an automated process for generating a Windows image suitable for use with OpenStack.

Unattended windows installs

The Windows installer supports fully automated installations through the use of an answer file, or “unattend” file, that provides information to the installer that would otherwise be provided manually. The installer will look in a number of places to find this file. For our purposes, the important fact is that the installer will look for a file named autounattend.xml in the root of all available read/write or read-only media. We’ll take advantage of this by creating a file config/autounattend.xml, and then generating an ISO image like this:

mkisofs -J -r -o config.iso config

And we’ll attach this ISO to a vm later on in order to provide the answer file to the installer.

So, what goes into this answer file?

The answer file is an XML document enclosed in an <unattend>..</unattend> element. In order to provide all the expected XML namespaces that may be used in the document, you would typically start with something like this:

<?xml version="1.0" ?>
<unattend
  xmlns="urn:schemas-microsoft-com:unattend"
  xmlns:ms="urn:schemas-microsoft-com:asm.v3"
  xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">

  <!-- your content goes here -->

</unattend>

Inside this <unattend> element you will put one or more <settings> elements, corresponding to the different configuration passes of the installer:

<settings pass="specialize">
</settings>

The available configuration passes are:

  • windowsPE
  • offlineServicing
  • generalize
  • specialize
  • auditSystem
  • auditUser
  • oobeSystem

Of these, the most interesting for our use will be:

  • windowsPE — used to install device drivers for use within the installer environment. We will use this to install the VirtIO drivers necessary to make VirtIO devices visible to the Windows installer.
  • specialize — In this pass, the installer applies machine-specific configuration. This is typically used to configure networking, locale settings, and most other things.
  • oobeSystem — In this pass, the installer configures things that happen at first boot. We use this step to install some additional software and run sysprep in order to prepare the image for use in OpenStack.

Inside each <settings> element we will place one or more <component> elements that will apply specific pieces of configuration. For example, the following <component> configures language and keyboard settings in the installer:

<settings pass="windowsPE">
  <component name="Microsoft-Windows-International-Core-WinPE"
    processorArchitecture="amd64"
    publicKeyToken="31bf3856ad364e35"
    language="neutral"
    versionScope="nonSxS">

    <SetupUILanguage>
      <UILanguage>en-US</UILanguage>
    </SetupUILanguage>
    <InputLocale>en-US</InputLocale>
    <UILanguage>en-US</UILanguage>
    <SystemLocale>en-US</SystemLocale>
    <UserLocale>en-US</UserLocale>
  </component>
</settings>

Technet provides documentation on the available components.

Cloud-init for Windows

Cloud-init is a tool that will configure a virtual instance when it first boots, using metadata provided by the cloud service provider. For example, when booting a Linux instance under OpenStack, cloud-init will contact the OpenStack metadata service at http://169.254.169.254/ in order to retrieve things like the system hostname, SSH keys, and so forth.

While cloud-init has support for Linux and BSD, it does not support Windows. The folks at Cloudbase have produced cloudbase-init in order to fill this gap. Once installed, the cloudbase-init tool will, upon first booting a system:

  • Configure the network using information provided in the cloud metadata
  • Set the system hostname
  • Create an initial user account (by default “Admin”) with a randomly generated password (see below for details)
  • Install your public key, if provided
  • Execute a script provided via cloud user-data

Passwords and ssh keys

While cloudbase-init will install your SSH public key (by default into /Users/admin/.ssh/authorized_keys), Windows does not ship with an SSH server and cloudbase-init does not install one. So what is it doing with the public key?

While you could arrange to install an ssh server that would make use of the key, cloudbase-init uses it for a completely unrelated purpose: encrypting the randomly generated password. This encrypted password is then passed back to OpenStack, where you can retrieve it using the nova get-password command, and decrypt it using the corresponding SSH private key.

Running nova get-password myinstance will return something like:

w+In/P6+FeE8nv45oCjc5/Bohq4adqzoycwb9hOy9dlmuYbz0hiV923WW0fL
7hvQcZnWqGY7xLNnbJAeRFiSwv/MWvF3Sq8T0/IWhi6wBhAiVOxM95yjwIit
/L1Fm0TBARjoBuo+xq44YHpep1qzh4frsOo7TxvMHCOtibKTaLyCsioHjRaQ
dHk+uVFM1E0VIXyiqCdj421JoJzg32DqqeQTJJMqT9JiOL3FT26Y4XkVyJvI
vtUCQteIbd4jFtv3wEErJZKHgxHTLEYK+h67nTA4rXpvYVyKw9F8Qwj7JBTj
UJqp1syEqTR5/DUHYS+NoSdONUa+K7hhtSSs0bS1ghQuAdx2ifIA7XQ5eMRS
sXC4JH3d+wwtq4OmYYSOQkjmpKD8s5d4TgtG2dK8/l9B/1HTXa6qqcOw9va7
oUGGws3XuFEVq9DYmQ5NF54N7FU7NVl9UuRW3WTf4Q3q8VwJ4tDrmFSct6oG
2liJ8s7ybbW5PQU/lJe0gGBGGFzo8c+Rur17nsZ01+309JPEUKqUQT/uEg55
ziOo8uAwPvInvPkbxjH5doH79t47Erb3cK44kuqZy7J0RdDPtPr2Jel4NaSt
oCs+P26QF2NVOugsY9O/ugYfZWoEMUZuiwNWCWBqrIohB8JHcItIBQKBdCeY
7ORjotJU+4qAhADgfbkTqwo=

Providing your secret key as an additional parameter will decrypt the password:

$ nova get-password myinstance ~/.ssh/id_rsa
fjgJmUB7fXF6wo

With an appropriately configured image, you could connect using an RDP client and log in as the “Admin” user using that password.
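
If you prefer to do the decryption yourself, the value returned by nova get-password is just the base64-encoded, RSA-encrypted password, so something along these lines should work (a sketch, assuming an RSA key at ~/.ssh/id_rsa):

nova get-password myinstance | base64 -d | openssl rsautl -decrypt -inkey ~/.ssh/id_rsa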

Passwords without ssh keys

If you do not provide your instance with an SSH key you will not be able to retrieve the randomly generated password. However, if you can get console access to your instance (e.g., via the Horizon dashboard), you can log in as the “Administrator” user, at which point you will be prompted to set an initial password for that account.

Logging

You can find logs for cloudbase-init in c:\program files (x86)\cloudbase solutions\cloudbase-init\log\cloudbase-init.log.

If appropriately configured, cloudbase-init will also log to the virtual serial port. This log is available in OpenStack by running nova console-log <instance>. For example:

$ nova console-log my-windows-server
2014-11-19 04:10:45.887 1272 INFO cloudbaseinit.init [-] Metadata service loaded: 'HttpService'
2014-11-19 04:10:46.339 1272 INFO cloudbaseinit.init [-] Executing plugin 'MTUPlugin'
2014-11-19 04:10:46.371 1272 INFO cloudbaseinit.init [-] Executing plugin 'NTPClientPlugin'
2014-11-19 04:10:46.387 1272 INFO cloudbaseinit.init [-] Executing plugin 'SetHostNamePlugin'
.
.
.

Putting it all together

I have an install script that drives the process, but it’s ultimately just a wrapper for virt-install and results in the following invocation:

exec virt-install -n ws2012 -r 2048 \
  -w network=default,model=virtio \
  --disk path=$TARGET_IMAGE,bus=virtio \
  --cdrom $WINDOWS_IMAGE \
  --disk path=$VIRTIO_IMAGE,device=cdrom \
  --disk path=$CONFIG_IMAGE,device=cdrom \
  --os-type windows \
  --os-variant win2k8 \
  --vnc \
  --console pty

Where TARGET_IMAGE is the name of a pre-existing qcow2 image onto which we will install Windows, WINDOWS_IMAGE is the path to an ISO containing Windows Server 2012r2, VIRTIO_IMAGE is the path to an ISO containing VirtIO drivers for Windows (available from the Fedora project), and CONFIG_IMAGE is a path to the ISO containing our autounattend.xml file.
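
For reference, the variables in that invocation might be set along these lines (the paths are only illustrative, and the target image can be pre-created with qemu-img):

TARGET_IMAGE=ws2012.qcow2
WINDOWS_IMAGE=/isos/windows_server_2012_r2.iso
VIRTIO_IMAGE=/isos/virtio-win.iso
CONFIG_IMAGE=config.iso

# create the empty target disk for the installation
qemu-img create -f qcow2 $TARGET_IMAGE 40G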

The fully commented autounattend.xml file, along with the script mentioned above, are available in my windows-openstack-image repository on GitHub.

The answer file in detail

windowsPE

In the windowsPE phase, we start by configuring the installer locale settings:

<component name="Microsoft-Windows-International-Core-WinPE"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">

  <SetupUILanguage>
    <UILanguage>en-US</UILanguage>
  </SetupUILanguage>
  <InputLocale>en-US</InputLocale>
  <UILanguage>en-US</UILanguage>
  <SystemLocale>en-US</SystemLocale>
  <UserLocale>en-US</UserLocale>

</component>

And installing the VirtIO drivers using the Microsoft-Windows-PnpCustomizationsWinPE component:

<component name="Microsoft-Windows-PnpCustomizationsWinPE"
  publicKeyToken="31bf3856ad364e35" language="neutral"
  versionScope="nonSxS" processorArchitecture="amd64">

  <DriverPaths>
    <PathAndCredentials wcm:action="add" wcm:keyValue="1">
      <Path>d:\win8\amd64</Path>
    </PathAndCredentials>
  </DriverPaths>

</component>

This assumes that the VirtIO image is mounted as drive d:.

With the drivers installed, we can then call the Microsoft-Windows-Setup component to configure the disks and install Windows. We start by configuring the product key:

<component name="Microsoft-Windows-Setup"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS"
  processorArchitecture="amd64">

  <UserData>
    <AcceptEula>true</AcceptEula>
    <ProductKey>
      <WillShowUI>OnError</WillShowUI>
      <Key>INSERT-PRODUCT-KEY-HERE</Key>
    </ProductKey>
  </UserData>

And then configure the disk with a single partition (that will grow to fill all the available space) which we then format with NTFS:

  <DiskConfiguration>
    <WillShowUI>OnError</WillShowUI>
    <Disk wcm:action="add">
      <DiskID>0</DiskID>
      <WillWipeDisk>true</WillWipeDisk>

      <CreatePartitions>
        <CreatePartition wcm:action="add">
          <Order>1</Order>
          <Extend>true</Extend>
          <Type>Primary</Type>
        </CreatePartition>
      </CreatePartitions>

      <ModifyPartitions>
        <ModifyPartition wcm:action="add">
          <Format>NTFS</Format>
          <Order>1</Order>
          <PartitionID>1</PartitionID>
          <Label>System</Label>
        </ModifyPartition>
      </ModifyPartitions>
    </Disk>
  </DiskConfiguration>

We provide information about what to install:

  <ImageInstall>
    <OSImage>
      <WillShowUI>Never</WillShowUI>

      <InstallFrom>
        <MetaData>
          <Key>/IMAGE/Name</Key>
          <Value>Windows Server 2012 R2 SERVERSTANDARDCORE</Value>
        </MetaData>
      </InstallFrom>

And where we would like it installed:

      <InstallTo>
        <DiskID>0</DiskID>
        <PartitionID>1</PartitionID>
      </InstallTo>
    </OSImage>
  </ImageInstall>

specialize

In the specialize phase, we start by setting the system name to a randomly generated value using the Microsoft-Windows-Shell-Setup component:

<component name="Microsoft-Windows-Shell-Setup"
  publicKeyToken="31bf3856ad364e35" language="neutral"
  versionScope="nonSxS" processorArchitecture="amd64">
  <ComputerName>*</ComputerName>
</component>

We enable remote desktop because in an OpenStack environment this will probably be the preferred mechanism with which to connect to the host (but see this document for an alternative mechanism).

First, we need to permit terminal server connections:

<component name="Microsoft-Windows-TerminalServices-LocalSessionManager"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <fDenyTSConnections>false</fDenyTSConnections>
</component>

And we do not want to require network-level authentication prior to connecting:

<component name="Microsoft-Windows-TerminalServices-RDP-WinStationExtensions"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <UserAuthentication>0</UserAuthentication>
</component>

We will also need to open the necessary firewall group:

<component name="Networking-MPSSVC-Svc"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral"
  versionScope="nonSxS">
  <FirewallGroups>
    <FirewallGroup wcm:action="add" wcm:keyValue="RemoteDesktop">
      <Active>true</Active>
      <Profile>all</Profile>
      <Group>@FirewallAPI.dll,-28752</Group>
    </FirewallGroup>
  </FirewallGroups>
</component>

Finally, we use the Microsoft-Windows-Deployment component to configure the Windows firewall to permit ICMP traffic:

<component name="Microsoft-Windows-Deployment"
  processorArchitecture="amd64"
  publicKeyToken="31bf3856ad364e35"
  language="neutral" versionScope="nonSxS">

  <RunSynchronous>

    <RunSynchronousCommand wcm:action="add">
      <Order>3</Order>
      <Path>netsh advfirewall firewall add rule name=ICMP protocol=icmpv4 dir=in action=allow</Path>
    </RunSynchronousCommand>

And to download the cloudbase-init installer and make it available for later steps:

    <RunSynchronousCommand wcm:action="add">
      <Order>5</Order>
      <Path>powershell -NoLogo -Command "(new-object System.Net.WebClient).DownloadFile('https://www.cloudbase.it/downloads/CloudbaseInitSetup_Beta_x64.msi', 'c:\Windows\Temp\cloudbase.msi')"</Path>
    </RunSynchronousCommand>
  </RunSynchronous>
</component>

We’re using Powershell here because it has convenient methods available for downloading URLs to local files. This is roughly equivalent to using curl on a Linux system.

oobeSystem

In the oobeSystem phase, we configure an automatic login for the Administrator user:

  <UserAccounts>
    <AdministratorPassword>
      <Value>Passw0rd</Value>
      <PlainText>true</PlainText>
    </AdministratorPassword>
  </UserAccounts>
  <AutoLogon>
    <Password>
      <Value>Passw0rd</Value>
      <PlainText>true</PlainText>
    </Password>
    <Enabled>true</Enabled>
    <LogonCount>50</LogonCount>
    <Username>Administrator</Username>
  </AutoLogon>

This automatic login only happens once, because we configure FirstLogonCommands that will first install cloudbase-init:

  <FirstLogonCommands>
    <SynchronousCommand wcm:action="add">
      <CommandLine>msiexec /i c:\windows\temp\cloudbase.msi /qb /l*v c:\windows\temp\cloudbase.log LOGGINGSERIALPORTNAME=COM1</CommandLine>
      <Order>1</Order>
    </SynchronousCommand>

And will then run sysprep to generalize the system (which will, among other things, lose the administrator password):

    <SynchronousCommand wcm:action="add">
      <CommandLine>c:\windows\system32\sysprep\sysprep /generalize /oobe /shutdown</CommandLine>
      <Order>2</Order>
    </SynchronousCommand>
  </FirstLogonCommands>

The system will shut down when sysprep is complete, leaving you with a Windows image suitable for uploading into OpenStack:

glance image-create --name ws2012 \
  --disk-format qcow2 \
  --container-format bare  \
  --file ws2012.qcow2
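
With newer clients, the same upload can be done through the unified openstack CLI; roughly equivalent (check openstack image create --help on your version for the exact flags):

openstack image create --disk-format qcow2 \
  --container-format bare \
  --file ws2012.qcow2 \
  ws2012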

Troubleshooting

If you run into problems with an unattended Windows installation:

During the first stage of the installer, you can look in the x:\windows\panther directory for setupact.log and setuperr.log, which will have information about the early install process. The x: drive is temporary, and files here will be discarded when the system reboots.

Subsequent installer stages will log to c:\windows\panther\.

If you are unfamiliar with Windows, the type command can be used very much like the cat command on Linux, and the more command provides paging as you would expect. The notepad command will open a GUI text editor/viewer.

You can emulate the tail command using powershell; to see the last 10 lines of a file:

C:\> powershell -command "Get-Content setupact.log -Tail 10"

Technet has a Deployment Troubleshooting and Log Files document that discusses in more detail what is logged and where to find it.


Quickstart – Docker for Beginners


What is Docker?

In simple terms, Docker is a tool that allows developers, sysadmins and others to easily deploy their applications in sandboxes (called containers) that run on the host operating system, i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not have the high overhead and hence enable more efficient usage of the underlying system and resources.

What are containers?

The industry standard today is to use Virtual Machines (VMs) to run software applications. VMs run applications inside a guest Operating System, which runs on virtual hardware powered by the server’s host OS.

VMs are great at providing full process isolation for applications: there are very few ways a problem in the host operating system can affect the software running in the guest operating system, and vice-versa. But this isolation comes at great cost — the computational overhead spent virtualizing hardware for a guest OS to use is substantial.

Containers take a different approach: by leveraging the low-level mechanics of the host operating system, containers provide most of the isolation of virtual machines at a fraction of the computing power.

 

Goal of this Tutorial :

Here we are going to cover Docker installation, basic configuration, preparation of a first image with a web application, and a few easy troubleshooting steps.

How to Install on Ubuntu 14.04

Refer to the installation documentation to see how to install on other OS types; here we are going to use Ubuntu.

  • In Ubuntu
    sudo apt-get install docker.io

Check Version

sudo docker version

Pull an Ubuntu Trusty docker image

sudo docker pull ubuntu:14.04

You can find image repositories on Docker Hub.

After pulling the ubuntu image, list your local images with:

root@docker:/home/pandy# docker images

REPOSITORY   TAG     IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       14.04   a572fb20fc42   11 days ago   188 MB

Run a docker image and execute the command echo "pandy" in the docker container created from that image:

sudo docker run ubuntu:14.04 echo "pandy"

Container information is stored in /var/lib/docker

If you run the above command multiple times, it will create a new container each time.

To know the ID of the last container, run

sudo docker ps -l

To list all the running containers

sudo docker ps

Note that the above command will not show the container we ran last, because that container terminated as soon as it finished executing the echo command.

Let's try it differently:

root@docker:/home/pandy# docker ps -a
CONTAINER ID   IMAGE                COMMAND      CREATED       STATUS                   PORTS   NAMES
8fc688183401   pandy/echo:latest    "sh"         2 hours ago   Exited (0) 2 hours ago           agitated_fermat
22bf8556f207   hello-world:latest   "/hello"     2 hours ago   Exited (0) 2 hours ago           clever_turing
0de9bfa9eab5   pandy/echo:latest    "ls -ltrh"   3 hours ago   Exited (0) 3 hours ago           jovial_brown

The above command lists all containers, including those that have exited, along with the command each one ran and when it exited. Notice that the STATUS column shows that these containers exited a few hours ago.

Create a new docker image named <yourname>/echo by ‘committing’ the last container which you ran:

sudo docker commit <container ID> <yourname>/echo

Now running sudo docker images will list two images instead of one.

Now you can run this new docker container like this:

sudo docker run <yourname>/echo ls -alrth

If we installed something, or created a file in the old container, it will be visible now in this container too.
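
For example, here is a minimal sketch of that workflow (the image name pandy/echo and the file name are only illustrative):

# create a file inside a fresh container, then note the container's ID
sudo docker run ubuntu:14.04 touch /tmp/hello.txt
sudo docker ps -l

# snapshot that container as a new image and run it
sudo docker commit <container ID> pandy/echo
sudo docker run pandy/echo ls /tmp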

Get more information about a docker image or a running container:

sudo docker inspect <yourname>/echo

To push the docker image to a Docker registry:

sudo docker push <yourname>/echo

To download the Ubuntu Trusty base image if it is not present locally, and open a shell session into it:

sudo docker run -t -i ubuntu:14.04 /bin/bash

-i, i.e. --interactive=false, keeps STDIN open even if not attached

-t, i.e. --tty=false, allocates a pseudo-TTY

Don’t worry too much about what these mean. If you add these options, you’ll see that you get logged in to the container’s shell, and the container only dies off once you exit that session (usually by typing exit or pressing CTRL+D).

To remove an image:

sudo docker rmi <image name>

Terminology

In the last section, we used a lot of Docker-specific jargon which might be confusing to some. So before we go further, let me clarify some terminology that is used frequently in the Docker ecosystem.

  • Images – The blueprints of our application which form the basis of containers. In the demo above, we used the docker pull command to download the ubuntu image.
  • Containers – Created from Docker images and run the actual application. We created a container using docker run with the ubuntu image that we downloaded. A list of running containers can be seen using the docker ps command.
  • Docker Daemon – The background service running on the host that manages building, running and distributing Docker containers. The daemon is the process that runs in the operating system and that clients talk to.
  • Docker Client – The command line tool that allows the user to interact with the daemon. More generally, there can be other forms of clients too – such as Kitematic which provide a GUI to the users.
  • Docker hub – A registry of Docker images. You can think of the registry as a directory of all available Docker images. If required, one can host their own Docker registries and can use them for pulling images.

Let's see in the next blog how to build a web application in a Docker container and use it on a cloud platform.

Build VM Images by Diskimage-builder

Disk Image Builder

Earlier in OpenStack, images were prepared manually with scripts, supplying ramdisks and disk images as inputs. The diskimage-builder tool is a flexible suite of components for building a wide range of disk images, filesystem images and ramdisk images for use with OpenStack.

Installation

You can either run it directly out of the source repository or install it via pip; here are the steps:

git clone http://github.com/openstack/diskimage-builder
cd diskimage-builder

pip install -U -r requirements.txt
pip install -e .

 

The above will install the requirements mentioned in requirements.txt:

Babel!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3,>=1.3 # BSD
dib-utils # Apache-2.0
pbr>=1.6 # Apache-2.0
PyYAML>=3.1.0 # MIT
flake8<2.6.0,>=2.5.4 # MIT
six>=1.9.0 # MIT

Now all the binaries are in “diskimage-builder/bin”; you can run them from this path or add it to $PATH as below:

export PATH=$PATH:$(pwd)/diskimage-builder/bin

Now the setup is ready and we can prepare images.
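
For example, building a simple Ubuntu cloud image might look like this (a sketch; "ubuntu" and "vm" are elements shipped with diskimage-builder, and the output name is arbitrary):

# produces ubuntu-image.qcow2 using the "ubuntu" and "vm" elements
disk-image-create ubuntu vm -o ubuntu-image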

Openstack -Ansible -Containers

Summary :

Openstack + Ansible + Containers

 

Project Details

Project Repository
Project Documentation

Deploying Openstack with Ansible

In the video, Rackspace developers explained why they were using Ansible for their Openstack Deployments. The result [for me] was a few profound takeaways:

  1. The Ansible deployment uses all upstream GitHub Openstack native repositories; this is a clean, unmodified Openstack native deployment. This means no management wrapper APIs, no “mysterious” wrapper for starting/stopping Linux services, and I can build in my own upstream projects (no plugins that force me to rebuild my environment from scratch)
  2. The deployment was production ready (unlike DevStack)
  3. The deployment was production scale-able (unlike DevStack)
  4. The deployment uses LXC containers as the deployment unit of management (so developers are presented with a more “true” Openstack development framework)
  5. Users can easily modify their Git sources (public or private)

Dude, seriously?! So yeah, these guys got my attention. My thoughts, “I’ll check out Ansible when I get a chance,” because at the time, my plate was pretty full like everyone else’s.

Ansible for Deploying and Orchestrating Openstack

A few months passed before I could actually start looking into this “magical deployment” of an extremely complicated Cloud environment, but then I got hit with another reminder of what I was missing. At the very next Openstack Summit in Tokyo, a few folks compared all of the popular orchestration tools, including the new “cool kids” on the scene, Ansible and Salt.

Comparing Ansible, Puppet, Salt and Chef. What’s best for deploying and Managing Openstack

Again, I had some very impactful take-aways from watching another Openstack Summit video! (I’m feeling great at this point).

If the solution is too complex, your company could ultimately lose some of its intended audience. Sysops teams simply won’t use it, and if approached incorrectly (without formal training or operational buy-in) you’ll quickly realize that you’ve wasted resources, both money and people’s time. Sure, you have six or seven really strong users of your awesome, programmatic-driven IaC “product X”. But if your team has twenty engineers, and those six have become dedicated to writing your deployment plans, what are you gaining? I feel like this is wasteful, and it can even impact team morale. Additionally, now you’re going to be tasked with finding talent who can orchestrate with “product X”.

Ansible is very operations-driven and the learning curve is extremely easy. If you’re hiring sysops personnel, then using Ansible will be the most natural for them. Here is a list of considerations when looking at the learning curve for some IaC solutions:

  1. Ansible: Which takes very little time to learn, and is very operations-focused (operations driven, YAML or CLI-direct-based tasks, uses playbooks in sequence to perform tasks)
  2. Salt: Which breaks into the client-less barrier (very fast, efficient code that uses the concepts of grains, pillars, etc)
  3. Puppet: Which starts to introduce YAML concepts (client/server based model, with modules to perform tasks)
  4. Chef: Requires Ruby knowledge for many functions, and uses cooking references (cookbooks, recipes, knife to perform certain functions, etc)

Then there’s the part about Openstack Support; meaning who has the most modules for supporting deployments via Openstack, and creating Openstack clusters with the solution itself, the order is as follows:

  1. Ansible: Which is most supported by the Openstack Governance and has two massive projects provided by Ansible:
  2. Puppet: Which is supported by RDO and integrates well with ForeMan to provide Openstack Deployments
  3. Chef: For its Openstack modules/support
  4. Salt: Which doesn’t have great Openstack Module support, and doesn’t have many projects to deploy “vanilla” Openstack Deployments.

Is Ansible the perfect “be-all, end-all”? Of course not. Ansible does seem to treat Openstack as a “first class citizen” along with Puppet, but it seems to beat everyone in terms of general user ease of adoption.

NOTE: One way to see for yourself which tools are the most used by the Openstack project (this is publicly viewable on GitHub) is to go to https://github.com/openstack and filter the repositories for “Ansible”, “Puppet”, “Chef” and “Salt” to see what is actually being built via these automation tools. This spoke volumes to me when I was trying to find the right tool.

Openstack Deployment Options

There are a bunch of options to describe here. Pick yours below.

Manual: Openstack AIO in Less Than 20 Commands

So you’re tired of the background and you just want the deployment? Fair enough. Let’s get to it.

Install a base Ubuntu 14.04 LTS environment (VM or bare-metal) on your server. Make sure you at least have OpenSSH installed.

Steps 1 and 2 Make some basic host preparations. Update the hosts file for this machine, adding any other hosts that may need to be defined (minimal is fine), and make sure that your host has a correct DNS configuration.

    ubuntu@megatron:~$ sudo vi /etc/hosts
    ubuntu@megatron:~$ sudo vi /etc/resolv.conf

Steps 3 – 7 Next, update the host, install and start NTP, configure NTP for your correct timezone, and then install prerequisite packages that the Openstack-Ansible project will require. NOTE: If IPv6 is timing out for your host, then you will need to add ‘-o Acquire::ForceIPv4=true’ at the end of every single command (this means just before each ‘&&’).

    ubuntu@megatron:~$ sudo apt-get update -y && sudo apt-get upgrade -y && sudo apt-get install -y ntp
    ubuntu@megatron:~$ sudo service ntp reload && sudo service ntp restart
    ubuntu@megatron:~$ echo "America/New_York" | sudo tee /etc/timezone
    ubuntu@megatron:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata
    ubuntu@megatron:~$ sudo apt-get install -y python-pip git bridge-utils debootstrap ifenslave ifenslave-2.6 lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

Steps 8 – 11 THIS NEXT SECTION IS FOR ADDING VLAN INTERFACES TO YOUR HOST. IF YOU DON’T NEED THIS SUPPORT, OR IF YOU’RE UNSURE, SKIP IT! Next, you will NEED to be root to make the following changes (simply sudo as a privileged user will not work)! Add the following lines to /etc/modules, and then you must reboot.

    ubuntu@megatron:~$ sudo su -
    ubuntu@megatron:~$ echo 'bonding' >> /etc/modules
    ubuntu@megatron:~$ echo '8021q' >> /etc/modules
    ubuntu@megatron:~$ sudo reboot

Steps 12 – 17 Finally, run the Openstack-Ansible specific commands (once your machine is back online), and you’ll be rockin’ Openstack (in about 30-60 minutes, depending on your machine, VM, etc). NOTE: <TAG> means either icehouse, juno, kilo, liberty, etc.

    ubuntu@megatron:~$ sudo su -
    ubuntu@megatron:~$ git clone -b <TAG> https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
    ubuntu@megatron:~$ cd /opt/openstack-ansible
    ubuntu@megatron:/opt/openstack-ansible$ scripts/bootstrap-ansible.sh
    ubuntu@megatron:/opt/openstack-ansible$ scripts/bootstrap-aio.sh
    ubuntu@megatron:/opt/openstack-ansible$ scripts/run-playbooks.sh

Once you’ve run scripts/run-playbooks.sh the entire process will take anywhere from 40-120 minutes to complete. So I would recommend going to get a coffee, or continue reading below.

NOTE: The first thing you’re probably going to do is log into Horizon. To view all of the randomly generated passwords, refer to the file /etc/openstack_deploy/user_secrets.yml.
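
For example, to pull out just the Keystone admin password (the variable name below is what I have seen in the Liberty branch; it may differ between releases):

    root@megatron:~# grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml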

Sidenote: Selecting Another Version to Deploy
Maybe you want to use a different version of Openstack? This project is perfect for that, and you can do it by selecting a different branch before you deploy (Step 13). Openstack-Ansible is intended to be completely customizable, even down to the upstream project repositories.

Make sure you’re in the directory /opt/openstack-ansible/ before reviewing or checking out a new branch.

    ubuntu@megatron:/opt/openstack-ansible$ git branch -a
    * liberty
    remotes/origin/HEAD -> origin/master
    remotes/origin/icehouse
    remotes/origin/juno
    remotes/origin/kilo
    remotes/origin/liberty
    remotes/origin/master
    ubuntu@megatron:/opt/openstack-ansible$

You can also select a specific tag if you want a specific sub-version within the branch. Here is a list of useable/selectable tag options:

Tags and Branches
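
To list the available tags and check one out, something like this works (replace <TAG> with the tag you want):

    root@megatron:/opt/openstack-ansible# git tag -l
    root@megatron:/opt/openstack-ansible# git checkout <TAG>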

Sidenote: Selecting Another Version to Deploy
If you want more information about your current branch, use the -v flag.

    root@megatron:/opt/openstack-ansible# git branch -v
    * liberty 3bfe3ec Merge "Provide logrotate config for rsyncd on Swift storage hosts" into liberty
     root@megatron:/opt/openstack-ansible#

Cloud-Init: Cloud in a Cloud (AWS, Azure, Openstack)

So you want the Openstack, AWS, Azure version? Use a single Cloud-Init configuration.

#cloud-config
apt_mirror: http://mirror.rackspace.com/ubuntu/  
package_upgrade: true  
packages:  
- git-core
runcmd:  
- export ANSIBLE_FORCE_COLOR=true
- export PYTHONUNBUFFERED=1
- export REPO=https://github.com/openstack/openstack-ansible
- export BRANCH=liberty
- git clone -b ${BRANCH} ${REPO} /opt/openstack-ansible
- cd /opt/openstack-ansible && scripts/bootstrap-ansible.sh
- cd /opt/openstack-ansible && scripts/bootstrap-aio.sh
- cd /opt/openstack-ansible && scripts/run-playbooks.sh
output: { all: '| tee -a /var/log/cloud-init-output.log' }  
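
To use this with OpenStack itself, pass the file as user data when booting the instance; for example, assuming the configuration above is saved as user-data.yml (the flavor, image and instance names are placeholders):

nova boot --flavor m1.large --image ubuntu-14.04 \
  --user-data user-data.yml openstack-aio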

That’s it!

Ansible Tower: Fully Customizable and Manageable

This is a project that I’m currently working on, and Tyler Cross from Ansible has been an amazing resource in getting me started. When I first started using Ansible, I was curious about Ansible Tower. Tower is a management and orchestration platform for Ansible. The Openstack-Ansible project lends itself really well for Ansible Tower because of the deployment flow; they’re using variables nearly everywhere throughout. That allows us to build on top of the upstream projects, supports backwards compatibility with older repository deployments, and it allows us to completely customize [nearly] anything about our deployment. This will take some work, but this is all possible because of their amazing Ansible practices!

If you would like to contribute to this effort, please send me an email! I am still learning my way through Ansible and Ansible Tower, but if there is something that you would like to be implemented as a customizable variable don’t hesitate to ask me for contribution access.

Here is the link to the project: Openstack-Ansible Deployment using Ansible Tower

Heat: Openstack for Openstack Environments

There are heat orchestration deployment options too, and I will come back to document this later.

Simple: Single Command Deployment

There is also a single command to deploy an AIO instance (without Cloud-Config) to get you started. It’s really this simple; are you convinced yet?

curl https://raw.githubusercontent.com/openstack/openstack-ansible/liberty/scripts/run-aio-build.sh | sudo bash  

Mind blown, right? So let’s start talking about how we support this environment, and get into the guts of the post-deployment tasks.

Exploring the Environment

You’ve deployed the beast successfully, but now it’s time to understand what you deployed.

The Containers

It’s time for us to look around, and see if we like this deployment. The first thing you’re going to notice, is that there are…containers? That’s right, this whole deployment is scalable at a production level and designed to use containers as the unit of scale. You can choose not to use containers, but there’s really no reason to deploy via native services. The container framework not only works, but it works well!

JINKITOSXLT01:~ bjozsa$ ssh megatron  
bjozsa@megatron's password:  
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-25-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Feb  3 10:48:05 EST 2016

  System load:     0.29               IP address for p4p1:       192.168.1.25
  Usage of /:      62.6% of 78.28GB   IP address for br-mgmt:    172.29.236.100
  Memory usage:    29%                IP address for br-storage: 172.29.244.100
  Swap usage:      0%                 IP address for br-vlan:    172.29.248.100
  Processes:       876                IP address for br-vxlan:   172.29.240.100
  Users logged in: 0                  IP address for lxcbr0:     10.255.255.1

  => There are 2 zombie processes.

  Graph this data and manage this system at:
    https://landscape.canonical.com/

7 packages can be updated.  
7 updates are security updates.

Last login: Wed Feb  3 10:48:06 2016 from 192.168.1.180  
bjozsa@megatron:~$ sudo su -  
[sudo] password for bjozsa:
root@megatron:~# lxc-ls -f  
NAME                                          STATE    IPV4                                           IPV6  AUTOSTART  
-----------------------------------------------------------------------------------------------------------------------------------
aio1_aodh_container-72d3f185                  RUNNING  10.255.255.240, 172.29.238.111                 -     YES (onboot, openstack)  
aio1_ceilometer_api_container-328e928e        RUNNING  10.255.255.154, 172.29.239.6                   -     YES (onboot, openstack)  
aio1_ceilometer_collector_container-7007c54c  RUNNING  10.255.255.252, 172.29.237.136                 -     YES (onboot, openstack)  
aio1_cinder_api_container-501ec49f            RUNNING  10.255.255.215, 172.29.236.192, 172.29.246.87  -     YES (onboot, openstack)  
aio1_cinder_scheduler_container-e3abc1c0      RUNNING  10.255.255.248, 172.29.239.68                  -     YES (onboot, openstack)  
aio1_galera_container-34abdcf1                RUNNING  10.255.255.112, 172.29.239.130                 -     YES (onboot, openstack)  
aio1_galera_container-6cdcf3b0                RUNNING  10.255.255.121, 172.29.236.212                 -     YES (onboot, openstack)  
aio1_galera_container-c5482364                RUNNING  10.255.255.181, 172.29.237.242                 -     YES (onboot, openstack)  
aio1_glance_container-b038e088                RUNNING  10.255.255.15, 172.29.236.79, 172.29.245.107   -     YES (onboot, openstack)  
aio1_heat_apis_container-b2ae0207             RUNNING  10.255.255.245, 172.29.238.154                 -     YES (onboot, openstack)  
aio1_heat_engine_container-66b8dcd0           RUNNING  10.255.255.178, 172.29.237.64                  -     YES (onboot, openstack)  
aio1_horizon_container-41a63229               RUNNING  10.255.255.172, 172.29.237.139                 -     YES (onboot, openstack)  
aio1_horizon_container-84e57665               RUNNING  10.255.255.134, 172.29.237.102                 -     YES (onboot, openstack)  
aio1_keystone_container-3343a7c4              RUNNING  10.255.255.200, 172.29.237.65                  -     YES (onboot, openstack)  
aio1_keystone_container-f6d0fe97              RUNNING  10.255.255.142, 172.29.238.230                 -     YES (onboot, openstack)  
aio1_memcached_container-354ea762             RUNNING  10.255.255.177, 172.29.236.213                 -     YES (onboot, openstack)  
aio1_neutron_agents_container-9200183f        RUNNING  10.255.255.73, 172.29.237.208, 172.29.242.179  -     YES (onboot, openstack)  
aio1_neutron_server_container-b217eee3        RUNNING  10.255.255.30, 172.29.237.222                  -     YES (onboot, openstack)  
aio1_nova_api_metadata_container-5344e63a     RUNNING  10.255.255.161, 172.29.236.178                 -     YES (onboot, openstack)  
aio1_nova_api_os_compute_container-8b471ec2   RUNNING  10.255.255.80, 172.29.239.238                  -     YES (onboot, openstack)  
aio1_nova_cert_container-7a3b2fdc             RUNNING  10.255.255.126, 172.29.236.54                  -     YES (onboot, openstack)  
aio1_nova_conductor_container-6acd6a76        RUNNING  10.255.255.65, 172.29.239.80                   -     YES (onboot, openstack)  
aio1_nova_console_container-a8b545e4          RUNNING  10.255.255.251, 172.29.238.13                  -     YES (onboot, openstack)  
aio1_nova_scheduler_container-402c7f54        RUNNING  10.255.255.253, 172.29.237.74                  -     YES (onboot, openstack)  
aio1_rabbit_mq_container-80f2ac43             RUNNING  10.255.255.159, 172.29.239.200                 -     YES (onboot, openstack)  
aio1_rabbit_mq_container-8194fb70             RUNNING  10.255.255.4, 172.29.238.146                   -     YES (onboot, openstack)  
aio1_rabbit_mq_container-f749998a             RUNNING  10.255.255.36, 172.29.238.131                  -     YES (onboot, openstack)  
aio1_repo_container-27d433aa                  RUNNING  10.255.255.89, 172.29.237.156                  -     YES (onboot, openstack)  
aio1_repo_container-2d99ae62                  RUNNING  10.255.255.224, 172.29.238.71                  -     YES (onboot, openstack)  
aio1_rsyslog_container-1fa56f87               RUNNING  10.255.255.52, 172.29.236.243                  -     YES (onboot, openstack)  
aio1_swift_proxy_container-ff484b5c           RUNNING  10.255.255.11, 172.29.238.210, 172.29.247.147  -     YES (onboot, openstack)  
aio1_utility_container-18aff51d               RUNNING  10.255.255.186, 172.29.237.14                  -     YES (onboot, openstack)  
root@megatron:~# lxc-attach -n aio1_utility_container-18aff51d  
root@aio1_utility_container-18aff51d:~# source openstack/rc/openrc  
Please enter your OpenStack Password:  
root@aio1_utility_container-18aff51d:~# openstack flavor list  
+-----+----------------+-------+------+-----------+-------+-----------+
| ID  | Name           |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+-----+----------------+-------+------+-----------+-------+-----------+
| 01  | J1.MIC.5M.10XG |   512 |   10 |         0 |     1 | True      |
| 02  | J1.MAC.1G.20XG |  1024 |   20 |         0 |     1 | True      |
| 03  | J1.SML.2G.40XG |  2048 |   40 |         0 |     1 | True      |
| 04  | J1.MED.4G.100G |  4096 |  100 |         0 |     1 | True      |
| 05  | J1.LRG.4G.125G |  4096 |  125 |         0 |     1 | True      |
| 06  | J1.XLG.8G.150G |  8192 |  150 |         0 |     1 | True      |
| 07  | J2.MIC.2G.20XG |  2048 |   20 |         0 |     1 | True      |
| 08  | J2.MAC.4G.40XG |  4096 |   40 |         0 |     1 | True      |
| 09  | J2.SML.8G.80XG |  8192 |   80 |         0 |     1 | True      |
| 1   | m1.tiny        |   512 |    1 |         0 |     1 | True      |
| 10  | J2.MED.16.100G | 16384 |  100 |         0 |     1 | True      |
| 11  | J2.LRG.32.160G | 32768 |  160 |         0 |     2 | True      |
| 12  | J2.XLG.32.250G | 32768 |  250 |         0 |     2 | True      |
| 13  | J3.MIC.5M.40XG |   512 |   40 |         0 |     1 | True      |
| 14  | J3.MAC.1G.80XG |  1024 |   80 |         0 |     1 | True      |
| 15  | J3.SML.2G.100G |  2048 |  100 |         0 |     1 | True      |
| 16  | J3.MED.4G.150G |  4048 |  150 |         0 |     1 | True      |
| 17  | J3.LRG.8G.200G |  8192 |  200 |         0 |     1 | True      |
| 18  | J3.XLG.16.150G | 16384 |  150 |         0 |     1 | True      |
| 19  | J4.MIC.1G.10XG |  1024 |   10 |         0 |     1 | True      |
| 2   | m1.small       |  2048 |   20 |         0 |     1 | True      |
| 20  | J4.MAC.2G.20XG |  2048 |   20 |         0 |     1 | True      |
| 201 | tempest1       |   256 |    1 |         0 |     1 | True      |
| 202 | tempest2       |   512 |    1 |         0 |     1 | True      |
| 21  | J4.SML.4G.20XG |  4096 |   20 |         0 |     1 | True      |
| 22  | J4.MED.8G.40XG |  8192 |   40 |         0 |     2 | True      |
| 23  | J4.LRG.16.40XG | 16384 |   40 |         0 |     2 | True      |
| 24  | J4.XLG.32.40XG | 32768 |   40 |         0 |     4 | True      |
| 3   | m1.medium      |  4096 |   40 |         0 |     2 | True      |
| 4   | m1.large       |  8192 |   80 |         0 |     4 | True      |
| 5   | m1.xlarge      | 16384 |  160 |         0 |     8 | True      |
+-----+----------------+-------+------+-----------+-------+-----------+
root@aio1_utility_container-18aff51d:~#  

As you can see, everything is running in containers!

The Ansible Groups and Container Names

Now we really want to look at the next most important thing for you to understand: the Ansible groups. This is really important when you want to run tasks against your environment using Ansible (rather than doing things manually). If you want to use the containers manually, you can still do this! It’s your time to waste, not mine. If automation of tasks is desirable to you, then this is something you’ll want to understand better! Luckily, this project makes things incredibly easy for you. Just navigate to the /opt/openstack-ansible/ directory and run the script ./scripts/inventory-manage.py -G. An example of this is shown below.

    root@megatron:/opt/openstack-ansible# ./scripts/inventory-manage.py -G
     +--------------------------------+----------------------------------------------+
     | groups                         | container_name                               |
     +--------------------------------+----------------------------------------------+
     | aodh_container                 | aio1_aodh_container-38a780b7                 |
     | ceilometer_collector_container | aio1_ceilometer_collector_container-ed5bb27a |
     | utility_container              | aio1_utility_container-7b75ef4b              |
     | cinder_scheduler_container     | aio1_cinder_scheduler_container-69a98939     |
     | rsyslog                        | aio1_rsyslog_container-66ae2861              |
     | swift_proxy_container          | aio1_swift_proxy_container-c86ae522          |
     | nova_api_metadata              | aio1_nova_api_metadata_container-401c7599    |
     | neutron_server_container       | aio1_neutron_server_container-1ee5a4fd       |
     | nova_api_os_compute            | aio1_nova_api_os_compute_container-66728bd4  |
     | nova_cert                      | aio1_nova_cert_container-aabe52f6            |
     | pkg_repo                       | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     | neutron_agents_container       | aio1_neutron_agents_container-f16cc94c       |
     | nova_api_os_compute_container  | aio1_nova_api_os_compute_container-66728bd4  |
     | shared-infra_all               | aio1                                         |
     |                                | aio1_utility_container-7b75ef4b              |
     |                                | aio1_memcached_container-1fc8e6b0            |
     |                                | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     | ceilometer_api_container       | aio1_ceilometer_api_container-658df495       |
     | nova_console_container         | aio1_nova_console_container-ffec93bd         |
     | aio1_containers                | aio1_nova_conductor_container-97d030c5       |
     |                                | aio1_aodh_container-38a780b7                 |
     |                                | aio1_ceilometer_collector_container-ed5bb27a |
     |                                | aio1_horizon_container-9472f844              |
     |                                | aio1_horizon_container-73488867              |
     |                                | aio1_utility_container-7b75ef4b              |
     |                                | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     |                                | aio1_cinder_scheduler_container-69a98939     |
     |                                | aio1_nova_cert_container-aabe52f6            |
     |                                | aio1_swift_proxy_container-c86ae522          |
     |                                | aio1_neutron_server_container-1ee5a4fd       |
     |                                | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     |                                | aio1_glance_container-da1bd1a8               |
     |                                | aio1_neutron_agents_container-f16cc94c       |
     |                                | aio1_nova_api_os_compute_container-66728bd4  |
     |                                | aio1_ceilometer_api_container-658df495       |
     |                                | aio1_nova_api_metadata_container-401c7599    |
     |                                | aio1_memcached_container-1fc8e6b0            |
     |                                | aio1_cinder_api_container-d397a5b0           |
     |                                | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_nova_scheduler_container-b9885d7e       |
     |                                | aio1_rsyslog_container-66ae2861              |
     |                                | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_nova_console_container-ffec93bd         |
     |                                | aio1_heat_apis_container-cb9c8304            |
     |                                | aio1_heat_engine_container-4145b1be          |
     | neutron_server                 | aio1_neutron_server_container-1ee5a4fd       |
     | swift-proxy_all                | aio1                                         |
     |                                | aio1_swift_proxy_container-c86ae522          |
     | rabbitmq                       | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     | heat_api_cfn                   | aio1_heat_apis_container-cb9c8304            |
     | nova_scheduler_container       | aio1_nova_scheduler_container-b9885d7e       |
     | cinder_api                     | aio1_cinder_api_container-d397a5b0           |
     | metering-alarm_all             | aio1_aodh_container-38a780b7                 |
     |                                | aio1                                         |
     | neutron_metadata_agent         | aio1_neutron_agents_container-f16cc94c       |
     | keystone                       | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | nova_api_metadata_container    | aio1_nova_api_metadata_container-401c7599    |
     | ceilometer_agent_notification  | aio1_ceilometer_api_container-658df495       |
     | memcached                      | aio1_memcached_container-1fc8e6b0            |
     | nova_conductor_container       | aio1_nova_conductor_container-97d030c5       |
     | aodh_api                       | aio1_aodh_container-38a780b7                 |
     | nova_conductor                 | aio1_nova_conductor_container-97d030c5       |
     | neutron_metering_agent         | aio1_neutron_agents_container-f16cc94c       |
     | horizon                        | aio1_horizon_container-73488867              |
     |                                | aio1_horizon_container-9472f844              |
     | os-infra_all                   | aio1_nova_conductor_container-97d030c5       |
     |                                | aio1                                         |
     |                                | aio1_horizon_container-73488867              |
     |                                | aio1_horizon_container-9472f844              |
     |                                | aio1_nova_cert_container-aabe52f6            |
     |                                | aio1_glance_container-da1bd1a8               |
     |                                | aio1_nova_api_os_compute_container-66728bd4  |
     |                                | aio1_nova_api_metadata_container-401c7599    |
     |                                | aio1_nova_scheduler_container-b9885d7e       |
     |                                | aio1_nova_console_container-ffec93bd         |
     |                                | aio1_heat_apis_container-cb9c8304            |
     |                                | aio1_heat_engine_container-4145b1be          |
     | repo_container                 | aio1_repo_container-4fc3fb96                 |
     |                                | aio1_repo_container-0ad31d6b                 |
     | identity_all                   | aio1                                         |
     |                                | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | keystone_container             | aio1_keystone_container-a64a3cd3             |
     |                                | aio1_keystone_container-43955d1a             |
     | swift_proxy                    | aio1_swift_proxy_container-c86ae522          |
     | nova_cert_container            | aio1_nova_cert_container-aabe52f6            |
     | nova_console                   | aio1_nova_console_container-ffec93bd         |
     | aodh_alarm_notifier            | aio1_aodh_container-38a780b7                 |
     | utility                        | aio1_utility_container-7b75ef4b              |
     | glance_container               | aio1_glance_container-da1bd1a8               |
     | log_all                        | aio1_rsyslog_container-66ae2861              |
     |                                | aio1                                         |
     | memcached_container            | aio1_memcached_container-1fc8e6b0            |
     | cinder_api_container           | aio1_cinder_api_container-d397a5b0           |
     | aodh_alarm_evaluator           | aio1_aodh_container-38a780b7                 |
     | neutron_l3_agent               | aio1_neutron_agents_container-f16cc94c       |
     | ceilometer_collector           | aio1_ceilometer_collector_container-ed5bb27a |
     | rabbit_mq_container            | aio1_rabbit_mq_container-59b0ebdb            |
     |                                | aio1_rabbit_mq_container-7a901a26            |
     |                                | aio1_rabbit_mq_container-a5ca3d38            |
     | heat_api_cloudwatch            | aio1_heat_apis_container-cb9c8304            |
     | aodh_listener                  | aio1_aodh_container-38a780b7                 |
     | metering-infra_all             | aio1_ceilometer_collector_container-ed5bb27a |
     |                                | aio1                                         |
     |                                | aio1_ceilometer_api_container-658df495       |
     | heat_engine_container          | aio1_heat_engine_container-4145b1be          |
     | storage-infra_all              | aio1                                         |
     |                                | aio1_cinder_scheduler_container-69a98939     |
     |                                | aio1_cinder_api_container-d397a5b0           |
     | galera                         | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-f93d66b1               |
     |                                | aio1_galera_container-397db625               |
     | horizon_container              | aio1_horizon_container-9472f844              |
     |                                | aio1_horizon_container-73488867              |
     | neutron_agent                  | aio1_neutron_agents_container-f16cc94c       |
     | neutron_lbaas_agent            | aio1_neutron_agents_container-f16cc94c       |
     | heat_api                       | aio1_heat_apis_container-cb9c8304            |
     | glance_registry                | aio1_glance_container-da1bd1a8               |
     | ceilometer_agent_central       | aio1_ceilometer_api_container-658df495       |
     | galera_container               | aio1_galera_container-382c8874               |
     |                                | aio1_galera_container-397db625               |
     |                                | aio1_galera_container-f93d66b1               |
     | network_all                    | aio1                                         |
     |                                | aio1_neutron_server_container-1ee5a4fd       |
     |                                | aio1_neutron_agents_container-f16cc94c       |
     | glance_api                     | aio1_glance_container-da1bd1a8               |
     | neutron_dhcp_agent             | aio1_neutron_agents_container-f16cc94c       |
     | repo-infra_all                 | aio1_repo_container-4fc3fb96                 |
     |                                | aio1                                         |
     |                                | aio1_repo_container-0ad31d6b                 |
     | neutron_linuxbridge_agent      | aio1_neutron_agents_container-f16cc94c       |
     |                                | aio1                                         |
     | heat_engine                    | aio1_heat_engine_container-4145b1be          |
     | cinder_scheduler               | aio1_cinder_scheduler_container-69a98939     |
     | nova_scheduler                 | aio1_nova_scheduler_container-b9885d7e       |
     | ceilometer_api                 | aio1_ceilometer_api_container-658df495       |
     | rsyslog_container              | aio1_rsyslog_container-66ae2861              |
     | heat_apis_container            | aio1_heat_apis_container-cb9c8304            |
     +--------------------------------+----------------------------------------------+
     root@megatron:/opt/openstack-ansible#

You can see there are a lot of groups! I create a custom group for my own uses, and I’ll explain this better in the Operations section below. For now, I want to tell you more about Openstack services.

Openstack Services

Openstack has a lot of services to keep up with, and adding a lot of containers, groups, and other management responsibilities may not seem to help a whole lot. I can assure you that this has been made easy too.

What we’re going to do is cat and grep a file to figure out what services may be running on a particular Openstack node-type.

First, what we’re going to do is navigate to the same /opt/openstack-ansible directory we use all the time (you should start seeing a pattern here). Next, we want to list out the contents of the directory /opt/openstack-ansible/playbooks/roles/ and grep for anything containing os_.

    root@megatron:/opt/openstack-ansible# ls -las playbooks/roles/ | grep os_
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_aodh
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_ceilometer
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_cinder
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_glance
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_heat
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_horizon
     4 drwxr-xr-x  9 root root 4096 Feb  6 20:34 os_keystone
     4 drwxr-xr-x  9 root root 4096 Feb  6 20:34 os_neutron
     4 drwxr-xr-x  8 root root 4096 Feb  6 20:34 os_nova
     4 drwxr-xr-x  7 root root 4096 Feb  6 20:34 os_swift
     4 drwxr-xr-x  6 root root 4096 Feb  6 20:34 os_swift_sync
     4 drwxr-xr-x  6 root root 4096 Feb  6 20:34 os_tempest
     root@megatron:/opt/openstack-ansible#

Great! Now we have our Openstack node-types. Next, we want to grep the contents of /opt/openstack-ansible/playbooks/roles/os_<nodetype>/defaults/main.yml, as shown below.

    root@megatron:/opt/openstack-ansible# cat playbooks/roles/os_nova/defaults/main.yml | grep ": nova-"
     nova_program_name: nova-api-os-compute
     nova_spice_program_name: nova-spicehtml5proxy
     nova_novncproxy_program_name: nova-novncproxy
     nova_metadata_program_name: nova-api-metadata
     nova_cert_program_name: nova-cert
     nova_compute_program_name: nova-compute
     nova_conductor_program_name: nova-conductor
     nova_consoleauth_program_name: nova-consoleauth
     nova_scheduler_program_name: nova-scheduler

     root@megatron:/opt/openstack-ansible#

Notice the format closely: cat playbooks/roles/os_<nodetype>/defaults/main.yml | grep ": <nodetype>-", right down to the hyphen, because that’s really important. So what we’re concerned with are the following nova services:

    nova-api-os-compute
    nova-spicehtml5proxy
    nova-novncproxy
    nova-api-metadata
    nova-cert
    nova-compute
    nova-conductor
    nova-consoleauth
    nova-scheduler

So if you ever need to restart <nodetype>-<service>, you can connect to the aio1_<nodetype>_<service> container and perform a service <nodetype>-<service> start|stop|restart. Better yet, we can do this the Ansible way, as shown below.
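
Here is a quick sketch of both approaches, using the nova_scheduler group and container from the inventory listing earlier (the hash suffix will be different in your environment):

    # Directly inside the container
    lxc-attach -n aio1_nova_scheduler_container-b9885d7e -- service nova-scheduler restart

    # Or the Ansible way, targeting the dynamic inventory group
    cd /opt/openstack-ansible/playbooks
    ansible nova_scheduler -m shell -a "service nova-scheduler restart"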

Upgrading the Environment

When you’re ready to upgrade the environment (to the latest minor versions), perform the following steps.

First, you will need to update and synchronize your local repository with any upstream changes. Make sure to do this in the /opt/openstack-ansible/ directory.

    root@megatron:/opt/openstack-ansible# git fetch --all
    Fetching origin
    remote: Counting objects: 421, done.
    remote: Total 421 (delta 287), reused 288 (delta 287), pack-reused 133
    Receiving objects: 100% (421/421), 57.43 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (300/300), completed with 133 local objects.
    From https://github.com/openstack/openstack-ansible
     dca9d86..67ddf87 liberty -> origin/liberty
     4c8bba8..3770bb5 kilo -> origin/kilo
     cb007b0..191f4c3 master -> origin/master
     * [new tag] 11.2.9 -> 11.2.9
     * [new tag] 12.0.6 -> 12.0.6
    root@megatron:/opt/openstack-ansible#

After this has completed, you’ll see that the branches were updated and two new tags were created (in this case Kilo at 11.2.9 and Liberty at 12.0.6). What we need to do next is ‘check out’ the updated tag. NOTE: Updates can still exist within the same branch, and you will know this when you see a * [new tag] indicator.

     root@megatron:/opt/openstack-ansible# git checkout 12.0.6
     Note: checking out '12.0.6'.
     You are in 'detached HEAD' state. You can look around, make experimental
     changes and commit them, and you can discard any commits you make in this
     state without impacting any branches by performing another checkout.
     If you want to create a new branch to retain commits you create, you may
     do so (now or later) by using -b with the checkout command again. Example:
     git checkout -b new_branch_name
     HEAD is now at 972b41a... Merge "Update Defcore test list function" into liberty
     root@megatron:/opt/openstack-ansible#

Next, update RabbitMQ:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e rabbitmq_upgrade=true \
       rabbitmq-install.yml

Next, update the Utility Container:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e pip_install_options="--force-reinstall" \
       utility-install.yml

Finally, update all of the Openstack Services:

     root@megatron:/opt/openstack-ansible# openstack-ansible -e pip_install_options="--force-reinstall" \
        setup-openstack.yml

Make sure to check all of your services when you are done, to ensure that everything is running.
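
One low-effort sanity check (a sketch, reusing the ad-hoc Ansible pattern and the inventory groups listed earlier) looks like this:

    cd /opt/openstack-ansible/playbooks
    ansible nova_scheduler -m shell -a "service nova-scheduler status"
    ansible rabbit_mq_container -m shell -a "rabbitmqctl cluster_status"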

Operations and Advanced Topics

Adding Additional Ansible Roles to Openstack-Ansible

Adding additional Ansible roles is one of the best features of the Openstack-Ansible project! This is how you add/integrate Contrail/OpenContrail, Ironic, and other useful projects into the deployment. It can be done either on the initial run or by simply rerunning the Ansible build playbooks after your infrastructure is already up; either way is perfectly fine. To read more, follow the link to the documentation entitled: Extending Openstack-Ansible.

Instance Evacuation

Instances need to be available at all times, but what happens when hardware issues start to arise? Rebuilding instances on a healthy compute node is called “Instance Evacuation,” and it is documented here: Openstack Instance Evacuation.
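
For reference, once a compute node is down, the evacuation itself is a single nova call (the instance UUID and target host below are placeholders):

    # Rebuild the instance on another compute node; use --on-shared-storage if instance disks are shared
    nova evacuate --on-shared-storage <instance-uuid> <target-compute-host>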

Live Migration

Live Migration is a feature similar to VMware’s vMotion. It allows you to actively transfer an instance from one compute node to another with zero downtime. Configuration changes are required, as I found that Mirantis disables these features in their Kilo release (NOTE: I need to verify this wasn’t an installation error). See below for further details.
Openstack Live Migration.

Mirantis:

    root@node-titanic-88:~# cat /etc/nova/nova.conf | grep live
    #live_migration_retry_count=30
    #live_migration_uri=qemu+tcp://%s/system
    #live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
    #live_migration_bandwidth=0
    root@node-titanic-88:~#

Openstack-Ansible:

    root@aio1_nova_api_os_compute_container-66728bd4:~# cat /etc/nova/nova.conf | grep live
     live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED"
     root@aio1_nova_api_os_compute_container-66728bd4:~#

RDO-Openstack:

    [root@galvatron ~]# cat /etc/nova/nova.conf | grep live
    live_migration_uri=qemu+tcp://nova@%s/system
    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
    #live_migration_bandwidth=0
    #disable_libvirt_livesnapshot=true
    [root@galvatron ~]#
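
With the flags enabled (and libvirt reachable over TCP between compute nodes), kicking off a migration is a one-liner; the instance UUID and host name here are placeholders:

    # Move a running instance to another compute node
    nova live-migration <instance-uuid> <target-compute-host>
    # Add --block-migrate if the compute nodes do not share instance storage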

I’ll be back later to document more. For now, this is a pretty solid start though! Please email me if you have any questions or comments!

 

Openstack – Image format conversion

Summary :

How to convert images between formats for upload to glance

Converting images from one format to another is generally straightforward.

qemu-img convert: raw, qcow2, qed, vdi, vmdk, vhd

The qemu-img convert command can do conversion between multiple formats, including qcow2, qed, raw, vdi, vhd, and vmdk.

qemu-img format strings

    Image format        Argument to qemu-img
    QCOW2 (KVM, Xen)    qcow2
    QED (KVM)           qed
    raw                 raw
    VDI (VirtualBox)    vdi
    VHD (Hyper-V)       vpc
    VMDK (VMware)       vmdk

This example will convert a raw image file named image.img to a qcow2 image file.

$ qemu-img convert -f raw -O qcow2 image.img image.qcow2

Run the following command to convert a vmdk image file to a raw image file.

$ qemu-img convert -f vmdk -O raw image.vmdk image.img

Run the following command to convert a vmdk image file to a qcow2 image file.

$ qemu-img convert -f vmdk -O qcow2 image.vmdk image.qcow2
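
Similarly, since Hyper-V VHD files use the vpc driver name (see the table above), converting a VHD to qcow2 would look something like this:

$ qemu-img convert -f vpc -O qcow2 image.vhd image.qcow2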

Note

The -f format flag is optional. If omitted, qemu-img will try to infer the image format.

When converting an image file that contains Windows, ensure the VirtIO drivers are installed. Otherwise, you will get a blue screen when launching the image due to the missing VirtIO driver. Another option is to set the image properties as shown below when you update the image in glance to avoid this issue, but doing so will reduce performance significantly.

$ glance image-update --property hw_disk_bus='ide' image_id

VBoxManage: VDI (VirtualBox) to raw

If you’ve created a VDI image using VirtualBox, you can convert it to raw format using the VBoxManage command-line tool that ships with VirtualBox. On Mac OS X and Linux, VirtualBox stores images by default in the ~/VirtualBox VMs/ directory. The following example creates a raw image in the current directory from a VirtualBox VDI image.

$ VBoxManage clonehd ~/VirtualBox\ VMs/image.vdi image.img --format raw

Openstack – Tools to create glance images automatically

Summary

Tools to create openstack images in glance

Tool support for image creation

There are several tools that are designed to automate image creation.

Diskimage-builder

Diskimage-builder is an automated disk image creation tool that supports a variety of distributions and architectures. Diskimage-builder (DIB) can build images for Fedora, Red Hat Enterprise Linux, Ubuntu, Debian, CentOS, and openSUSE. DIB is organized in a series of elements that build on top of each other to create specific images.

To build an image, call the following script:

# disk-image-create ubuntu vm

This example creates a generic, bootable Ubuntu image of the latest release.

Further customization could be accomplished by setting environment variables or adding elements to the command-line:

# disk-image-create -a armhf ubuntu vm

This example creates the image as before, but for arm architecture. More elements are available in the git source directory and documented in the diskimage-builder elements documentation.
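
As a rough example of the environment-variable approach (assuming the ubuntu element reads DIB_RELEASE to pick the release, and -o names the output file):

# export DIB_RELEASE=trusty
# disk-image-create -o ubuntu-trusty ubuntu vm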

Oz

Oz is a command-line tool that automates the process of creating a virtual machine image file. Oz is a Python app that interacts with KVM to step through the process of installing a virtual machine. It uses a predefined set of kickstart (Red Hat-based systems) and preseed files (Debian-based systems) for operating systems that it supports, and it can also be used to create Microsoft Windows images. On Fedora, install Oz with yum:

# yum install oz

Note

As of this writing, there are no Oz packages for Ubuntu, so you will need to either install from the source or build your own .deb file.

A full treatment of Oz is beyond the scope of this document, but we will provide an example. You can find additional examples of Oz template files on GitHub at rackerjoe/oz-image-build/templates. Here’s how you would create a CentOS 6.4 image with Oz.

Create a template file (we’ll call it centos64.tdl) with the following contents. The only entry you will need to change is the <rootpw> contents.

<template>
  <name>centos64</name>
  <os>
    <name>CentOS-6</name>
    <version>4</version>
    <arch>x86_64</arch>
    <install type='iso'>
      <iso>http://mirror.rackspace.com/CentOS/6/isos/x86_64/CentOS-6.4-x86_64-bin-DVD1.iso</iso>
    </install>
    <rootpw>CHANGE THIS TO YOUR ROOT PASSWORD</rootpw>
  </os>
  <description>CentOS 6.4 x86_64</description>
  <repositories>
    <repository name='epel-6'>
      <url>http://download.fedoraproject.org/pub/epel/6/$basearch</url>
      <signed>no</signed>
    </repository>
  </repositories>
  <packages>
    <package name='epel-release'/>
    <package name='cloud-utils'/>
    <package name='cloud-init'/>
  </packages>
  <commands>
    <command name='update'>
yum -y update
yum clean all
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
    </command>
  </commands>
</template>

This Oz template specifies where to download the CentOS 6.4 install ISO. Oz will use the version information to identify which kickstart file to use. In this case, it will be RHEL6.auto. It adds EPEL as a repository and installs the epel-release, cloud-utils, and cloud-init packages, as specified in the packages section of the file.

After Oz completes the initial OS install using the kickstart file, it customizes the image with an update. It also removes any reference to the eth0 device that libvirt creates while Oz does the customizing, as specified in the command section of the XML file.

To run this:

# oz-install -d3 -u centos64.tdl -x centos64-libvirt.xml
  • The -d3 flag tells Oz to show status information as it runs.
  • The -u tells Oz to do the customization (install extra packages, run the commands) once it does the initial install.
  • The -x flag tells Oz what filename to use to write out a libvirt XML file (otherwise it will default to something like centos64Apr_03_2013-12:39:42).

If you leave out the -u flag, or you want to edit the file to do additional customizations, you can use the oz-customize command, using the libvirt XML file that oz-install creates. For example:

# oz-customize -d3 centos64.tdl centos64-libvirt.xml

Oz will invoke libvirt to boot the image inside of KVM, then Oz will ssh into the instance and perform the customizations.

VMBuilder

VMBuilder (Virtual Machine Builder) is a command-line tool that creates virtual machine images for different hypervisors. The version of VMBuilder that ships with Ubuntu can only create Ubuntu virtual machine guests. The version of VMBuilder that ships with Debian can create Ubuntu and Debian virtual machine guests.

The Ubuntu Server Guide has documentation on how to use VMBuilder to create an Ubuntu image.
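
A typical invocation (treat this as a sketch, since flags vary between VMBuilder versions) builds a KVM guest and registers it with libvirt:

$ sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 \
  -o --libvirt qemu:///system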

VeeWee

VeeWee is often used to build Vagrant boxes, but it can also be used to build KVM images.

Packer

Packer is a tool for creating machine images for multiple platforms from a single source configuration.

imagefactory

imagefactory is a newer tool designed to automate building, converting, and uploading images to different cloud providers. It uses Oz as its back end and includes support for OpenStack-based clouds.

KIWI

The KIWI OS image builder provides an operating system image builder for various Linux supported hardware platforms as well as for virtualization and cloud systems. It allows building of images based on openSUSE, SUSE Linux Enterprise, and Red Hat Enterprise Linux. The openSUSE Documentation explains how to use KIWI.

SUSE Studio

SUSE Studio is a web application for building and testing software applications in a web browser. It supports the creation of physical, virtual or cloud-based applications and includes support for building images for OpenStack based clouds using SUSE Linux Enterprise and openSUSE as distributions.

virt-builder

Virt-builder is a tool for quickly building new virtual machines. You can build a variety of VMs for local or cloud use, usually within a few minutes or less. Virt-builder also has many ways to customize these VMs. Everything is run from the command line and nothing requires root privileges, so automation and scripting is simple.

To build an image, call the following script:

# virt-builder fedora-23 -o image.qcow2 --format qcow2 \
  --update --selinux-relabel --size 20G

To list the operating systems available to install:

$ virt-builder --list

To import it into libvirt with virt-install:

# virt-install --name fedora --ram 2048 \
  --disk path=image.qcow2,format=qcow2 --import

Openstack – Create Windows image

Summary

This procedure creates a Windows Server 2012 qcow2 image using the virt-install command and the KVM hypervisor.

  1. Follow these steps to prepare the installation:

    1. Download a Windows Server 2012 installation ISO. Evaluation images are available on the Microsoft website (registration required).

    2. Download the signed VirtIO drivers ISO from the Fedora website.

    3. Create a 15 GB qcow2 image:

      $ qemu-img create -f qcow2 ws2012.qcow2 15G
      
  2. Start the Windows Server 2012 installation with the virt-install command:

    # virt-install --connect qemu:///system \
      --name ws2012 --ram 2048 --vcpus 2 \
      --network network=default,model=virtio \
      --disk path=ws2012.qcow2,format=qcow2,device=disk,bus=virtio \
      --cdrom /path/to/en_windows_server_2012_x64_dvd.iso \
      --disk path=/path/to/virtio-win-0.1-XX.iso,device=cdrom \
      --vnc --os-type windows --os-variant win2k8
    

    Use virt-manager or virt-viewer to connect to the VM and start the Windows installation.

  3. Enable the VirtIO drivers.

    The disk is not detected by default by the Windows installer. When requested to choose an installation target, click Load driver and browse the file system to select the E:\WIN8\AMD64 folder. The Windows installer displays a list of drivers to install. Select the VirtIO SCSI and network drivers and continue the installation.

    Once the installation is completed, the VM restarts. Define a password for the administrator when prompted.

  4. Log in as administrator and start a command window.

  5. Complete the VirtIO drivers installation by running the following command:

    C:\pnputil -i -a E:\WIN8\AMD64\*.INF
    
  6. To allow Cloudbase-Init to run scripts during instance boot, set the PowerShell execution policy to be unrestricted:

    C:\powershell
    C:\Set-ExecutionPolicy Unrestricted
    
  7. Download and install Cloudbase-Init:

    C:\Invoke-WebRequest -UseBasicParsing http://www.cloudbase.it/downloads/CloudbaseInitSetup_Stable_x64.msi -OutFile cloudbaseinit.msi
    C:\.\cloudbaseinit.msi
    

    In the configuration options window, change the following settings:

    • Username: Administrator
    • Network adapter to configure: Red Hat VirtIO Ethernet Adapter
    • Serial port for logging: COM1

    When the installation is done, in the Complete the Cloudbase-Init Setup Wizard window, select the Run Sysprep and Shutdown check boxes and click Finish.

    Wait for the machine to shut down.

Your image is ready to upload to the Image service:

$ glance image-create --name WS2012 --disk-format qcow2 \
  --container-format bare --file ws2012.qcow2