PowerShell Remoting allows you to run individual PowerShell commands or access full PowerShell sessions on remote Windows systems. It’s similar to SSH for accessing remote terminals on other operating systems.

PowerShell is locked down by default, so you’ll have to enable PowerShell Remoting before using it. The setup process is a bit more complex if you’re using a workgroup – for example, on a home network – instead of a domain.

Enabling PowerShell Remoting

On the computer you want to access remotely, open a PowerShell window as Administrator – right-click the PowerShell shortcut and select “Run as Administrator”.

To enable PowerShell Remoting, run the following command (known as a cmdlet in PowerShell):

Enable-PSRemoting -Force

This command starts the WinRM service, sets it to start automatically with your system, and creates a firewall rule that allows incoming connections. The -Force part of the command tells PowerShell to perform these actions without prompting you for each step.


Workgroup Setup

If your computers aren’t on a domain – say, if you’re doing this on a home network – you’ll need to perform a few more steps. First, run the Enable-PSRemoting -Force command on the computer you want to connect from, as well. (Remember to launch PowerShell as Administrator before running this command.)

On both computers, configure the TrustedHosts setting so the computers will trust each other. For example, if you’re doing this on a trusted home network, you can use this command to allow any computer to connect:

Set-Item wsman:\localhost\client\trustedhosts *

To restrict the computers that can connect, replace the * with a comma-separated list of IP addresses or computer names.
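For example, a restricted list allowing only two specific machines might look like this (the addresses are illustrative):

```
Set-Item wsman:\localhost\client\trustedhosts "192.168.1.10,192.168.1.20"
```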

On both computers, restart the WinRM service so your new settings will take effect:

Restart-Service WinRM


Testing the Connection

On the computer you want to access the remote system from, use the Test-WSMan cmdlet to test your configuration. This cmdlet checks whether the WinRM service is running on the remote computer – if it completes successfully, you’ll know that WinRM is enabled and the computers can communicate with each other. Run the following, replacing COMPUTER with the name of your remote computer:

Test-WSMan COMPUTER
If the command completes successfully, you’ll see information about the remote computer’s WinRM service in the window. If the command fails, you’ll see an error message instead.


Executing a Remote Command

To run a command on the remote system, use the Invoke-Command cmdlet. The syntax of the command is as follows:

Invoke-Command -ComputerName COMPUTER -ScriptBlock { COMMAND } -Credential USERNAME

COMPUTER represents the computer’s name, COMMAND is the command you want to run, and USERNAME is the username you want to run the command as on the remote computer. You’ll be prompted to enter a password for the username.

For example, to view the contents of the C:\ directory on a remote computer named Monolith as the user Chris, we could use the following command:

Invoke-Command -ComputerName Monolith -ScriptBlock { Get-ChildItem C:\ } -Credential chris


Starting a Remote Session

Use the Enter-PSSession cmdlet to start a remote PowerShell session, where you can run multiple commands, instead of running a single command:

Enter-PSSession -ComputerName COMPUTER -Credential USER
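For example, a session against the Monolith machine from the examples above might look like this (host and user names are illustrative; note how the prompt changes to show the remote computer’s name):

```
Enter-PSSession -ComputerName Monolith -Credential chris
[Monolith]: PS C:\Users\chris\Documents> hostname
Monolith
[Monolith]: PS C:\Users\chris\Documents> Exit-PSSession
```

Exit-PSSession (or simply exit) ends the remote session and returns you to your local prompt.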


Winexe – Run Windows Commands from Linux


winexe remotely executes commands on Windows NT/2000/XP/2003 systems from GNU/Linux.



First, install the build dependencies:

$ sudo aptitude install build-essential autoconf checkinstall \
 python python-all python-dev python-all-dev python-setuptools libdcerpc-dev

Installation of winexe

$ cd ~/src/
$ wget http://downloads.sourceforge.net/project/winexe/winexe-1.00.tar.gz
$ tar xzvf winexe-1.00.tar.gz
$ cd winexe-1.00/source4/
$ ./autogen.sh
$ ./configure
$ make basics bin/winexe
$ ./bin/winexe -V
Version 4.0.0alpha11-GIT-UNKNOWN



Usage: winexe //host command
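Putting the pieces together, a small POSIX-shell helper can assemble the winexe command line for review before running it against a real host (the host, credentials, and remote command below are made up):

```shell
#!/bin/sh
# Assemble (but do not execute) a winexe invocation.
# Arguments: host, [DOMAIN/]user%password, remote command.
build_winexe_cmd() {
    host="$1"; creds="$2"; cmd="$3"
    printf 'winexe -U %s //%s "%s"\n' "$creds" "$host" "$cmd"
}

# Illustrative values only:
build_winexe_cmd 192.168.1.20 'WORKGROUP/Administrator%secret' 'ipconfig /all'
# → winexe -U WORKGROUP/Administrator%secret //192.168.1.20 "ipconfig /all"
```

Remember that passing the password on the command line exposes it to other local users via the process list; the -A, --authentication-file option is safer.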


Common options

--uninstall
Uninstall winexe service after remote execution
--reinstall
Reinstall winexe service before remote execution
--system
Use SYSTEM account
--runas=[DOMAIN\]USERNAME%PASSWORD
Run as user (BEWARE: password is sent in cleartext over the network)
--runas-file=FILE
Run as user options defined in a file
--interactive=0|1
Desktop interaction: 0 – disallow, 1 – allow. If you allow it, also use the --system switch (a Windows requirement). Vista does not support this option.
--ostype=0|1|2
OS type: 0 – 32-bit, 1 – 64-bit, 2 – winexe will decide. Determines which version (32-bit/64-bit) of the service will be installed.

Help and version options

-?, --help
Show this help message
--usage
Display brief usage message
-V, --version
Print version

Common samba options

-d, --debuglevel=DEBUGLEVEL
Set debug level
--debug-stderr
Send debug output to STDERR
-s, --configfile=CONFIGFILE
Use alternative configuration file
--option=name=value
Set smb.conf option from command line
-l, --log-basename=LOGFILEBASE
Basename for log/debug files
--leak-report
Enable talloc leak reporting on exit
--leak-report-full
Enable full talloc leak reporting on exit

Connection options

-R, --name-resolve=NAME-RESOLVE-ORDER
Use these name resolution services only
-O, --socket-options=SOCKETOPTIONS
Socket options to use
-n, --netbiosname=NETBIOSNAME
Primary netbios name
-S, --signing=on|off|required
Set the client signing state
-W, --workgroup=WORKGROUP
Set the workgroup name
--realm=REALM
Set the realm name
-i, --scope=SCOPE
Use this Netbios scope
-m, --maxprotocol=MAXPROTOCOL
Set max protocol level

Authentication options

-U, --user=[DOMAIN\]USERNAME[%PASSWORD]
Set the network username
-N, --no-pass
Don’t ask for a password
-A, --authentication-file=FILE
Get the credentials from a file
-P, --machine-pass
Use stored machine account password (implies -k)
--simple-bind-dn=DN
DN to use for a simple bind
-k, --kerberos=STRING
Use Kerberos

NTP – Troubleshooting Windows Time Service (w32time) Synchronization

Hi! You have probably landed here while searching for a solution to your NTP synchronization problem, right? This page is a good place to start: below is a list of the most common causes of NTP time-sync trouble. Check which one applies to you and follow the proposed steps to get the time service working. We hope you will find these guidelines helpful!

Firewall Port Opening

NTP Port is UDP 123

Open Control Panel -> Windows Firewall -> Advanced settings. If the firewall is on, you have to enable Inbound and Outbound Rules for “Specific local ports” – in our case UDP, port 123.
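The same rule can also be created from an elevated command prompt using netsh; a sketch (the rule names are arbitrary):

```
netsh advfirewall firewall add rule name="NTP in" dir=in action=allow protocol=UDP localport=123
netsh advfirewall firewall add rule name="NTP out" dir=out action=allow protocol=UDP remoteport=123
```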

NTP From the Command Line

  • To make any w32time changes from the command line, you have to run the cmd program as administrator.
  • Enter the following command:

w32tm /config /manualpeerlist:[server],0x8 /syncfromflags:MANUAL

The actual IP address of the NTP server or its host name must be entered instead of [server]. The flag 0x8 forces w32time not to send “symmetric active” packets but normal “client” requests, which the NTP server replies to as usual. Then the following command can be used to immediately make the changes effective:

w32tm /config /update
If this command completes successfully, your system clock has been configured to synchronize to the given NTP server. To check it, open the Date and Time window (click the “time” icon in the lower-right corner of the desktop) -> Change date and time settings -> Internet Time. You should see something similar to Figure 6.

Alternatively, the w32time service can be restarted:

net stop w32time
net start w32time

The command:

net time /querysntp
can be used to check the configuration. The output should look similar to the line below:

The current SNTP value is:[server],0x8

If the w32time service is restarted, it immediately sends a request to the NTP server. Additionally, the command:

w32tm /resync
can be used to let w32time send a request.
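On Windows Vista/Server 2008 and later, the result can also be verified with w32tm’s query subcommands:

```
w32tm /query /status
w32tm /query /peers
```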

Check that the w32time service is running:

Open Run > services.msc and locate the Windows Time service – verify that it is started. If needed, change its startup type to Manual or Automatic as per your requirement.

Open Source Puppet – Quick Start


This post aims to be your quickest guide to get started with Puppet. We’ll be using the open source version of Puppet. An hour of spare time and two Ubuntu machines (physical or virtual, it doesn’t matter) are all that is needed.

Quick Introduction

Let’s say you want to install and run an Apache server on one of the machines in your lab. On another, you want to create a new user. On a third machine, you want to install MySQL, and allow access to this machine only from the first server. Seems like a lot of manual work, doesn’t it? The power of Puppet is that you can specify all these tasks in a file, called a ‘Puppet manifest’, and then execute it. Everything will be set up for you just as you wanted! Now what makes this ‘I care about the end result, not the process’ approach really powerful is that you can ‘apply’ this manifest over and over again to get the same end result. You can easily modify this manifest file, extend it, and manage it under version control, just like you would with a piece of software. Welcome to the world of IT automation 🙂

Although the syntax of a Puppet manifest is Ruby-ish, no knowledge of Ruby is required at all (I don’t know Ruby).

There are a whole lot of things you can do with Puppet. Here, we’ll just get started with it. Once you are through this post, you can head over to Puppet Labs’ documents and tutorials for more on the “how”s and “why”s of Puppet.


You just require two Ubuntu machines connected to each other. One will be the Puppet ‘master’ node (the machine which will take care of managing the configuration and state of all the machines in our deployment); the other will be the ‘agent’ (which unfortunately is the only actual managed machine in this demo deployment 🙂 ).

Here I am using two virtual machines, but you can create one virtual machine and use your host machine as the other one. The hostnames of the master and slave in my setup are puppet-master and puppet-agent.

Make sure both machines are ping-able from each other – by their IPs as well as their hostnames (e.g. ping puppet-agent and ping puppet-master). Make sure your /etc/hosts file looks something like this to achieve that:

(Below, <master-ip> and <agent-ip> stand in for the IP addresses of the externally-visible interfaces of the hosts puppet-master and puppet-agent respectively.)


r@puppet-master:~$ cat /etc/hosts
127.0.0.1    localhost
<master-ip>  puppet-master
<agent-ip>   puppet-agent


r@puppet-agent:~$ cat /etc/hosts
127.0.0.1    localhost
<agent-ip>   puppet-agent
<master-ip>  puppet-master

Getting our hands dirty – Puppet CLI

Install the puppetmaster package on the master node:

sudo apt-get install puppetmaster

List all the users on the current system:

puppet resource user --list

So basically a ‘user’ is a ‘resource’ in Puppet terminology. Now let’s list only a specific resource – r is the current user in my case.

r@puppet-master:~$ puppet resource user r
user { 'r':
  ensure  => 'present',
  comment => 'r,,,',
  gid     => '1000',
  groups  => ['adm', 'cdrom', 'sudo', 'dip', 'plugdev', 'lpadmin', 'sambashare'],
  home    => '/home/r',
  shell   => '/bin/bash',
  uid     => '1000',
}


Notice the syntax. The resource ‘r’ is of type ‘user’, with ensure, comment, etc. as keys/attributes, and ‘present’, ‘r,,,’, etc. as values for those attributes.

You can change a value using the Puppet CLI:

r@puppet-master:~$ sudo puppet resource user r comment='some text missing'
notice: /User[r]/comment: comment changed 'r,,,' to 'some text missing'
user { 'r':
  ensure  => 'present',
  comment => 'some text missing',
}


Create a new user with specified key-value pairs:

r@puppet-master:~$ sudo puppet resource user katie ensure=present shell=/bin/bash
notice: /User[katie]/ensure: created
user { 'katie':
  ensure => 'present',
  shell  => '/bin/bash',
}


r@puppet-master:~$ sudo puppet resource user katie
user { 'katie':
  ensure           => 'present',
  gid              => '1001',
  home             => '/home/katie',
  password         => '!',
  password_max_age => '99999',
  password_min_age => '0',
  shell            => '/bin/bash',
  uid              => '1001',
}


Now remove the newly created user – but this time, let’s put the information into a file katie_remove.pp and ask Puppet to ‘apply’ this file, thus removing the user ‘katie’.

r@puppet-master:~$ cat katie_remove.pp
user { 'katie':
  ensure => absent,
}


Apply this Puppet manifest:

r@puppet-master:~$ sudo puppet apply katie_remove.pp

warning: Could not retrieve fact fqdn

notice: /Stage[main]//User[katie]/ensure: removed

notice: Finished catalog run in 0.47 seconds

Puppet’s description of user ‘katie’:

r@puppet-master:~$ sudo puppet resource user katie
user { 'katie':
  ensure => 'absent',
}


is now the same as that of a non-existent user:

r@puppet-master:~$ sudo puppet resource user absent-user
user { 'absent-user':
  ensure => 'absent',
}


That is, the user ‘katie’ is now deleted. You can see that the ensure attribute can be used to make sure a user (or, in general, any resource) is present or absent.

Note: Ignore the warning which is printed while applying a manifest from a file. Or, if you are bothered by it popping up all the time, change the master’s line in the /etc/hosts file from

<master-ip>   puppet-master

to

<master-ip>   puppet-master.pandy.com puppet-master

where you can choose a domain name of your own in place of .pandy.com.

Puppet modules

Note: puppet module doesn’t work on Precise (Ubuntu 12.04) – you need to install Ruby, gems, etc., which is too much of a hassle. So I’ll just post commands here which work on a later version of Ubuntu.

Install standard library:

sudo puppet module install puppetlabs/stdlib

View all the installed modules

r@puppet-master:~$ sudo puppet module list
/etc/puppet/modules
├── puppetlabs-mysql (v2.2.1)
├── puppetlabs-ntp (v3.0.2)
└── puppetlabs-stdlib (v4.1.0)
/usr/share/puppet/modules (no modules installed)

All the modules, and all other information in the system, go in the /etc/puppet directory.

Note: Modules installed via sudo will be visible only when you run puppet module list with sudo as well; the same holds for non-sudo use.

Puppet in master-client configuration

Everything we did so far concerned a single machine. Let’s now introduce another machine – the Puppet agent.

Note that you need to set FQDNs for both machines – see the step above, where we suppressed a warning.

First, we’ll need to install puppet package (the agent) on the agent node.

sudo apt-get install puppet

By default, the Puppet agent service will not be running.

r@puppet-agent:~$ sudo service puppet status

* agent is not running

Before starting it, change START=no to START=yes in the /etc/default/puppet file, so that the agent service starts by default when the system starts/reboots:

sudo sed -i s/START=no/START=yes/g /etc/default/puppet

And add these two lines at the end of /etc/puppet/puppet.conf to allow the agent to discover the master (the [agent] section header is the standard location for this setting):

[agent]
server = puppet-master

Now start the Puppet agent service

r@puppet-agent:~$ sudo service puppet start

* Starting puppet agent                                   [ OK ]

I also make sure that the clocks of both machines are synchronized by running ntpdate on both master and agent. I am not sure if this is required, but it doesn’t do any harm:

sudo ntpdate pool.ntp.org

Now the master needs to sign the agent’s certificate. Execute this command on the agent node:

sudo puppet agent --test --waitforcert 60

Now hop over to the master node and retrieve the list of certs waiting to be signed:

r@puppet-master:~$ sudo puppet cert --list

"puppet-agent.pandy.com" (EB:0F:E4:14:6F:B2:7E:85:7E:21:26:C4:78:80:58:E1)

Sign the cert

r@puppet-master:~$ sudo puppet cert sign puppet-agent.pandy.com

notice: Signed certificate request for puppet-agent.pandy.com

notice: Removing file Puppet::SSL::CertificateRequest puppet-agent.pandy.com at ‘/var/lib/puppet/ssl/ca/requests/puppet-agent.pandy.com.pem’

Now we are ready to go. Let’s create a file (a ‘Puppet manifest’) on the master in which we state that:

1. We want the apache package to be installed.
2. Once the package is installed, we want the apache service to be started.

We’ll name the file site.pp, which is the ‘main’ configuration file for Puppet, and put it into the /etc/puppet/manifests directory. Note how we can specify a dependency between resources.

package { 'apache2':
  ensure => installed,
}

service { 'apache2':
  ensure  => true,
  enable  => true,
  require => Package['apache2'],
}


Puppet works on a ‘pull’ model: configurations are pulled by agents at periodic intervals, with a default interval of 30 minutes. Alternatively, you can pull from the agent at will, any time. Let’s do that now. Execute this command on the agent:
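The pull interval is configurable on the agent; a sketch of the relevant setting in /etc/puppet/puppet.conf (the value is in seconds, and 1800 shown here just matches the 30-minute default):

```
[agent]
runinterval = 1800
```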

r@puppet-agent:~$ sudo puppet agent --test

info: Caching catalog for puppet-agent.pandy.com

info: Applying configuration version ‘1397343482’

notice: /Stage[main]//Package[apache2]/ensure: ensure changed ‘purged’ to ‘present’

notice: Finished catalog run in 6.30 seconds

And you can see the apache server running!

r@puppet-agent:~$ sudo service apache2 status

Apache2 is running (pid 5874).

Ta! Da!

Please comment if you have any ideas to make this post easier for the newbies to understand.



This is just a quick start guide. There are excellent resources and docs at puppetlabs.com. I have their beginner’s PDF saved in my Dropbox. Around 80 pages long, it covers almost every aspect of basic Puppet. The only problem with that guide is that it is (I think deliberately) made to work only with the Puppet Enterprise version, but you can always refer back to this post to see how to set up the open source version 🙂

If you mess up the cert signing process, here is a quick and dirty way to get it resolved. On master:

sudo puppet cert clean puppet-agent.pandy.com

On both master and agent:

sudo rm -r /var/lib/puppet/ssl
sudo service puppet restart

That’s all – the Puppet setup is done. Thanks for reading!

OpenStack with DevStack

Getting started

Install git

sudo apt-get install git

Clone the DevStack repository onto your computer and cd into it. This is the code which will set up the cloud for you.

git clone http://github.com/openstack-dev/devstack
cd devstack/

If you do an ls, you will see the stack.sh, unstack.sh, and rejoin-stack.sh files in there. These are the most important files.

r@ra:~/devstack$ ls
accrc         exercises         HACKING.rst  rejoin-stack.sh  tests
AUTHORS       exercise.sh       lib          run_tests.sh     tools
clean.sh      extras.d          LICENSE      samples          unstack.sh
driver_certs  files             localrc      stackrc
eucarc        functions         openrc       stack-screenrc
exerciserc    functions-common  README.md    stack.sh

The file stack.sh is the most important of them all. Running this script will:

1. Pull OpenStack code from all of its important projects’ repositories and put it in the /opt/stack directory (this directory is configurable).
2. Install all the dependencies these OpenStack projects have – both Ubuntu packages and Python “pip” packages.
3. Start all the OpenStack services with a default configuration.

Bringing down the DevStack-created cloud is easy too – just invoke the unstack.sh script, and all the services are brought down again, freeing up the memory that these services consume. I’ll talk about rejoin-stack.sh in a while. Let’s get started before I start writing at length again 🙂

Execute the stack.sh script

r@ra:~/devstack$ ./stack.sh

This value will be written to your localrc file so you don't have to enter it 
again.  Use only alphanumeric characters.
If you leave this blank, a random default value will be used.
Enter a password now:

You need to enter the MySQL database password here. Don’t worry if you have not installed MySQL on this system – just provide a password here and the script will install MySQL and use this password for it.

As you can see, MySQL is where all the important data is stored by different OpenStack components. You can peep into the database later if you want to see what data is stored, etc.

Also, note the first line of the prompt above. If the stack.sh script finishes successfully, all the inputs you specify (this one, and four more after it) will be written to a file named localrc. All the local configuration settings pertaining to the DevStack environment go in this file. I’ll provide you with details of them very soon. Have patience 🙂

For the other four prompts, enter ‘nova’. Just use ‘nova’ for this MySQL prompt too if MySQL is not installed yet.

You will see that the script now starts spewing a lot of output on your screen. It is downloading all the required code, packages, dependencies, etc., and setting everything up for us – databases, services, network, configurations, message queues. Pretty much everything. The first time, the script might take about 30 minutes, but that depends on the speed of your Internet connection and the processing power of your machine. From the next time on, it can provide you with a cloud in less than 10 minutes!

If the script ends with something like this:

+ merge_config_group /home/r/devstack/local.conf post-extra
+ local localfile=/home/r/devstack/local.conf
+ shift
+ local matchgroups=post-extra
+ [[ -r /home/r/devstack/local.conf ]]
+ return 0
+ [[ -x /home/r/devstack/local.sh ]]
+ service_check
+ local service
+ local failures
+ SERVICE_DIR=/opt/stack/status
+ [[ ! -d /opt/stack/status/stack ]]
++ ls '/opt/stack/status/stack/*.failure'
++ /bin/true
+ failures=
+ '[' -n '' ']'
+ set +o xtrace

Horizon is now available at
Keystone is serving at
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: nova
This is your host ip:
stack.sh completed in 269 seconds.

That means your machine is now home to a Cloud! 🙂

The host IP shown is the IP of my first network interface. Don’t worry about that for now.

So now you can head over to my post Cinder on DevStack – Quick Start to get started with creating volumes (persistent storage in the cloud) with Cinder – OpenStack’s block-storage project. In that guide, you will also create a virtual machine, so it is a good start to OpenStack. But let’s get back to our current scope.

You can type the host IP provided by the script into your browser to access the dashboard, ‘Horizon’. Log into it using the username ‘admin’ or ‘demo’ and the password ‘nova’. (For simplicity’s sake, let’s just assume there are two users who are allowed to access this cloud – one has all the administrative privileges, and the other one is just a normal user.)

You can view all the process logs inside screen by typing:

screen -x

Head over to Linux Screens in DevStack for more information on how to work with screen.

Housekeeping and customizations

In your life as an OpenStack developer, you will be setting up and destroying DevStack instances quite a number of times, so it is good to know how to do that in the most efficient manner.

Just as the stack.sh script is used to set up DevStack, unstack.sh is used to destroy the DevStack setup. Running it will kill all the services, BUT it will not delete any of the code. If you want to bring down all the services manually, just do a:

sudo killall screen

Note that this will just kill all the processes which were running – the ones whose logs you could see inside screen. unstack.sh does some cleanup as well, along with killing the processes.

If you had previously run ./stack.sh but have brought down the environment, you can bring it back up by executing the rejoin-stack.sh script.

NOTE: The DevStack environment doesn’t persist across reboots!

So you need to bring the DevStack environment back up manually every time you reboot. This is where using a virtual machine comes in handy: you can take a snapshot of the virtual machine, and then go back to it when you want a clean DevStack environment.

Nonetheless, the best way to reboot is: first execute unstack.sh to bring down the currently running DevStack instance. Then reboot, and when your machine comes up again, run rejoin-stack.sh. If you don’t run unstack.sh, you will need to execute stack.sh again to bring the environment up.

localrc configurations

localrc is the file where all the local configurations (local = your local machine) are kept.

After the first successful stack.sh run, you will see that a localrc file has been created with the configuration values you specified while running the script.

$ cat localrc 

Sometimes you will forget to unstack and will reboot the machine. Then you will find that running stack.sh will again do an apt-get update, check for all the packages, and so on.

If you specify the option OFFLINE=True in the localrc file inside the devstack directory and then run stack.sh, it will not check anything over the Internet and will set up DevStack using the packages and code already residing on your machine. Setting up DevStack with this config option gives you a running cloud in the shortest amount of time (after rejoin-stack.sh – but you had already forgotten to do unstack.sh, right 🙂 ).

Note that stack.sh checks whether the git repositories of the OpenStack projects are present in the /opt/stack/ directory. If they are, it will not fetch any newer code into them from GitHub. But if any of the directories (say, nova) is absent, it will pull the latest code into the newly created nova directory inside /opt/stack.

What if you want to get the latest code into all the OpenStack repositories inside /opt/stack? Just specify a RECLONE=yes parameter in localrc, and rerun ./stack.sh. This comes particularly handy when you are developing new code.

NOTE: Keep in mind that while developing code, you need to commit your local changes in, say, the /opt/stack/nova repository before you restack (re-run stack.sh) with the RECLONE=yes option, as otherwise the changes will be wiped out. Save yourself from a rude shock. You have been warned.

The configuration options RECLONE=yes and OFFLINE=True work against each other, so use only one of them at a time in localrc.

If you have more than one network interface, you can specify which one to use for the external IP using the HOST_IP option in localrc.
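For reference, a minimal localrc might look like the sketch below (all values are illustrative; the passwords are whatever you entered at the stack.sh prompts):

```
MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
# Reuse local code and packages; skip Internet checks:
OFFLINE=True
# Which interface IP to use when the host has several:
HOST_IP=192.168.42.11
```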


Developing code

If you want to immediately test your code changes by running them inside DevStack, you need to make the changes in the code and restart the affected services.

For example, let us say you are making code changes in nova. Once you are done making the changes, go to the screen session and restart all the services which start with “n-”. If you are sure that only one of the Nova services is affected, just restart that one. And if you don’t know which one to restart, it is safe to restart all of them.

To restart a service, go to its respective screen window and press CTRL+C. Then press the up arrow once to get the last command – the one which started this service – and press ENTER.

Final words

Note that this guide just gets you started with OpenStack using DevStack. OpenStack, and the cloud in general, is not only about virtual machines or volumes or networks. It is a philosophy and a complete paradigm shift, and as such it is impossible for me to cover all of it here. Your quest to know more about it has just started. Keep reading more and more about it and I guarantee you will be fascinated by its limitless possibilities.

This post is written keeping in mind that it will be consumed by a newbie to OpenStack development. If you are one of those benefiting from this guide, I would love it if you could provide me with suggestions to improve this post, and any feedback you have about it.

Now you can go to the DevStack website 🙂