Thursday, February 26, 2015

Doing something simple with Docker

A couple posts back I walked you through setting up an Ubuntu VM, and installing the latest version of Docker.  Then, I left you hanging.

Docker is interesting.  These container things are a cross between a VM and a user process.  There is still a base OS there (of some type) to bootstrap the application.  What I thought was interesting when I first poked at Docker is that each container is its own network isolation zone.

Docker has a great tutorial: https://www.docker.com/tryit/

And that was fine.  I did something.  But I really didn't understand it until I tried to really use it.

What does it take to get an application into an image and run it?  And what about this Docker Hub that is chock full of images, and this Dockerfile thing - what are those?

Let's begin with an easy example as we get the language down.

I want to run a Docker container.  I want the OS in this Docker container to be Ubuntu (yes, Ubuntu within Ubuntu).

Returning to my Ubuntu VM from before, I log on as my user and try a couple of Docker commands:
sudo docker images - this lists the Docker images that have been built or downloaded to this machine, and these images are used to run containers.

Notice that language - a container is a running instance of an image.  The image is analogous to a virtual disk with something in it.  The image consumes space on the disk of my Ubuntu VM.
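
If you want to see something show up in that images list right away, you can pull an image down explicitly before ever running it (a small aside; 'ubuntu:latest' is just the image I use throughout this post):

sudo docker pull ubuntu:latest    # download the image from the Docker Hub
sudo docker images                # the ubuntu image should now appear in the list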

sudo docker ps - If you have been around Linux before you have run across ps - processes.  The docker ps command lists containers, and since containers are processes, by default only the ones that are currently running show up.
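
A quick aside on that (the -a flag is a standard docker option, nothing exotic): add -a when you want to see the containers that have exited as well as the running ones.

sudo docker ps        # running containers only
sudo docker ps -a     # all containers, stopped ones included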

Enough of that, let's get confusing and run an instance of Ubuntu on Ubuntu in the same command window where I ran my Docker commands (the key here: watch the bouncing command prompt).

sudo docker run -i -t ubuntu:latest /bin/bash

Like most commands, let's read this from right to left (not left to right).
Run a BASH shell, in the image 'ubuntu:latest', with a tty allocated (-t), keeping STDIN open (-i) so you can send input.
What this accomplishes: the image is checked for locally, and if it is not there it is pulled from the Hub.  Then a tty is opened in the console session (where I ran the command) and bash is run inside the container.
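
If the single-letter flags feel cryptic, docker also accepts long-form flags that read a little closer to the right-to-left description above (the same command, just spelled out):

sudo docker run --interactive --tty ubuntu:latest /bin/bash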

Notice when you do this that your prompt changed at your console.  That console window is now the container process.  What you do now is inside the container process and image.

If you really want to realize that you are somewhere else, type ifconfig at the prompt.  By default you should get a class B private address in the 172 range.  There will be more later on this, but right now that container can get out, but there are no incoming ports open to it.
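
As a small preview of that networking story (the port numbers here are hypothetical, just to show the shape of the flag): when you do want traffic coming in, you publish a port at run time with -p, mapping a port on the Ubuntu VM to a port inside the container.

sudo docker run -i -t -p 8080:80 ubuntu:latest /bin/bash    # VM port 8080 maps to container port 80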

When you are ready to get out of the container, type exit.
In this case that actually stops the container, since exiting closes the tty and ends bash - the only process the container was running.
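
Stopped is not gone, though.  A quick sketch of getting back into that same container (the container id placeholder is whatever docker ps -a reports for yours):

sudo docker ps -a                        # find the id of the stopped container
sudo docker start -a -i <container-id>   # start it again and re-attach to the bash prompt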

Monday, February 23, 2015

Migrating VMs from Hyper-V 2008 or 2008 R2 to Hyper-V 2012 R2

There has been a recent explosion of questions around this in the Hyper-V TechNet forum over the past two weeks.  So I decided that I would blog about this a bit.

The primary question is: How can I migrate from Hyper-V 2008 (or Hyper-V 2008 R2) to Hyper-V 2012 R2?

There are lots of very well-meaning folks that provide the advice of: "Export your VMs from your 2008 era Hyper-V Server and then import those VMs to your 2012 R2 Hyper-V Server."

Obviously, they never tested this.  Because, IT DOES NOT WORK.

First of all, let's test this common advice: export a VM from Hyper-V 2008 / 2008 R2 and import it directly to Hyper-V 2012 R2.

  1. Create your VM export
  2. Copy the export folder to a Hyper-V 2012 R2 system
  3. Attempt to import
You will instantly get this:  "Hyper-V did not find virtual machines to import from location"
And you look, and everything is right there in that folder.  What gives?

The next piece of well-meaning advice is to create a new VM configuration using the existing VHD in that export folder.
(This will work, but if you have snapshots you are screwed - all of that snapshot history is lost, and lots of folks connect to the incorrect virtual disk and freak out that years of history were lost.)
 
If you were going to do this in the first place, why not just copy out the VHDs, save yourself some effort, and be done with it?  This is viable option 1.

Here is the option that many folks overlook or are not aware of (as it was a new feature of Hyper-V 2012 R2):

Copy the VM folder direct from the Hyper-V 2008 R2 system to the Hyper-V 2012 R2 system and Import. 

Hyper-V 2012 R2 reads the XML configuration and imports the VM asking you a couple questions to fix things up. 
This is viable option 2 (actually the easiest if you have additional hardware with Hyper-V 2012 R2 already built).

We could stop there, but not to be left without choices: you can in-place upgrade from your Hyper-V 2008 / 2008 R2 era system to Hyper-V 2012 and then again to Hyper-V 2012 R2.  This will update the VM configurations as you go, and you will be all good.  Now we have a viable option 3.

Suppose that all you have is a VM Export.  Then what? 
Remember that error message at the beginning; Hyper-V 2012 R2 cannot read the VM export from Hyper-V 2008 / 2008 R2.  Now, we have other options.

Take your VM folder that you exported from your Hyper-V 2008 R2 system and copy it to a Hyper-V 2012 system.  Then import.  Success!

Now what?  You want Hyper-V 2012 R2.  You have a few viable options to take this from Hyper-V 2012 to Hyper-V 2012 R2: 

In-place upgrade the Hyper-V 2012 system to Hyper-V 2012 R2.  This is viable option 4.
Export the VMs, then import them to your Hyper-V 2012 R2 system.  This is viable option 5.

Thinking out of the box, are there other options?

I am always assuming that you have backups of your systems.  And you have tested restoring those backups, and you know those backups are indeed good and useful.  This gives another option. 
Restore your VMs to the Hyper-V 2012 R2 system as new VMs.  This becomes viable option 6.

There you have it.  Six options to test and choose from, all of which are considered supported, and any of which will save you the panic of realizing that going straight from a Hyper-V 2008 / R2 VM export to 2012 R2 will not work.

Thursday, February 12, 2015

Docker on Ubuntu on Hyper-V 2012 R2

I recently read through an MSDN article that described running Docker in a VM on Hyper-V.

Frankly, I was less than impressed at the complexity of the solution.  Especially since the concept here is not a huge leap.

The basic steps are:
  1. Build a VM on Hyper-V
  2. Install Docker into that VM
  3. Run containers in that VM
This achieves a couple things.
  • Your Docker containers are isolated within a VM. 
This is actually an important thing.  Docker has its own networking stack, but it also allows exposing the VM's underlying storage to the containers to support things like databases and configurations, or even updating source easily.
The model here is one VM per tenant.  Thus forming that boundary and still getting the flexibility of both containers and VMs.
  • You can run the OS of your choice.
In my experimentation I have been using Ubuntu.  Partly because it has good support, but primarily because it stays right up to date with the kernel.  This gives me the latest Hyper-V support within that VM (a quick way to check that is just below).
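
A minimal sanity check from inside the Ubuntu VM (assuming a reasonably current kernel; the hv_* names are the standard Hyper-V driver modules in the mainline kernel):

uname -r           # show the running kernel version
lsmod | grep hv    # the Hyper-V drivers (hv_vmbus, hv_netvsc, hv_storvsc, and friends) should be loaded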

So, you want to set up Docker in a VM.  There are a few steps, as I am outlining this in gory detail.  Here it goes:

  1. Install Ubuntu in the VM (14.04 LTS Server or 14.10)
  2. Add OpenSSH Server
  3. Determine IP
  4. Connect over SSH
  5. Update
    1. sudo apt-get update
  6. Upgrade the components (aka patch the OS)
    1. sudo apt-get upgrade -y
  7. Add the Docker gpg key (that is a capital 'O' in '-qO-', not a zero)
    1. sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -" 
  8. Add the Docker repository to the apt sources list
    1. sudo sh -c "echo deb http://get.docker.io/ubuntu docker main >> /etc/apt/sources.list.d/docker.list"
  9. Update the local apt repository after adding the docker reference
    1. sudo apt-get update
  10. Install (latest) Docker (on 12/15/14 this is 1.4.0)
    1. sudo apt-get install lxc-docker -y
Now you are ready to play with the magic of Containers.
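
Before going further, a quick smoke test that the daemon is actually installed and running (a sketch; the ubuntu image is just a convenient thing to pull for the test):

sudo docker version                             # client and daemon version information
sudo docker run -i -t ubuntu:latest /bin/bash   # pulls the ubuntu image and drops you into a shell; type exit to leave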
 

Tuesday, February 10, 2015

I am tired of the Azure vs AWS argument

I had the pleasure this morning of catching up on some email and ran across a scathing comment about Azure.  How horrible it was, how it could never catch up, how it was all around inferior.  The context was in comparison to AWS.

Now, I have to say - I have never (not once) done anything with AWS.  But, I have talked to a lot of folks that love AWS and hate Azure (lots).  And I counter that with the fact that I have been working with legacy enterprise software on Azure since 2011.  And, those of us that work with Azure know that it is never done and has constantly been evolving.

I am not here to say that Azure is better, I am here to say it is different.  And if you can't accept that a platform is different, learn about its differences, and adapt as you need to, then you are being the bigot.

Since this is an AWS thread, then the context here is Infrastructure as a Service.  And most folks that I know that hold AWS up as better are folks that have no desire to change the legacy ways and thinking that they already know.

I still find myself describing the MSFT model of VM Templates and machine composition, which SCVMM introduced in 2010: the concept of taking a generalized OS image and specializing it on deployment, and storing those properties as separate things, thus granting the ability to re-use, mix and match, and so forth.

SCVMM further extended this concept in 2012 with the introduction of Service Templates - now you can group machines, customize application tiers, and even install applications and thus build out an entire distributed enterprise application.  With one OS disk image.

I prototyped this with XenDesktop - building out a scalable deployment.  No custom templates for each role, no pre-installation of any software - it all happens on the fly. 

MSFT has been moving in this direction of machine composition - layering settings and applications onto an OS at deployment - since 2008.  Azure PaaS has done it forever and SCVMM brought the concept to the enterprise and features of Azure IaaS keep it moving.

Desired State Configuration is the latest supporting feature that enables this (and more).  I have a resource for XenDesktop to use with that as well.

My point: things are different now in IT than they were 5 or 10 years ago.  And the models and whitepapers and testing and legacy applications need to change along with it.

Now, back to AWS and Azure.  The only arguments I ever hear are two: firewalling rules, and the composition / deployment process of AWS.

MSFT is on the way to handling the deployment stuff.  Firewalling?  That is a lazy argument in my mind.  I have invested time in hardening machines, properly setting firewall rules in the OS, IPsec rules, and the like.  This is harder than setting rules at the network layer, but just as effective.

Someone has to choose which platform to get into bed with.  And if the software folks can't get past the traditional datacenter style deployment to a modern cloud model of software development - then maybe the platform is not the problem.  Maybe the issue is a lack of openness to new ideas and new ways of looking at old problems.

Monday, February 2, 2015

Testing and Checking in the cloud service era

Services.  DevOps. Cloud Services. Rapid prototyping. Start-up methodologies. Agile development.

These are all terms that we hear in the software business.  And they generally all point to what some folks consider a reduction in testing.  And what many customers consider a reduction in overall quality.

If you follow some of the big names in the software testing world (Michael Bolton, Rex Black, James Bach, and others) one theme that resonates with all of this is testing vs. checking.

And in my experience, what I see happening in this cloud service era and the rise of DevOps is that there has been a shift away from 'testing' and a greater focus on 'checking'.
Some say that this makes the developer more accountable, instead of them blaming test for missing it.  I won't argue with that - quality was always the developer's responsibility anyway.

In my words, checking is more like unit testing.  Did this thing respond in the way it was designed to?  Everything is positive, everything is looking for the intended desired outcome.

Testing, on the other hand, is checking plus looking at and driving appropriate failures.  Not only did this thing do the positive action, but when I send a negative action - what happens?  Did it fall over, did it respond with a proper error, did something totally unexpected happen?  And then there are other studies: load, scaling, fuzzing, chaos monkey, etc.

In fact, I would argue that in this cloud service world, this shift to a greater emphasis on checking is actually a bad thing.  It is good for development in that it gets more code out the door, and in theory more features and quicker fixes.  However, it ends up making for very fragile applications, and I know of few cloud applications that don't have a high number of dependencies on other cloudy services.

Now, why does this matter?  Because, customers have an emotional connection to your product.  Not a factual relationship with it.

Any small issue that blows up into a large issue, or a planned two hour outage that becomes a 4 to 8 hour outage, impacts the feelings of the customer in regards to your service.  This impacts their perception of your quality, as the customer sees quality as a value judgment.  No different than excellent service at a restaurant: the more expensive the restaurant, the better everything must be.

This gets messy in the cloud service era.  Because of pricing competition.  The model is closer to that of a bank.  If a customer only consumes one of your services, it is easy to switch.  So you upsell them with more and more services and possibly take a loss in doing it.  This generates a type of lock-in as it becomes more and more difficult for a customer to leave and go somewhere else.  The tipping point.

Anyone who has been involved in IT purchasing decisions knows that software is brought into the enterprise through some corporate division.  Then it gets handed to IT to evaluate.  And only in very rare cases does IT actually get to give feedback or get the corporate division to consider alternate options.
The other side of this is when IT is the one doing the evaluation and making the choice.  When that is the case, the judgment is all about the getting-started experience - if I have to crack a manual to get started, something is not right.

Again, checking focuses on the binary functions of the software, whereas testing should be looking at the overall experience.
Just something to keep in mind in this rapid software era.