Wednesday, July 22, 2015

Docker Containers to Images

I am still learning containers and what Docker gives you to aid in managing them.
It seems that each time I revisit doing things with containers I discover something new and curiously wonderful.

I totally understand why folks get excited about containers, why developers love them so much, why operations folks should totally love Hyper-V containers, and, and...  There I go getting excited.

If you have read my previous posts on containers, I tried to relay some conceptual ideas of what a 'container' actually is.  It is not a process, it is not a VM, it is not a session.  It is a little bit of all of them, which is what makes describing a container not a straightforward thing.

And, you will recall that a container is the running state of an image.  And an image is a flat file structure that represents the application and everything it needs to run.

A container is more than a running copy of an image.  It is that image, plus all the settings you gave when you told Docker to run it - create container 'foo' from image 'bar' with all of these settings.

The Docker tutorials really don't cover this well.  They just toss you out there and say: pull this image, run a container from it, look - you did it.  Conceptually, there is a lot happening that Docker abstracts away, saves for you, and manages for you (which is why folks have caught on to Docker).

All of those settings that you give are meta information that defines that container.

After you run that container (with all of those settings defined) you can simply stop it.  Then when you start it later, all of those run parameters that you defined are magically applied out of the configuration - you never have to define all of those parameters again.

If you then stop your container and then commit that container to a new image, all of that meta information is saved.

If you inspect a container or an image you can see all of this meta information that defines what happens when that container is started or that image is run.

Then, if you share this image and someone else runs it, they get all of your defined configuration applied.

Let me put all of this together with a simple walkthrough.

First: run a container.
sudo docker run -it ubuntu:latest /bin/bash

Breaking that command back apart:
run an instance of a container (with a random name and id), interactively, using the Ubuntu image (from the Docker Hub) of the latest version, then run the bash shell application.
The prompt that you get back is root@<container id>.

Second: stop that container
sudo docker stop <container id>

While that container ran, anything you did was persisted within its file system.

Third: list all containers
sudo docker ps -a

The container is the runtime process.  By default docker ps lists only running containers; to see the ones that are not running you add the -a (all) switch.

Fourth: start that container
sudo docker start <container id>

Notice that the image and the command to run did not have to be defined.  But I did not define how to connect to the process - that is what -it did on the run command - so the container is now running in the background.  Stop it again and start it with the attach and interactive switches before the container id (sudo docker start -ai <container id>) and you are back in.

Then stop it again before the next step.

If you want to see that your commands are in there, just inspect the container.
sudo docker inspect <container id>
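
The inspect output is a big JSON document.  If you want just one property - say, the command that will run on start - inspect also takes a Go template through the --format switch (the property path here is a sketch; browse your full inspect output for the exact names):
sudo docker inspect --format '{{ .Config.Cmd }}' <container id>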

Fifth: commit that container to an image
sudo docker commit <container id> <name>:<version>

Now, you can create duplicates of your container by running container instances of your image.
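For example (using the image name from the commit above - and because the saved meta information includes the command, you do not have to pass /bin/bash again):
sudo docker run -it <name>:<version>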

And, you can use inspect against images as well.
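For example, against the image we just committed:
sudo docker inspect <name>:<version>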

And there you have it.

In the next container post, I am going to use a real world application in a container and discuss configurations and variables.


Tuesday, July 21, 2015

Disabling that XfinityWifi home hot spot wireless interference

I got into wireless networking troubleshooting a few years back as a Field Technical Advisor for the FIRST Tech Challenge with the FIRST organization.

The system that they had at the time (recently replaced) was based on 802.11b using a relatively low powered device on the robot that was named Samantha.  John Toebes was the brain behind Samantha, and at the time she was a revolutionary step over using Bluetooth to control the FTC robots.

Being 802.11b based, she would work with most any 2.4 GHz router on the market (in theory - and there are interesting reasons why she didn't work with them all).  The other thing about 802.11b is that it came early in the wireless standards, before Wi-Fi got really popular and the ways of dealing with interference got much smarter.

As the spectrum becomes more crowded, signals get pushed out.  If someone is streaming something (like video), that signal actually gets precedence in the local airwaves.  In other words, activity in the local airspace interferes with other activity in the local airspace.

Why am I digressing so far from the title of the post?  I recently read through an article by Xfinity: "10 ways you could be killing your home Wi-Fi signal"

It has a number of harmless suggestions, such as: get your router off the floor, put it on the floor of your home where there is the most use, don't put it in windows or behind your TV, etc.

All of the suggestions are about maintaining line of sight with your router.  Frankly, advice that we gave long before home Wi-Fi routers got to the great performance levels that they are at today.

Not once do they mention interference from other wireless signals.  Maybe because they (Xfinity) create one of the biggest problems with their own xfinitywifi open access point.

I have had all kinds of trouble with Xfinity wireless throughput since they started this open wifi program.  I have changed routers, purchased my own cable modems, moved up to low-end professional equipment, replaced the splitters on the cable, used dielectric grease on the outside cable junctions, etc.

I got the performance to the point where, when I was wired, I got the full throughput that we paid for.  But as soon as I went wireless I got one quarter of the throughput.  It made no sense, especially since we used to have far better throughput on wireless.

Since I run my own router, I don't use the open wifi connection that Xfinity forces on you.  Needless to say, I just don't trust them.

Believe it or not, they let you turn that off yourself.  So you can be sure that your neighbors are not sponging off the bandwidth that you pay good money for (they can be beholden to the great Comcast too if they really want broadband).

Anyway, thanks for reading all this.  But I know what you really want is this link: http://customer.xfinity.com/help-and-support/internet/disable-xfinity-wifi-home-hotspot

And just in case they move it or something else, I am going to copy it as well:

  1. Navigate to https://customer.xfinity.com/WifiHotspot. This site can also be reached by following these steps: 
    • Navigate to the My Services section of My Account.
    • Under the XFINITY Internet tab, click the Manage your home hotspot link.
  2. A new window appears indicating, "If you choose to enable your XFINITY WiFi Hotspot feature, a separate network called ‘xfinity wifi’ will be created for your guests - at no additional charge. Never give out your home network password again, so your private WiFi network will always remain secure. Learn more".
  3. Under the Manage XFINITY WiFi Home Hotspot option, if your wireless gateway is enabled with the Home Hotspot feature, the Enable my XFINITY WiFi Home Hotspot feature radio button will be pre-selected. If your Home Hotspot feature is disabled, the Disable my XFINITY WiFi Home Hotspot feature radio button will be pre-selected.
  4. To enable or disable the feature, choose the Enable my XFINITY WiFi Home Hotspot feature radio button or the Disable my XFINITY WiFi Home Hotspot feature radio button.
  5. Click Save.
    • Disabling the feature takes effect within a few minutes.
    • However, enabling the device will take up to 24 hours.
  6. You will be presented with a confirmation message at the top of the My Services page that says, "Thank you! Your hotspot has now been disabled."

Monday, July 13, 2015

Identifying and running workflows with the Octoblu API and PowerShell

If you are not familiar with Octoblu: it is an IoT messaging system, a protocol translation system, and a message transformer, all rolled into one product.

Since last year I have been spending quite a bit of my time with their systems and platform.
Everything in their system is a device: your user account, a software agent (such as the one we demonstrated on stage during the Synergy 2015 day 2 keynote), each node that you see in the designer, even a running workflow.  They are all devices and they all have messages bouncing around between them.

One thing that I have come to rely on is their workflows.  I use the flows as a message router / message translator.
By that I mean that I formulate a JSON message and send it to some endpoint (I frequently send a REST message to a trigger using PowerShell).  And the flow will do something with that message - it might change it, filter it, or route it to one or many other devices (seen in the flow designer as 'nodes').

All that said, I will probably post about sending and transposing messages later.  It is actually one of the fundamental things that any device does in the IoT world.
I am pretty loose with the concept of what a 'device' is: it can be the Arduino that I have on my desk that runs Microblu, it can be a Node.js application that I run on my workstation, it can be a personal interface to Github (the Github node in Octoblu).  A device is anything that can either send or receive a message.

Back to the title of this post.

I just finished running a long duration test against a 'device' and during this test I wanted to ensure that my workflow remained running.

When you begin to rely on workflows you realize that it is a cloud service and things happen. Sometimes flows get knocked offline.
Over time I have dreamed up a couple of approaches to evaluating flows from a 'health' perspective.  One of them (my v1) I am using as the base for this post.

This is a really simple approach: make an API call that determines if a flow is running.
If it isn't running, I start it.  Simple as that.

The complexity comes from two different APIs being involved, as there are two different systems at play within the service.
There is the Octoblu API - this is the Octoblu designer and the GUI and those pretty things that you visually interact with.
And there is the Meshblu API - this guy is the messaging meat of the infrastructure.  He handles routing, security, and devices.  When a flow is run for the first time it becomes instantiated over on Meshblu and becomes a device of the ecosystem.

The code is in my Github Octoblu PowerShell repo here: https://github.com/brianehlert/OctoPosh
The particular script behind this post is: "FlowWatcher.ps1"

Though I have comments in my script, allow me to describe a bit more of what is happening.

Invoke-RestMethod -URI ("http://meshblu.octoblu.com/devices/" + $flowId) -Headers $meAuthHeader -Method Get

This is a Meshblu API call to fetch the properties of an individual device.  Note the $flowId GUID string in the URI path.  Leave that GUID out and you get back an array of all of the devices that you 'own'.
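
Note the $meAuthHeader on each call: Meshblu authenticates you with your user device's uuid and token sent as HTTP headers.  Here is a minimal sketch of building that header - the exact header names are my assumption (the service has also used skynet_auth_* style names), so verify against FlowWatcher.ps1 in the repo:

$myUuid  = '<your Octoblu user uuid>'    # placeholder
$myToken = '<your Octoblu user token>'   # placeholder
$meAuthHeader = @{
    meshblu_auth_uuid  = $myUuid   # header name is an assumption - check the repo
    meshblu_auth_token = $myToken  # header name is an assumption - check the repo
}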

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId) -Headers $meAuthHeader -Method Get

This is an Octoblu API call to fetch an individual flow / workflow.  Just as happens if you open one in the designer, you get all of its properties.

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId + "/instance") -Headers $meAuthHeader -Method Post

This is another Octoblu API call to start a flow.  What happens is that a flow device instance gets instantiated in Meshblu (this way it can receive messages).  This is why I call the Meshblu API to see if it is 'running'.

Invoke-RestMethod -URI ("https://app.octoblu.com/api/flows/" + $flowId + "/instance") -Headers $meAuthHeader -Method Delete

This is another Octoblu API call, this time to stop a flow.  What it does is delete the running instance of the flow device.  If you query this particular device in Meshblu (after you have run it once) you will find it in Meshblu, but it may not be running.  If it is running, it is a little process within the infrastructure; when not running, it is still defined as a device.
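
Putting those calls together, here is a minimal sketch of the v1 watcher logic.  It assumes $flowId and $meAuthHeader are already defined, and treating the device's online property as the 'running' indicator is my assumption - see FlowWatcher.ps1 for the real implementation:

# fetch the flow's device record from Meshblu
$result = Invoke-RestMethod -Uri ("http://meshblu.octoblu.com/devices/" + $flowId) -Headers $meAuthHeader -Method Get

# 'online' as the running flag is an assumption - inspect your actual response
if (-not $result.devices[0].online) {
    # not running - ask the Octoblu API to start (instantiate) the flow
    Invoke-RestMethod -Uri ("https://app.octoblu.com/api/flows/" + $flowId + "/instance") -Headers $meAuthHeader -Method Post
}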

I hope you find the script and this little API tutorial to be useful.

Thursday, July 9, 2015

Indexing Format-Table output

Today, I had the crazy idea of outputting an array in PowerShell as a table, and I wanted to show the index of each array value.

In layman's terms: I wanted my table output to be line numbered.  And I wanted the line numbers to correspond to the position in the array.

Why? Because I didn't want the user to type in a name string or a GUID string that they might typo; they could simply enter the index of the item(s).
Trying to solve potential problems up front, without a bunch of error handling code.

I started out with a PowerShell array that looked something like this:

PS> $allFlows | ft name, flowId -AutoSize

name                     flowId
----                     ------
bjeDemoFlow_working      70bd3881-8224-11e4-8019-f97967ce66a8
bje_cmdblu               3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74
Pulser_WorkerFlow_Sample f945f94f-fb33-4181-864d-042548497270
Flow d59ae1e8            d59ae1e8-0220-4fd2-b40f-fba971c9cf42
bjeConnectTheDots.io     204b5897-2182-4aef-84fe-1251f1d4943b
StageFlow_1              796d0ff4-94d6-4d1a-b580-f83ab98c7e15
Flow f26aab2f            f26aab2f-783b-4c09-b1fc-9e6433e8ab37
Flow c983c204            c983c204-5a87-4947-9bd2-435ac727908a
v2VDA Test               ba5f77af-98d1-4651-8c35-c502a72ccea8
Demo_WorkerFlow          e7efdac4-663d-4fb6-9b29-3a13aac5fb97


Now for the strange part.  How do I number the lines in a way that they correspond to each item's position in the array?

Search did not fail me today, but it took a bit of effort to discover an answer on Stack Overflow from PowerShell MVP Keith Hill.
And, also looking at Get-Help Format-Table -Examples and realizing that there is an 'expression' option to calculate the value of a field in the table output.

PS> $allFlows | ft @{Label="number"; Expression={ [array]::IndexOf($allFlows, $_) }}, name, flowId -AutoSize

number name                     flowId
------ ----                     ------
0      bjeDemoFlow_working      70bd3881-8224-11e4-8019-f97967ce66a8
1      bje_cmdblu               3e155fe0-dc9a-11e4-9dfc-f7587e2f6b74
2      Pulser_WorkerFlow_Sample f945f94f-fb33-4181-864d-042548497270
3      Flow d59ae1e8            d59ae1e8-0220-4fd2-b40f-fba971c9cf42
4      bjeConnectTheDots.io     204b5897-2182-4aef-84fe-1251f1d4943b
5      StageFlow_1              796d0ff4-94d6-4d1a-b580-f83ab98c7e15
6      Flow f26aab2f            f26aab2f-783b-4c09-b1fc-9e6433e8ab37
7      Flow c983c204            c983c204-5a87-4947-9bd2-435ac727908a
8      v2VDA Test               ba5f77af-98d1-4651-8c35-c502a72ccea8
9      Demo_WorkerFlow          e7efdac4-663d-4fb6-9b29-3a13aac5fb97


The values for the column are defined as a hashtable @{ } with the Label of the column and the Expression that defines the value.
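
And as a quick usage sketch (the $choice variable here is hypothetical), the number the user enters indexes straight back into the array:

PS> $choice = Read-Host "Enter the number of the flow"
PS> $allFlows[[int]$choice].flowId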

Pretty nifty new trick to add to my repertoire.


Wednesday, July 1, 2015

A tale of containers

Containers.  It is a word that we keep hearing about lately.
And in the popular vernacular a container refers to a "Docker-style container".

You say: "But Docker doesn't do containers"  And, you are right.
These containers are what was originally known as (and still are) LXC containers and everyone associates with Docker
Docker is not the container technology, Docker is container management and a container ecosystem.  They only made containers easy.

Now, in the virtualization world folks have used this container word for a long time.  It has been used to describe the isolation models themselves.
I really wish we had a better word for this type of container, other than 'container'.

With the modern Windows OS we have:
  • Process containers: this is a process.  It runs in its own memory space, it inherits a security context from either a user or the system, and it shares all aspects of the OS resources.  If it has a TCP listener, the port must be unique so it does not conflict with others, it has to use RAM nicely or it overruns other processes, and so on.
  • Session containers: This is a user session.  Enabled by the multi-user kernel.  A session is a user security context container and within it are processes.  The user is the security boundary.
  • Machine containers: this is a virtual machine.  It can be likened to a bare metal installation.  It is very heavyweight in that it is an entire installation.  Within it run session containers and process containers.  It is a very hard security boundary.  It has a networking stack, and it does not share resources (file system, RAM, CPU), but it can consume shared resources when running on a hypervisor.

Now, 'container' containers.

A container is a bounded process that can contain processes. 
A container is a file system boundary. 
And, a container has its own networking stack.
A container shares the kernel and other processes with the machine on which it runs.

The processes in one container cannot see the processes in another container.
Container processes interact with each other through the networking stack, just like applications on different machines are required to.

But, to be technical with the language, only the running process is a 'container'.  When it is not running, it is a container image.
And a container image is similar to an OS image.  It has the file system, the applications, and everything they need - everything except the kernel, which (as noted above) is shared with the host.

Now let's complex-ify all of this.

Linux currently has one type of container, LXC.

Windows is actually introducing two types of containers.
  • Windows containers - this type of container runs like a process on your workstation.  It consumes your available RAM and CPU and a folder full of files.  It smells like any application process, except: it has a network stack, it cannot directly interact with other processes, and it can only see its folder on the file system.  It is a process in a can.  Hence, container.
  • Hyper-V containers - this type of container is just like the one above but with a more solid isolation boundary.  It gets the benefit of hypervisor CPU and RAM management (fair share), so it is forced to play well as a process.  And it meets isolation compliance standards just like a VM does.  No shared OS; the image contains the kernel.
The difference between the two is only the container (remember that a container only exists while a container image is running).  You could think of the difference as two runtime options for a container image.  You can run it at your Windows workstation (Ms. Developer) or you can deploy it to a Hyper-V server (Mr. Operations).  Between the two, there is a fit for your deployment requirements.

Images are another interesting aspect of containers.

If you have played with installing an application with Docker (such as creating a Docker build file) you begin with a base OS image (preferably from a trusted source, such as Canonical for Ubuntu).  Then you layer on OS settings, application downloads, and installations.
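
Here is a minimal sketch of such a build file (a Dockerfile) - the nginx application and its config file are just my stand-in examples, not anything from a real deployment:

# start from the trusted base OS layer
FROM ubuntu:latest
# layer on an application download and installation
RUN apt-get update && apt-get install -y nginx
# layer on a settings change (nginx.conf is a hypothetical file)
COPY nginx.conf /etc/nginx/nginx.conf
# what runs when a container is started from this image
CMD ["nginx", "-g", "daemon off;"]

Each instruction in the build file adds another layer to the image.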

In the end, you have this image.  And this image is made up of chained folders, similar to the idea of checkpoints (VM snapshots or differencing disks).

However, in the container world, it is files and a file system - no virtual block devices, as is said in virtualization circles.  A virtual block device is a representation of a hard drive's block layout.  It is literally raw blocks, just like a hard drive.

Now, does this mean that since Canonical produces a 'docker' image for Ubuntu, that Microsoft will produce a 'docker' image for Windows Server?  Most likely in some form.

Nano Server would make a neat base container image; Server Core as well.
Shell-based applications would be a bit hairier, and a considerably larger base image, since you have all of that Windows shell in there.

But remember, a container image is a file-based system.  Now, just think about maintaining that image: the potential of swapping out one of the layers to add an OS patch or an application update.  Not having to destroy, update, and deploy.

Oh, so exciting!