It seems that each time I revisit doing things with containers, I discover something new and curiously wonderful.
I totally understand why folks get excited about containers, why developers love them so much, and why operations folks should totally love Hyper-V containers, and, and... There I go getting excited.
If you have read my previous posts on containers, I tried to relay some conceptual ideas of what a 'container' actually is. It is not a process, it is not a VM, it is not a session. It is a little bit of all of them, which is what makes describing a container not a straightforward thing.
And, you will recall that a container is the running state of an image. And an image is a flat file structure that represents the application and everything it needs to run.
A container is more than a running copy of an image. It is that image, plus all of the settings you gave it when you told Docker to run it - create container 'foo' from image 'bar' with all of these settings.
The Docker tutorials really don't cover this well. They just toss you out there and say: pull this image, run a container from this image, look - you did it. Conceptually, there is a lot happening that Docker abstracts away, saves for you, and manages for you (which is why folks have caught on to Docker).
All of those settings that you give are meta information that define that container.
After you run that container (with all of those settings defined) you can simply stop it. Then when you start it later, all of those run parameters that you defined are magically applied out of the configuration - you never have to define all of those parameters again.
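To make that concrete, here is a minimal sketch. The container name 'web', the nginx image, and the port mapping are my own illustrative choices, not part of the walkthrough below:

```shell
# Run a container with a handful of settings: detached, a name, a port mapping.
sudo docker run -d --name web -p 8080:80 nginx:latest

# Stop it.
sudo docker stop web

# Start it again. No image, no port mapping, no other flags needed - Docker
# re-applies everything from the container's saved configuration.
sudo docker start web
```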
If you then stop your container and then commit that container to a new image, all of that meta information is saved.
If you inspect a container or an image you can see all of this meta information that defines what happens when that container is started or that image is run.
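For example, inspect can dump that meta information as JSON, or pull out individual fields with a format string. The field names below come from Docker's inspect output; 'web' is a hypothetical container name:

```shell
# Dump the full JSON configuration of a container.
sudo docker inspect web

# Or pull out just the command and port bindings that were saved at run time.
sudo docker inspect --format '{{.Config.Cmd}}' web
sudo docker inspect --format '{{.HostConfig.PortBindings}}' web
```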
Then, if you share this image and someone else runs it, they get all of your defined configuration applied.
Let me put all of this together with a simple walkthrough.
First: run a container.
sudo docker run -it ubuntu:latest /bin/bash
Breaking that command back apart:
run an instance of a container (with a random name and ID), interactively with a terminal attached (-it), using the latest version of the Ubuntu image (pulled from the Docker Hub if you don't already have it), and execute the bash shell as the container's process.
The prompt that you get back is root@<container id>.
Second: stop that container
sudo docker stop <container id>
While that container ran, anything you did was persisted within its file system.
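A quick way to convince yourself of that (a sketch; the marker file name is arbitrary):

```shell
# At the container's bash prompt, leave a marker file behind:
root@<container id>:/# touch /root/i-was-here
root@<container id>:/# exit

# Later, after starting and re-attaching to that same container,
# the file is still there:
root@<container id>:/# ls /root
i-was-here
```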
Third: list all containers
sudo docker ps -a
The container is the runtime process, so by default docker ps only lists running containers. To see ones that are not running, you add the -a (all) switch.
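Side by side, that looks like this:

```shell
# Only running containers:
sudo docker ps

# All containers, including stopped ones - a stopped container shows an
# "Exited" value in the STATUS column:
sudo docker ps -a
```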
Fourth: start that container
sudo docker start <container id>
Notice that the image and the command to run did not have to be defined again - they came out of the container's saved configuration. But I did not define how to connect to the process. That is what -a (attach to its output) and -i (interactive, attach your input) do on docker start, taking the place of run's -it. So the container is now running in the background. Stop it again, add -ai before the container id, and you are back in.
Then stop it again before the next step.
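Spelled out, that stop/start cycle looks like this (container id abbreviated to a placeholder):

```shell
# Start the container in the background; the saved image and command are reused.
sudo docker start <container id>

# Stop it again.
sudo docker stop <container id>

# Start it attached and interactive - you land back at the bash prompt.
sudo docker start -ai <container id>
```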
If you wanted to see that your settings were in there, just inspect the container.
sudo docker inspect <container id>
Fifth: commit that container to an image
sudo docker commit <container id> <new image name>
Now, you can create duplicates of your container by running container instances of your image.
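As a sketch (the image name myubuntu:v1 is my own invention):

```shell
# Save the stopped container's file system and meta information as a new image.
sudo docker commit <container id> myubuntu:v1

# Run as many fresh containers from that image as you like - each one starts
# from the state you committed, command and all.
sudo docker run -it myubuntu:v1
```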
And, you can use inspect against the images as well.
And there you have it.
In the next container post, I am going to use a real world application in a container and discuss configurations and variables.