Basic Docker Usage
Run an instance of Alpine Linux with ash.
docker run --rm -i -t alpine ash
This will also delete the container once you exit ash (--rm).
-i interactive
-t allocate a tty
To start one that does not go away after you exit (it saves its state) and give it a name (alpine_ash):
docker run --name alpine_ash -i -t alpine ash
Leave ash with Ctrl-D or exit, then start the container again and connect to its console.
docker start alpine_ash
docker attach alpine_ash
Now, that was simple, right?
To perform start and attach in the same line you can do this:
docker start --attach --interactive alpine_ash
After attaching you may detach without killing the process using Ctrl-p then Ctrl-q. Why you would want that...
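For example, detach with Ctrl-p then Ctrl-q, confirm the container is still running, then drop back in:
docker ps
docker attach alpine_ash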
Deleting Old Containers and Images
List all containers, running or otherwise:
docker ps -a
Delete all containers (if they are running they will not be deleted):
docker rm $(docker ps -a -q)
List all images:
docker image ls
Delete all images (if they are in use by a container they will not be deleted):
docker image rm $(docker image ls -q)
Delete all untagged images (quite useful):
docker rmi $(docker images | grep '^<none>' | awk '{print $3}')
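Newer docker versions (1.13 and up) also provide prune subcommands for this sort of housekeeping:
docker container prune
docker image prune
The first removes all stopped containers, the second removes dangling (untagged) images; both ask for confirmation first.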
Some useful links:
- docker userguide networking
- docker swarm details
- docker tutorial create swarm
- Firewall information can be found here
- Lots of information, mainly leads to more questions but useful nonetheless
- Deploy an overlay with a distributed key service... like etcd
- Talking to etcd with curl
- Old stuff on lxc
- Newer lxc stuff
- Something interesting, search for lxc-attach
Basically what I have written here:
Escaping from the control groups and namespaces...
docker run -it --privileged --pid=host alpine:latest nsenter -t 1 -m -u -n -i sh
This is an odd command; it gives you access to the host. Who knows, you might want that sometimes. Possibly useful on Windows or OSX hosts where docker is running inside a VM?
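Breaking the command down:
# --privileged : full access to the host
# --pid=host   : share the host's PID namespace, so PID 1 is the host's init
# nsenter -t 1 : target PID 1, the host's init process
# -m -u -n -i  : enter its mount, UTS, network and IPC namespaces
# sh           : and run a shell there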
Configuring Docker Swarm
Set up your host machines; I used Gentoo, where it is nice and simple to just emerge docker and configure your kernel. The Gentoo wiki pages have some good information, and a contributed script can be found in /usr/share/docker/contrib/check-config.sh.
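You can run it directly to check your kernel against docker's requirements (assuming your docker package installed it at that path):
sh /usr/share/docker/contrib/check-config.sh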
First make sure that docker is started on all your machines. I used a set of VMs, but a set of real machines would probably be better in a production scenario.
Configuring your cluster is quite easy; taking some information from https://docs.docker.com/engine/swarm/swarm-mode/ we create the manager node with:
docker swarm init --advertise-addr <MANAGER-IP>
This is fine and will tell you how to add workers to the swarm and give you a pointer on how to add more managers. So, add some workers:
docker swarm join --token SWMTKN-1-668iatoqunvj48owsj9x1ijk0w2wyif8g2fttj0ijm42mi9qlc-5cp9li8ckbhaejny6vgtegk7r 192.168.52.238:2377
Naturally, you will have to replace the token with the token reported by docker swarm init.
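If you lose the token, any manager will print it again:
docker swarm join-token worker
Use manager instead of worker for a manager token.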
After my initial setup the output of docker node ls looks like this:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
08h5cqem6yh9j8e5ric4t469g klaagia Ready Active
08zc5patvtgry9686d2lrjfrg woz Ready Active
2znzzta7afz7ijpxcsb0m74mk ruuma Ready Active
7p1asy8vx71sg5cq64rpgao1p * b3k Ready Active Leader
Creating a Docker Image from a Dockerfile
Now that we have a swarm, we would like to deploy services to it. I like simple things, so I want to make a simple ssh service. It is completely useless in the real world, but it will serve as a good example and help show what happens when you try to connect to a node.
This is my Dockerfile, which is saved in a directory called alpine_sshd:
FROM alpine:latest
RUN apk update && apk add openssh
RUN ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -q -N ""
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
This is a really simple file and I don't think I need to explain what it is doing. To build the image so it may be used in docker:
docker build -t alpine_sshd alpine_sshd
You will notice that the ssh host key will be the same for every instance of this container. I will look at that below, to show you how to run commands when a new container is created, but for now we will just continue.
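As a taste of how that could work, here is a sketch (my own variation, not what we deploy below): move the key generation from build time to start time so each container gets its own host key.
FROM alpine:latest
RUN apk update && apk add openssh
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# ssh-keygen -A generates any missing host keys when the container starts
CMD ssh-keygen -A && exec /usr/sbin/sshd -D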
Test the image we built above to make sure it works in a nice, simple, standalone style:
docker run -d -P --name test_sshd alpine_sshd
Then find out what port docker assigned to the container:
docker port test_sshd 22
0.0.0.0:40229
Once we have that we can ssh to the container (the password, if you weren't paying attention, is screencast):
ssh -p 40229 root@192.168.52.238
Or we can use the IP address assigned to the container on port 22 (this address is only reachable from the docker host itself):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test_sshd
172.17.0.2
then ssh:
ssh root@172.17.0.2
Great, it is working. Now kill the container and delete it from our system:
docker kill test_sshd
docker rm test_sshd
Deploying the Image
At this point there should be no containers listed in docker ps -a, and listed in our images there should be alpine_sshd. On my machine this looks like this:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine_sshd latest d7bc442572d1 2 hours ago 8.813 MB
If you have been playing you may have more.
Now, to deploy a container to a swarm, all the nodes in your swarm must have the image. You can do this one of two ways. One is to copy the Dockerfile and build the image on all the machines that you want to host it... with our current setup that will be somewhat problematic, because an ssh host key is generated when the image is built. As we are going to load balance with swarm we will have a single virtual IP which is routed to a specific node, so each time we connect we may land on a different docker host, the ssh host key will be different, and... well, you get the picture.
So rather than do that we will export our image:
docker save alpine_sshd:latest > alpine_sshd.tar
You may also use the IMAGE ID (in this case d7bc442572d1) like this:
docker save d7bc442572d1 > alpine_sshd.tar
then copy it onto the other machines with scp or whatever... you could even do some clever ssh piping if you like. I will just load it from the command line:
docker load < alpine_sshd.tar
You may also use docker save -o alpine_sshd.tar or docker load -i alpine_sshd.tar.
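The ssh piping mentioned above might look like this; woz is one of my worker hostnames and I am assuming you can ssh to it as root:
docker save alpine_sshd:latest | ssh root@woz docker load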
Ok, now we have the same image on all our client machines. They are not tagged, though; if you list your images on the systems you did docker load on, you will see that they look something like this:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> d7bc442572d1 2 hours ago 8.813 MB
Which is fine if we want to start a standalone container, but if we want to use it in a swarm we have a problem: repository and tag names seem to be used on the workers, and as we have no matching tag names it will not work! No problem, we can add the repository and tag names easily like this:
docker tag d7bc442572d1 alpine_sshd:latest
Do this on all worker machines, and if you imported the image on the manager node, do it there too!
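If you can ssh to the workers, a small loop saves some typing (hostnames are from my setup, substitute your own):
for h in woz klaagia ruuma; do ssh root@$h docker tag d7bc442572d1 alpine_sshd:latest; done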
Creating an Overlay Network for our Service
Great! Now we must create an overlay network; from a swarm manager do this:
docker network create --driver overlay --subnet 10.0.9.0/24 sshd_net
The --subnet 10.0.9.0/24 bit is optional; docker will create a default network automatically for you. We will see this later.
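You can confirm the network exists with:
docker network ls
On a manager it should be listed with the overlay driver.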
Starting the Service
Ok, this is now the bit you have been waiting for... start the service on your swarm!
docker service create --name test_sshd --replicas 3 -p 4022:22 --network sshd_net alpine_sshd
The -p switch is forwarded-port:destination-port, the destination being the port EXPOSEd in the Dockerfile. The rest should make sense.
See what is happening with docker service ls; the output once your service is running on three nodes should look like this:
ID NAME REPLICAS IMAGE COMMAND
1pdfk95mpgva test_sshd 3/3 alpine_sshd
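To see which node each replica landed on, ask a manager:
docker service ps test_sshd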
On each node you can look at the processes by calling docker ps -a. If you have 4 nodes then one of them will not be running the image.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d060e8f7f9c alpine_sshd:latest "/usr/sbin/sshd -D" About a minute ago Up About a minute 22/tcp test_sshd.1.18rgbs3i5rz5lo21cqg41wvqp
You can now use port 4022 on any of the machines in your cluster (yes, including the one not running the container) and you will be connected to one of the running containers.
It is also possible to connect to port 22 on the virtual IP address provided by the overlay network from a machine that is a member of the swarm. On the manager find out the assigned virtual IP addresses:
docker service inspect test_sshd
And look for the bit that says Virtual IPs:
"VirtualIPs": [
{
"NetworkID": "921fneof3tvu9puv1yjgldvue",
"Addr": "10.255.0.7/16"
},
{
"NetworkID": "3texhe6dnxx9sh5u9unkfhjoy",
"Addr": "10.0.9.1/24"
}
]
These addresses can only be used when connected to a container that is also connected to the respective overlay network.
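If you just want the addresses, a format string along these lines should pull them out (assuming the field layout shown above):
docker service inspect -f '{{json .Endpoint.VirtualIPs}}' test_sshd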
Connecting to Individual Instances
Each time you connect as above you will be using the load balancing feature of docker swarm, and so you are not necessarily connected to the same instance each time. Also, these services are ephemeral, so scaling down will cause data loss on the node(s) removed from the service.
To connect to a specific running container you can use the docker exec command. From woz I can see the following containers:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d060e8f7f9c alpine_sshd:latest "/usr/sbin/sshd -D" 37 minutes ago Up 37 minutes 22/tcp test_sshd.1.18rgbs3i5rz5lo21cqg41wvqp
And I can attach to that specific one like this:
woz ~# docker exec -it test_sshd.1.18rgbs3i5rz5lo21cqg41wvqp /bin/ash
/ # hostname
2d060e8f7f9c
To leave, Ctrl-D or typing exit will do the trick, and it will not kill the container. From inside the container there is access to the docker swarm overlay network, so you can access the other sshd boxes with their internal IP addresses.
You can also find out more information on the instances on the current docker node with docker network inspect sshd_net.
Scaling
Want more nodes? Simple:
docker service scale test_sshd=4
Remember that the image must be present and correctly tagged... or present in the docker hub I suppose.
Deleting the Service
Well, let's face it, this service is rather useless; let's get rid of it!
docker service rm test_sshd
Leaving the Swarm
Leaving the swarm is a two-step process. First, on the node you want to remove, run this:
docker swarm leave
Then if you look on a manager node you should see a list of nodes, and the one you just ran the above command on should show as "Down". You may also stop the docker daemon or turn off the machine and wait for the manager to notice that the node is not responding.
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
08h5cqem6yh9j8e5ric4t469g klaagia Ready Active
2znzzta7afz7ijpxcsb0m74mk ruuma Ready Active
7p1asy8vx71sg5cq64rpgao1p * b3k Ready Active Leader
d8i29g5zy87x9rsn92h0038b7 woz Down Active
Now we need to remove the node on a manager node. This is simple too:
docker node rm woz
or
docker node rm d8i29g5zy87x9rsn92h0038b7
If you did not leave the swarm in an orderly fashion you will have to do docker swarm leave on the node before joining another swarm or re-joining the swarm that the node was removed from.
Updating Local Images
:latest? Is it the latest?
If you choose to create a new container with the :latest tag it may not be the latest; it all depends when you first pulled the image with the :latest tag... that will be the base image for your new container. To get the most recent latest from dockerhub (or whatever repository you are using) run this command:
docker pull gitlab/gitlab-runner:latest
In the above example I am pulling the most recent gitlab-runner.
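If the image backs a running swarm service, the usual way to roll the new version out is docker service update; with our (now deleted) test_sshd service it would have looked something like this:
docker service update --image alpine_sshd:latest test_sshd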
Conclusion
You should now have set up a rather useless sshd network, allowing connection on port 4022 to one of however many replicas are currently running.