Post information: If you need to know more about service discovery, what options are available, and how things change in Docker 1.12, check out my service discovery post.
This article applies to Docker standalone Swarm, as opposed to Docker 1.12 SwarmKit, where swarm mode is integrated into the Docker engine.
Note: From this point forward in this article, whenever I refer to Swarm, I mean standalone Swarm, not the SwarmKit one.
I started the work in the Docker 1.11 era, when Swarm was a separate tool from the Docker engine and you needed to launch a couple of Swarm containers to set up the Swarm cluster.
IMHO, Interlock + Nginx is a poor man's toolset for service discovery. There may be better options available, but for me it all started with a look at the swarm-at-scale example on the Docker site. It shows how to use Interlock with Nginx for load balancing and service discovery. Not knowing much about service discovery, a working, demonstrated example was good enough for me to engage with Interlock.
Interlock is a Swarm event listener tool: it listens to Swarm events and performs a corresponding action on an extension. As of this writing (Dec 2016), it supports only two extensions, Nginx and HAProxy. Both act as service discovery and load balancer. Another extension called Beacon was planned, intended for monitoring and perhaps autoscaling, but it now seems abandoned, thanks to Docker 1.12 SwarmKit.
In simple terms, there are three actors in the play: the Swarm manager, Interlock, and Nginx acting as a load balancer. The best part: all three run as Docker containers. That means no installation or configuration on the host VM, and an easy setup.
Interlock listens to the Swarm master for container start and stop events. When it hears one, it updates the Nginx config. The animated diagram below explains it better.
Interlock in action
Now that we know the "what" part of Interlock, let's move to the "how" part. Unfortunately, there isn't much documentation on Interlock available on the net. The Interlock quick-start guide provides some clues, but it misses the enterprise part. It doesn't guide you much if you are using docker-compose in a multi-host environment.
For the latter part, you can draw some inspiration from the Docker swarm-at-scale example, and there is an obsolete lab article which shows how to use Interlock with docker-compose, but its Interlock commands are obsolete for the 1.13 (the latest as of Oct 2016) version.
I am not planning to write an entire how-to article for Interlock, but I intend to give some useful hints for running a multi-host Docker cluster with Interlock.
The first part is the Docker Swarm cluster itself. Articles like Docker swarm-at-scale and Codeship use docker-machine to create an entire cluster on your desktop/laptop. I was privileged to have an R&D cloud account and used Azure to create my Docker cluster. You can use tools like UCP, docker-machine, ACS, Docker Cloud, and many others on the market, or just create the cluster manually by hand. It doesn't matter where you run your cluster or how you created it; as long as you have a working Swarm cluster, you are good to play with Interlock and Nginx.
Another piece of advice: while setting up the Swarm cluster, it is not mandatory, but good practice, to have a dedicated host for the Interlock and Nginx containers. See the swarm-at-scale article, where they use Docker engine labels to tag a particular host for this purpose.
If you are using docker-machine, you can set Docker engine labels similar to the ones below.
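As a sketch, a docker-machine invocation with an engine label could look like the following; the driver and host name are assumptions, and the label matches the `com.function=interlock` constraint used later in this article:

```shell
# Create a host dedicated to the load balancer, tagging its engine
# with a label that Swarm can later use as a scheduling constraint.
# Driver and host name are placeholders; adjust for your environment.
docker-machine create -d virtualbox \
    --engine-label com.function=interlock \
    lb-host
```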
And in the docker-compose.yml file, you would specify the constraint so that the container is scheduled on the host labeled com.function=interlock.
Interlock in docker-compose.yml file
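A sketch of what such a compose file could contain is shown below; the image tag, command flags, ports, and paths are assumptions you would adjust for your own cluster:

```yaml
# Sketch only: image tags, flags, and paths are assumptions
interlock:
  image: ehazlett/interlock:1.3.0
  command: -D run -c /etc/interlock/config.toml
  ports:
    - "8080"
  environment:
    # Swarm scheduling constraint: land on the host labeled for the LB
    - "constraint:com.function==interlock"
  volumes:
    # config.toml must exist on the host that runs this container
    - /etc/interlock/config.toml:/etc/interlock/config.toml

nginx:
  image: nginx:latest
  entrypoint: nginx
  command: -g "daemon off;" -c /etc/nginx/nginx.conf
  ports:
    - "80:80"
  environment:
    - "constraint:com.function==interlock"
```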
Now, in order to prepare the Interlock sauce, you need the following ingredients to set it right:
- Interlock Config (config.toml)
- Interlock data labels
- (Optional) SSL certificates
Interlock uses a configuration store to configure options and extensions. There are three places where this configuration can be saved
1) File, 2) Environment variable or 3) Key value store
I find it convenient to store it in a file. This file is, by Interlock convention, named config.toml.
Content: This file contains key-value options which are used by Interlock and its extensions. For more information, see https://github.com/ehazlett/interlock/blob/master/docs/configuration.md
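As a sketch, a minimal config.toml for the Nginx extension could look like the following; the listen address, Swarm manager URL, and Nginx paths are assumptions:

```toml
ListenAddr = ":8080"
DockerURL = "tcp://swarm-manager:3376"   # your Swarm manager endpoint

[[Extensions]]
  Name = "nginx"
  ConfigPath = "/etc/nginx/nginx.conf"
  PidPath = "/var/run/nginx.pid"
  MaxConn = 1024
  Port = 80
```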
Location: If you are running Swarm on multiple hosts, this file needs to be present on the VM which will host the Interlock container. You can then mount the file into the container by volume mapping. See the docker-compose file above for more info.
TLS settings: If you are running Swarm on TLS, you need to set the TLSCACert, TLSCert, and TLSKey variables in the TOML file. For more info, read up on setting up Docker on TLS and Swarm on TLS.
TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"
Plus, these certificates need to be present on the VM which will host the Interlock container. You can then mount the certificates via a volume mount in the compose file. See the docker-compose file above for an example.
PollInterval: If your Interlock is not able to connect to Docker Swarm, try setting PollInterval to 3 seconds. In some environments the event stream can be interrupted, and Interlock then needs to rely on a polling mechanism.
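Assuming PollInterval sits at the top level of config.toml, the setting would look like:

```toml
# duration string; 3 seconds as suggested above
PollInterval = "3s"
```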
Now we have set up Interlock with Nginx. But if you observed the config.toml file carefully, nowhere have we specified which containers we need to load balance. So how does Interlock get this information?
This brings us to the Interlock data labels. These are a set of labels you pass to a container; when Interlock inspects them, it knows which containers it needs to load balance.
The example below shows how to pass Interlock labels along with other container labels.
Example of Interlock Data labels in docker-compose.yml
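A sketch of what such a service definition could look like; the service name, image, hostname, and domain are placeholders:

```yaml
web:
  image: nginx:latest
  ports:
    - "80"
  labels:
    # Interlock data labels: tell Interlock under which hostname/domain
    # this container should be load balanced
    - "interlock.hostname=web"
    - "interlock.domain=example.com"
```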
You can get more information about the data labels at https://github.com/ehazlett/interlock/blob/master/docs/interlock_data.md
There is another example in the Interlock repo itself, which shows how to launch Interlock with Nginx as a load balancer in Docker Swarm via compose.
As seen in the Interlock labels above, there are a lot of Interlock variables related to SSL.
To understand them better, let's enumerate the different SSL combinations in which we can set up the load balancer.
I) NO SSL
You can have a flow something like this:
Here, we are not using SSL at all.
II) SSL Termination at the Load Balancer
Or, if you are planning to use Nginx as the internet-facing frontend load balancer, you should do something like this:
III) Both Legs with SSL
In my case, there was a compliance requirement that all traffic, internal or external, must be SSL/TLS. So I needed to do something like this:
For cases II and III, you need to set the Interlock SSL-related data labels. Let me give a quick explanation of the important ones.
interlock.ssl_only: If you want your load balancer to listen to HTTPS traffic only, set this to true. If false, Interlock configures Nginx to listen on both HTTP and HTTPS. If true, it adds a redirection rule so HTTP traffic is redirected to HTTPS.
interlock.ssl_cert: This needs to be the path of the X.509 certificate the load balancer will use to serve frontend traffic. The certificate's Common Name should equal the load balancer's name. Plus, in a multi-host environment, this certificate needs to be present on the machine which launches the Nginx container. You can then mount the file into the container by volume mapping. See the docker-compose file above for more info.
interlock.ssl_cert_key: The private key of the X.509 certificate. The same goes for the key: it needs to be on the VM which will run the Nginx container.
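Putting the SSL-related labels together, a service for case II (SSL termination at the load balancer) might carry labels like these; the image, hostnames, and certificate paths are placeholders:

```yaml
web:
  image: myorg/webapp:latest   # placeholder image
  ports:
    - "8080"
  labels:
    - "interlock.hostname=web"
    - "interlock.domain=example.com"
    # listen on HTTPS only and redirect HTTP to HTTPS
    - "interlock.ssl_only=true"
    # cert/key paths as seen inside the Nginx container
    - "interlock.ssl_cert=/certs/web.example.com.pem"
    - "interlock.ssl_cert_key=/certs/web.example.com.key"
```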
If your backend requires client certificate authentication, as was the case for me, then Interlock has no support for it. But there is a hack to SSL-proxy the certificates. That, however, is for another post.
I hope the information I shared with you was useful. If you need any help, do write in the comments below.