
Monitoring Azure VMs with Power BI

I had been in a situation where we were running performance tests on Azure VMs and needed to monitor their CPUs.

The Azure portal (as of Feb 2017) provides a nice metrics blade to monitor most resources in Azure, like Web Apps, Storage Accounts, SQL Azure, etc. I needed to monitor IaaS VMs.

I was running Linux VMs, and there is a nice article in the Azure docs which shows how to enable Linux diagnostics on the VM and collect the metrics.

This worked for me, but it had two issues. First, the Azure metrics blade gives just three filters for time duration: past hour, entire day and past week. There is no provision for a custom duration, say CPU for yesterday between 1 pm and 3 pm.


Second, I had created a custom dashboard on the Azure portal to monitor all my VMs' CPUs. But I couldn't share it with non-Azure users. Our performance testers, managers, etc. didn't have Azure subscriptions or any Azure knowledge. Creating read-only users with access to that particular dashboard's resource group, and putting them through the learning curve just to view CPU, was a big ask.

I knew the monitoring readings from the Linux diagnostics agent are saved in Azure Storage Tables,


Linux diagnostic tables in Azure storage as seen in Visual Studio 2015 Cloud explorer

and I can use Power BI to connect to Azure Storage Tables. A Power BI report can be published to the web and viewed without an Azure subscription account. So most of my needs were answered.

I created a Power BI report where I connected to the Azure tables. Here, before you click Load, you need to filter out the records; otherwise it would download the entire table data into Power BI, which could run into GBs.

I edited the query to add a filter for the past week's data and then loaded the filtered data into the Power BI data model (yes, Power BI stores its own data). Once the data was loaded in the data model, I needed to add a few new calculated columns, which I would use to define my new time hierarchy.

By default, Power BI provides a time hierarchy of Year, Quarter, Month and Date. But for this data I wanted Month, Date, Hour and Minute.

I created my new time hierarchy and built the report from it. I created a basic report using a column chart which shows max CPU by minute, hour, day and month, i.e. even if the CPU hit 100% for just 5 minutes in an hour, the cumulative hour shows as 100%. The same logic rolls up to day and month. It helps us in DevOps to get a larger perspective of VM usage.

I am aware there are better tools in the market, like Azure OMS, and from third parties like Datadog and Sysdig. But this is more of a DIY project than using those tools.

A word of caution when using Power BI with Azure Table Storage: every time Power BI hits the Azure storage table to fetch data, there will be egress charges on Azure Storage. You can use Power BI Embedded and host the Power BI report in the same region as your storage account to avoid these charges.

The whole process is captured in this YouTube video, which covers:

  1. Connecting to the Azure table
  2. Filtering the data from the Azure table
  3. Adding columns in the data model for the new time hierarchy
  4. Creating the report with the new data model
  5. Exploring drill-down in Power BI

If you have suggestions or comments, do let me know.

Stressor – The Container

I have been working on Docker auto-scaling, and needed a container which could stress the CPU.


In Linux, there is the stress utility which stresses the CPU, simple and sweet. Now, I just needed to put this stress utility in a container and I would be good to go. A quick Google search got me a few containers on Docker Hub which were already built for this job. Great!

But I had problems with these. First, they start stressing the CPU as soon as they start, which is not what I need in my scenario. Second, I can't fire commands remotely, i.e. from outside the container and the Docker host; I have no control over when to start, how much CPU to stress and how long to let them run.

There is another utility, lookbusy, which lets me control what CPU percentage I want to stress. That was important for me: a utility which gives me control over the CPU percentage, say 70% load, unlike the stress utility, where I would need trial and error to find what number stresses my CPU to 70%.

Second, I had these containers running behind a load balancer. I got an idea: I could simply develop a Python Flask web app. It would serve a web UI where I could specify the percentage of CPU and the time to stress, and under the hood it would use lookbusy to stress the CPU. This way, even without accessing the Docker host, I can stress the host CPU remotely from a browser.

I created a Flask app which stresses the CPU and containerized it. I named it stressor. You can get the Flask code from GitHub.
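The actual code lives in the GitHub repo above; below is just a minimal sketch of the idea, assuming the lookbusy binary (and the coreutils timeout command) is present in the image. The route and parameter names are my own illustration, not necessarily what the repo uses.

from subprocess import Popen
from flask import Flask, request

app = Flask(__name__)

@app.route('/stress')
def stress():
    # e.g. /stress?cpu=70&duration=120 -> hold the CPU at 70% for 120 seconds
    cpu = int(request.args.get('cpu', 50))
    duration = int(request.args.get('duration', 60))
    # lookbusy keeps the CPU at the requested utilisation; timeout kills it after 'duration' seconds
    Popen(['timeout', str(duration), 'lookbusy', '-c', str(cpu)])
    return 'Stressing CPU at {0}% for {1} seconds'.format(cpu, duration)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

With the container running (see the command below), hitting http://&lt;host&gt;/stress?cpu=70&amp;duration=120 from a browser kicks off the load.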

You can start the container with the following command:

docker run -p 80:5000 --name stressor sbrakl/stressor

Here, if port 80 is already used by another application, choose whatever port is available on your machine, say 5000.

Inside, the container runs the Flask app on port 5000.

Now, this comes in three flavours:



Flask stress app running on the Werkzeug web server. It is good for light concurrent loads, but bad for 5+ concurrent requests.


Same as flaskappwithWerkzeug, but configured to run on SSL. It is useful in scenarios where you need to configure containers behind a load balancer; this lets you test the load balancer with SSL traffic.


Flask app configured to run on uWSGI behind the Nginx web server. It is configured to serve 16 concurrent requests.

You can find more information about the configuration in the GitHub repository.

Container scaling via python script

If you need to monitor Docker containers, there are plenty of tools available in the market. Some are paid, some are free, but you need to do your homework to find which suits you best.

If you are interested in tools, let me name a few: cAdvisor, InfluxDB, Sematext, Universal Control Plane. This is not a definitive list, but a good place to start.

For me, it was more of a do-it-yourself project, where I just needed a simple script to monitor CPU and take some scaling action based on the monitored CPU.

Docker provides a command, stats, which gives the stats of running containers.

docker stats displays a live stream of the following container resource-usage statistics:

  • CPU % usage
  • Memory usage, limit, % usage
  • Network i/o
  • Disk i/o

The stats are updated every second. Here is a sample output:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
4827f0139b1f        10.94%              706.2 MB / 1.045 GB   67.61%              299.7 kB / 2.473 MB   456 MB / 327.3 MB

I was planning to exploit this command further for CPU monitoring.

The Docker engine provides the Docker Remote API, a REST API which can be used to communicate with the engine. Being a REST API, I can call it in any language I like.

Since I was more in the mood for scripting, Python became the preferred choice of language.

I began my search for Python libraries which can connect to Docker. This can be very frustrating. When you search Google, various results show up which refer to different versions of the Docker client, but none EXPLICITLY mentions which one. It took me a couple of days to figure it out.

Let me point out a few examples.

There is a tutorial which says:

pip install docker-py

>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')

There is the official Docker client from Docker, which says the following:

pip install docker

>>> import docker 
>>> client = docker.DockerClient(base_url='unix://var/run/docker.sock')

Now, here you see there are two different APIs to instantiate the client.

My advice would be to read the documentation on the GitHub site; that will be the latest.

Coming back to the problem: getting container stats using the Docker Python client. I have written my pet-shop code to get the Docker container CPU using the Python client. In the remainder of this article, I will be referencing my script code, which you can get from GitHub.

When I was developing this script, I was writing against a standalone Docker engine v1.11, but I intended to run it against Docker engine 1.11 with Swarm 1.2.5 on TLS. I have written a method, GetDockerClient, where I can connect to a local as well as a swarm instance by passing an environment parameter. It is interesting to see how to connect to a remote TLS-enabled Docker host by passing client certificates.
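The actual method is in the script; here is a minimal sketch of the same idea using the docker Python SDK. The host address, port and certificate paths are placeholders, not the real values from my setup.

import docker
from docker.tls import TLSConfig

def get_docker_client(env='local'):
    if env == 'local':
        # talk to the local engine over the unix socket
        return docker.DockerClient(base_url='unix://var/run/docker.sock')
    # remote swarm manager on TLS: pass the client certificate, key and the CA
    tls_config = TLSConfig(
        client_cert=('/certs/cert.pem', '/certs/key.pem'),
        ca_cert='/certs/ca.pem',
        verify=True)
    return docker.DockerClient(base_url='tcp://swarm-manager:3376', tls=tls_config)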

If you use docker-compose, there is a problem getting the container name. Compose formats the container name with the folder name where the compose file resides as a prefix and the container count as a suffix, i.e. a container named 'coolweb' translates to 'foldername_coolweb_1'. The script contains the getContainerInComposeMode method, which gets the container by formatting the container name with the compose pattern, as sketched below.
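The lookup boils down to something like this sketch (the real method is getContainerInComposeMode; the project and index arguments here are my own illustration):

def get_container_in_compose_mode(client, project, service, index=1):
    # docker-compose names containers as <project>_<service>_<index>,
    # e.g. 'coolweb' in folder 'myapp' becomes 'myapp_coolweb_1'
    name = '{0}_{1}_{2}'.format(project, service, index)
    return client.containers.get(name)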

I know you might be thinking the code isn't in its best shape, but it is more about jugglery to get things done than crafting a masterpiece for the world to see.

Moving forward to getting the Docker stats. Here came another surprise: the Docker Python API doesn't have a stats() method on the client object. Instead, it has a stats() method on the container object. So basically, you can't get stats the way the docker stats command does, which gives stats for all the containers running on the Docker host. Bad! Even people on the internet express their frustration about docker-py, like in this blog.

Holding our focus and moving back to the code to get stats for a container: in the get_CPU_Percentage method, you will find the code to get container stats.

# 'con' is container which you need to monitor
constat = con.stats(stream=False)

stream=False means you get the stats just once, not a continuous stream object.

It gives back a JSON object like the one below. It is a large object; I have kept only the CPU-related parts:

    "read": "2016-02-07T13:26:56.142981314Z",
    "precpu_stats": {
        "cpu_usage": {
            "total_usage": 0,
            "percpu_usage": null,
            "usage_in_kernelmode": 0,
            "usage_in_usermode": 0
        "system_cpu_usage": 0,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 242581854769,
            "percpu_usage": [242581854769],
            "usage_in_kernelmode": 33910000000,
            "usage_in_usermode": 123040000000
        "system_cpu_usage": 3367860000000,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
    "memory_stats": {
        "failcnt": 0,
        "limit": 1044574208

precpu_stats are the CPU stats from the previous reading (the point of reference), and cpu_stats are the stats at the current point in time. If you look into the get_CPU_Percentage method, it juggles through this JSON object, pulls out the relevant fields and computes the CPU percentage for the container.
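The calculation is essentially the same one docker stats performs: compare the delta in the container's total CPU time with the delta in the system CPU time between the two samples. A sketch of it (not the exact code from my script):

def cpu_percent(stats):
    cpu = stats['cpu_stats']
    precpu = stats['precpu_stats']
    cpu_delta = cpu['cpu_usage']['total_usage'] - precpu['cpu_usage']['total_usage']
    system_delta = cpu['system_cpu_usage'] - precpu['system_cpu_usage']
    # number of CPUs the container ran on (1 if percpu_usage is null)
    online_cpus = len(cpu['cpu_usage'].get('percpu_usage') or [None])
    if cpu_delta > 0 and system_delta > 0:
        return (float(cpu_delta) / system_delta) * online_cpus * 100.0
    return 0.0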

Once I get the CPU percentage, I put it in an array at an interval of 2 seconds. It is a fixed-width array with 5 slots, so it holds only the last 5 readings, i.e. the last 10 seconds in terms of time.

Then I compute the mean of the array to get the mean CPU, which rules out uneven CPU spikes. I compare this mean CPU against the CPU threshold, i.e. 50%. If the mean CPU is more than 50%, it triggers a scale-out action. If it is less than 50%, it triggers a scale-down action.
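Put together, the sampling and decision logic is roughly the sketch below, reusing the cpu_percent sketch above (a deque stands in for the fixed-width array; the names and the threshold are illustrative, and the actual cooldown handling lives in the scaling method described next):

import time
from collections import deque

CPU_THRESHOLD = 50.0            # percent
readings = deque(maxlen=5)      # last 5 samples = roughly the last 10 seconds at a 2 s interval

def monitor(container):
    while True:
        stats = container.stats(stream=False)
        readings.append(cpu_percent(stats))
        if len(readings) == readings.maxlen:
            mean_cpu = sum(readings) / len(readings)
            action = 'scale out' if mean_cpu > CPU_THRESHOLD else 'scale down'
            print('mean CPU {0:.1f}% -> {1}'.format(mean_cpu, action))
        time.sleep(2)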

The entire logic for scaling up and down, with a cooling period in between, is in the ScaleContainer method.

All these methods are called from the main script, which runs the code in a loop.

That's it. This brings me to the end of my do-it-yourself project on Docker scaling. I know it's not the ultimate script when it comes to container scaling.

Client Auth with Interlock and Nginx

I had a requirement to set up Interlock + Nginx where the backend expects client authentication. If you have landed directly here, read my previous post to get context about service discovery, Interlock and Nginx.

Note: This topic applies to Interlock 1.3 and Docker 1.11. If you are using Docker > 1.12, I recommend using the built-in Docker load balancer, which ships with SwarmKit.

Problem Definition:

Setup client authentication certificates with Interlock + Nginx

Why is it a problem:

Interlock controls the Nginx configuration file. You can't directly modify the Nginx configuration file, as Interlock would overwrite it whenever a container starts or dies.

Interlock allows certain data labels which let you configure the Nginx configuration file. Read the Interlock data labels section of the previous post for more info.

There are data labels to set the SSL certificate, set SSL only, set SSL backend, etc. But there isn't any label to set the SSL proxy certificate. I had even raised an issue, only to find it is not supported.

The problem: there is no data label to configure client authentication certificates.

Possible Solution

If you need to set up client authentication certificates with Nginx, a ServerFault thread hints at how to do it:

upstream backend {
    server some-ip:443;
}

server {
    listen 80;

    location / {
        proxy_ssl_certificate     certs/client.crt;
        proxy_ssl_certificate_key certs/client.key;

        proxy_pass https://backend;
    }
}
Now, I needed to find a way with Interlock to get control of the template it uses to generate the Nginx configuration.

The hint comes from the Interlock docs, which show a configuration variable, TemplatePath, in the toml configuration. It lets us supply the template which Interlock uses, with variable substitution, to create the final Nginx config.

Again, an example of this template file can be found in the Interlock docs.

This template file was the perfect opportunity to include the client auth certificates and use it.

 location / {
     # Added by Shabbir 9th Dec 2016, For Client Authentication
     proxy_ssl_certificate /certs/client.crt;
     proxy_ssl_certificate_key /certs/client.key;
     proxy_ssl_password_file /certs/pass.txt;
     # Change End
     {{ if $host.SSLBackend }}proxy_pass https://{{ $host.Upstream.Name }};{{ else }}proxy_pass http://{{ $host.Upstream.Name }};{{ end }}
 }

These certificates need to be present on the machine where the Nginx container will be launched, and they are added to the container via volume mounts.

Here is an extract of the docker-compose file which configures the Nginx container:

nginx:
    image: nginx:latest
    entrypoint: nginx
    networks:
      - common
    ports:
      - 8443:8443
      - 8009
    links:
      - interlock
    command: -g "daemon off;" -c /etc/nginx/nginx.conf
    restart: unless-stopped
    labels:
        - "constraint:com.function==interlock"
    volumes:
        - ~/myapp/certs:/certs
        - ~/myapp/logs:/var/log/nginx

This is how I solved the issue of client authentication, but the same technique can be used to configure Interlock for all the unsupported Nginx scenarios, like TCP pass-through, etc.




Service discovery sauce with Interlock and Nginx

Note: If you need to know more about service discovery, what options are available and how things change in Docker 1.12, check out my service discovery post.

This article applies to Docker standalone Swarm, as opposed to Docker 1.12 SwarmKit, where the Docker engine has swarm mode integrated.

Note: From this point forward in this article, when I refer to swarm, I mean the standalone swarm and not the SwarmKit one.

I started this work in the Docker 1.11 era, when swarm was a separate tool from the Docker engine and you needed to launch a couple of swarm containers to set up the swarm cluster.

IMHO, Interlock + Nginx is the poor man's toolset for service discovery. There are better options available, but for me it all started with a look at the swarm-at-scale example on the Docker site. It shows how to use Interlock with Nginx for load balancing and service discovery. Not knowing much about service discovery, a working example to follow was good enough for me to engage with Interlock.

Interlock is a swarm event listener tool, which listens to swarm events and performs a corresponding action on an extension. As of now (Dec 2016), it supports only two extensions, Nginx and HAProxy. Both act as service discovery and load balancer. There was another extension planned, called beacon, intended for monitoring and perhaps autoscaling, but it now seems to be abandoned, thanks to Docker 1.12 SwarmKit.

In simple terms, there are three actors in the play: the Swarm manager, Interlock and Nginx acting as a load balancer. Best part: all three run as Docker containers. That means no installation or configuration on the host VM, and they are easy to set up.

Interlock listens to the swarm master for container start or stop events. When it hears something, it updates the Nginx config. The animated diagram below explains it better.


Interlock play in action

Now that we know the "what" part of Interlock, let's move to the "how" part. Unfortunately, there isn't much documentation on Interlock available on the net. The Interlock quickstart guide provides some clues, but it misses the enterprise part: it doesn't guide you much if you are using docker-compose in a multi-host environment.

For the latter part, you can draw some inspiration from the Docker at scale example, and there is an obsolete lab article which shows how to use Interlock with docker-compose, but its Interlock commands are obsolete for the 1.3 version (the latest as of Oct 2016).

I am not planning to write an entire how-to article on Interlock, but I intend to give some hints that are useful when running a multi-host Docker cluster with Interlock.

The first part is the Docker swarm cluster. Articles like Docker at scale and the Codeship one use docker-machine to create the entire cluster on your desktop/laptop. I have been more privileged: I had an R&D cloud account and used Azure to create my Docker cluster. You can use tools like UCP, docker-machine, ACS, Docker Cloud and many others in the market, or just create the cluster manually by hand. It doesn't matter where you run your cluster or how you created it; as long as you have a working swarm cluster, you are good to play with Interlock and Nginx.

Another piece of advice: while setting up the swarm cluster, it is not mandatory, but it is good practice, to have a dedicated host for the Interlock and Nginx containers. You can see the swarm-at-scale article, where they use Docker engine labels to tag a particular host for that purpose.

If you are using docker-machine, you can set Docker engine labels similar to the ones below.

Docker engine labels

And in the docker-compose.yml file, you specify the constraint so that the container is loaded on a host with com.function==interlock.


Interlock in docker-compose.yml file

Now, in order to prepare the Interlock sauce, you need the following ingredients:

  1. Interlock Config (config.toml)
  2. Interlock data labels
  3. (Optional) SSL certificates

Interlock Configuration File

Interlock uses a configuration store to configure options and extensions. There are three places where this configuration can be stored:

1) File, 2) Environment variable, or 3) Key-value store

I find it convenient to store it in a file. By Interlock convention, this file is named config.toml.

Content: This file contains key-value options which are used by Interlock and its extensions. For more information, see the Interlock documentation.

Location: If you are running Swarm on multiple hosts, this file needs to be present on the VM which will host the Interlock container. You can then mount this file into the container by volume mapping. See the docker-compose file above for more info.

TLS Setting: If you are running Swarm on TLS, you need to set the TLSCACert, TLSCert and TLSKey variables in the toml file. For more info, read about setting up Docker on TLS and Swarm on TLS.

TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"

Plus, these certificates need to be present on the VM which will host the Interlock container. You can then mount the certificates via volume mounts in the compose file. See the docker-compose file above for an example.

PollInterval: If your Interlock is not able to connect to Docker Swarm, try setting PollInterval to 3 seconds. In some environments, the event stream can be interrupted, and Interlock then needs to rely on a polling mechanism.

Interlock Data Labels

Now we have just set up Interlock with Nginx. If you observe the config.toml file carefully, nowhere have we specified which containers we need to load balance. So how does Interlock get this information?

This brings us to Interlock data labels. These are the set of labels you pass to a container; when Interlock inspects them, it knows which containers it needs to load balance.

The example below shows how to pass Interlock labels along with other container labels.


Example of Interlock Data labels in docker-compose.yml

You can get more information about the data labels in the Interlock documentation.

There is another example in the Interlock repo itself, which shows how to launch Interlock with Nginx as a load balancer in Docker Swarm via compose.

(Optional) SSL certificates

As seen in the Interlock labels above, there are a lot of Interlock variables related to SSL.

To understand them better, let's enumerate the different combinations in which we can set up the load balancer with SSL.


I) No SSL

You can have a flow something like this:



Here, we are not using SSL at all.

II) SSL Termination at the Load Balancer 

Or, if you are planning to use Nginx as the frontend internet-facing load balancer, you could do something like this:


SSL Termination at the load balancer

III) Both legs with SSL

In my case, there was a compliance requirement that all traffic, internal or external, must be SSL/TLS. So I needed to do something like this:


HTTPS only traffic

For cases II and III, you need to set the Interlock SSL-related data labels. Let me give a quick explanation of the important ones.

interlock.ssl_only: If you want your load balancer to listen to HTTPS traffic only, set this to true. If false, Interlock configures Nginx to listen to both HTTP and HTTPS. If true, it sets a redirection rule in HTTP to redirect to HTTPS.

interlock.ssl_cert: This needs to be the path of the X509 certificate which the load balancer will use to serve frontend traffic. The certificate's Common Name must equal the load balancer name. Plus, in a multi-host environment, this certificate needs to be present on the machine which launches the Nginx container. You can then mount the file into the container by volume mapping. See the docker-compose file above for more info.

interlock.ssl_cert_key: The private key of the X509 certificate. The same goes for the key: it needs to be on the VM which will run the Nginx container.

If your backend requires client certificate authentication, as in my case, then Interlock has no support for it. But there is a hack to proxy the SSL certificates. That, however, is for another post.

Hope the information I shared with you was useful. If you want any help, do write in the comments below.

What the heck is Service Discovery?

If you are working with container technologies like Docker Swarm, Kubernetes or Mesos, sooner or later you will stumble upon service discovery.

What the heck is service discovery? In layman's terms, it is the way one container knows where another container is. It is better explained with an example: where a web container needs to connect to a DB container, it needs to know the address of the DB container.

In the container world, containers are like a scurry of squirrels. They keep jumping from one host (call it a VM) to another, moving around the cluster for the sake of high availability and scalability.

Amid this tantrum, how does one container connect to another? Here comes the role of service discovery, something which can tell you the present address of a container. In layman's terms, imagine it to be like a librarian who tells you that this book (container) is with Joe, Brandan or anybody else. Service discovery is like a directory of addresses (hostname, IP, port) of all the containers running in a cluster.

If a new container is born or one dies, service discovery updates its directory to add a new entry or delete the old one.
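To make the directory analogy concrete, here is a toy, in-memory sketch of what such a directory conceptually does. Real tools (Consul, etcd, Mesos-DNS, the swarm-embedded DNS and so on) add health checks, replication, DNS interfaces and much more.

registry = {}   # service name -> list of (host, port) endpoints

def register(service, host, port):      # a container is born
    registry.setdefault(service, []).append((host, port))

def deregister(service, host, port):    # a container dies
    registry.get(service, []).remove((host, port))

def lookup(service):                    # "where is the DB container right now?"
    return registry.get(service, [])

register('db', '10.0.0.5', 5432)
print(lookup('db'))                     # [('10.0.0.5', 5432)]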

Note, I have explained service discovery in its core function, but discovery (the directory) is not the only function it performs. Tools which implement service discovery provide additional functions like load balancing, health checks, NATing, bundling, etc. But if a tool provides all the additional functions and not discovery, then it shouldn't be called service discovery.

Now, knowing the "what" of service discovery, comes the next part: "how". How do we achieve service discovery in a Docker cluster? There is no single answer to this; the answer is "it depends".

It depends on what stack you are using for Docker clustering: is it Mesos, Kubernetes, Nomad or Docker Swarm?

A stack comes with a related set of tools for scheduling, storage, scaling, batch execution, monitoring, etc. A stack is a collective term for these tools. Some stacks have all of them, some have only a few. Service discovery would be part of the set of tools your chosen stack provides.

Mesos and Kubernetes have DNS-based service discovery. Nomad uses Consul (a service registry). Docker Swarm has two stories: if you are using Docker prior to 1.12 with swarm, you would use Consul, etcd or one of the other options from the large open-source community for service discovery. If you are using Docker 1.12 or later, it comes with integrated swarm discovery based on a DNS server embedded in the swarm.

If you are not sure what I am talking about with all these stacks, I would suggest you take a step back, make Google your best friend, and research what these stacks mean, what sets of tools and capabilities they offer, and how they differ from each other. Some are simple and easy to learn; others have many features but also a steep learning curve.

If you are new and inclined towards the Docker Swarm stack, go with the latest Docker with integrated swarm, eyes closed. While I was writing this post in Dec 2016, Docker 1.13 was in beta; by the time you read this, do your research and find out what the latest Docker version is.


When I was working on this, it was the era of Docker 1.11, and I did service discovery using the poor man's tools, Interlock with Nginx. Why? I found it the easiest to work with alongside Swarm and Consul, plus it provides some cool features like load balancing, health checks and SSL offloading (which are nothing but features of a load balancer).

I will be writing another post sharing my experience with Interlock and Nginx.




Solving mystery of the service discovery with Azure ACS DCOS – Part 2

Warning: this is a Level 300 deep-dive topic; novices may find it hard to follow. It is meant to help wandering souls like me, given the scarcity of documents explaining service discovery with Apache Mesos.

In the last post, I wrote about the service discovery options with Mesos and blabbered about Mesos-DNS. In this post, I will solve the mystery of how service discovery happens in the sample app deployed in the Azure ACS load-balancing tutorial.

Mystery solving part

This post is about solving the mystery of the sample load-balancing app launched in the Azure ACS load-balancing tutorial.

Now, following the tutorial, we have launched the app via Marathon. Note, this is an important aspect of a DC/OS cluster: you can launch Docker containers using Marathon or Aurora.
Web App Configuration
{
  "id": "web",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "yeasy/simple-web",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 80, "servicePort": 10000 }
      ]
    }
  },
  "instances": 3,
  "cpus": 0.1,
  "mem": 65,
  "healthChecks": [{
      "protocol": "HTTP",
      "path": "/",
      "portIndex": 0,
      "timeoutSeconds": 10,
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "maxConsecutiveFailures": 10
  }]
}
Here, observe servicePort: 10000. The servicePort is the port that exposes this service on marathon-lb. By default, ports 10000 through 10100 are reserved for marathon-lb services, so you should begin numbering your service ports from 10000 (that's 101 service ports, if you're counting). You may want to use a spreadsheet to keep track of which ports have been allocated to which services for each set of LBs.
Now, here I have hostPort set to 0, which means Marathon will arbitrarily allocate an available port on that host. This is an important aspect of Docker hosting: if I hard-code, say, port 80 in the configuration, the ability to scale containers is limited.
Take an example: I have 2 agents and I want to launch 3 containers of the web app. The first container will go to agent1, and port 80 on agent1 will be mapped to port 80 in the container. The second container will go to agent2, as agent1's port 80 is taken. The third container will fail to start because there is no port 80 available on either agent.
Carrying forward the same example with hostPort 0, Marathon will assign a dynamic port on the host, say 5252. The second container could get 5253 and the third 5254, based on the availability of ports on that host.
But the next problem is: how will other containers call these 3 containers?
That is where the marathon-lb service comes in, which acts as the load balancer and load-balances web requests to these containers.
What is the load balancer address which you would use to load-balance these requests?
In order to answer this, we need to understand how the Mesos-DNS name space works.
When this application is launched, it gets a DNS name of <task>.<framework>.mesos, i.e. in our case, the [web] app which we launched using the JSON above translates to web.marathon.mesos.
If this were a single instance of the [web] app with hard-coded port 80, Mesos-DNS would register web.marathon.mesos to the IP of the agent VM where it was deployed, and accessing http://web.marathon.mesos/ from the master VM would land on the UI of the web application.
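As a quick illustration (assuming you run it from a node inside the cluster, where Mesos-DNS is the configured resolver), you can check what that name resolves to:

import socket

# resolves the Mesos-DNS name of the Marathon task to the agent IP(s) hosting it
print(socket.gethostbyname_ex('web.marathon.mesos'))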
But now we have three instances of the app sitting on different agent VMs, listening on arbitrary ports. Here is where the Marathon load balancer comes into the picture. The service port declared in the app configuration above is used by marathon-lb to provide the endpoint on which to reach the web service.
In the load-balanced scenario, marathon-lb provides the load-balanced endpoint at <marathon-lb-name>.<framework-name>.mesos:<service port number>. In our case, this translates to marathonlb-default.marathon.mesos:10000, where 10000 is the service port configured on marathon-lb.
The complete communication path from the browser to the container:
  1. The browser hits the Azure load balancer on port 80
  2. The Azure load balancer forwards the request to the VMSS in the public subnet
  3. In our case, the public VMSS has just one VM, running marathon-lb
  4. Marathon-lb is based on HAProxy, which is configured to listen on port 80 and forward to marathonlb-default.marathon.mesos:10000
  5. marathonlb-default is created from the service port definition, which again load-balances the request to the child containers running in the private subnet of the cluster
The diagram below tries to give an overview.
 Marathon LB service discovery
Now, based on the Marathon app definition above, the web app is listening on service port 10000.
If a database server needs to hit a REST endpoint on the web service, it needs to point to marathonlb-default.marathon.mesos:10000.
The database server can register itself on marathon-lb with service port 10100.
The marathon-lb endpoint for the DB would then be marathonlb-default.marathon.mesos:10100.
The web app can simply access marathonlb-default.marathon.mesos:10100, and it will route the traffic to one of the instances of the container running in the cluster.
Checking the HAProxy stats
Before that, in ACS-Mesos:
  1. Open port 9090 in the public Network Security Group
  2. Add port 9090 to the load balancer rules
Now, you can access the HAProxy stats at
To access haproxy config in LUA language
There is more that you can reference in the link below.
This, in a nutshell, represents ACS-Mesos service discovery using Mesos.