
Stressor – The Container

I have been working on Docker auto-scaling, and I needed a container which could stress the CPU.


In Linux, there is the stress utility, which stresses the CPU: simple and sweet. Now, I just needed to put this stress utility in a container and I would be good to go. A quick Google search got me a few containers on Docker Hub which were already built for this job. Great!

But I had problems with these. First, they start stressing the CPU as soon as they start, which is not what I need in my scenario. Second, I can't fire commands remotely, i.e. from outside the container and the Docker host; I have no control over when to start, how much CPU to stress, and how long to let them run.

There is another utility, lookbusy, which lets me control what percentage of CPU I want to stress. That was important for me: a utility which gives me direct control over the CPU percentage, say a 70% load, unlike the stress utility, where I need to do trial and error to find what number would stress my CPU to 70%.

Second, I had these containers running behind a load balancer. I got an idea: I could simply develop a Python Flask web app. It would serve a web UI where I could specify the percentage of CPU and the time to stress, and under the hood it would use lookbusy to stress the CPU. This way, even without accessing the Docker host, I can stress the host CPU remotely from a browser.

I created a Flask app which would stress my CPU and containerized it. I named it stressor. You can get the Flask code from GitHub.
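At its heart, the app only has to translate a requested CPU percentage and duration into a lookbusy invocation. Here is a minimal sketch of that idea (the helper names are my own, and the exact lookbusy/timeout flags are assumptions; check the man pages and the actual repo for the real code):

```python
import subprocess

def build_stress_cmd(cpu_pct, seconds):
    """Build the command that loads the CPU to cpu_pct percent for the
    given number of seconds. lookbusy runs until killed, so it is
    wrapped in coreutils' timeout."""
    if not 0 < cpu_pct <= 100:
        raise ValueError("cpu_pct must be in (0, 100]")
    return ["timeout", str(seconds), "lookbusy", "-c", str(cpu_pct)]

def stress(cpu_pct, seconds):
    # Launched in the background so the web request can return immediately.
    return subprocess.Popen(build_stress_cmd(cpu_pct, seconds))
```

The Flask route then just pulls `cpu_pct` and `seconds` from the submitted form and calls `stress()`.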

You can start the container with the following command:

docker run -p 80:5000 --name stressor sbrakl/stressor

Here, if port 80 is used by another application, choose whatever port is available on your machine, say 5000.

Inside, the container runs the Flask app on port 5000.

Now, it comes in three flavours:

  • The Flask stress app running on the Werkzeug web server (flaskappwithWerkzeug). It is good for light concurrent load, but bad for 5+ concurrent requests.
  • The same as flaskappwithWerkzeug, but configured to run over SSL. It is useful in scenarios where you need to put the containers behind a load balancer and test the load balancer with SSL traffic.
  • The Flask app configured to run on uwsgi behind the nginx web server. It is configured to serve 16 concurrent requests.

You can find more information about the configuration in the GitHub repository.

Container scaling via python script

If you need to monitor Docker containers, there are plenty of tools available in the market. Some are paid, some are free, but you need to do your homework to find which suits you best.

If you are interested in tools, let me name a few: cAdvisor, InfluxDB, Sematext, Universal Control Plane. This is not a definitive list, but a good place to start.

For me, it was more of a Do-It-Yourself project, where I just needed a simple script to monitor CPU and take some scaling action based on the monitored CPU.

Docker provides a stats command, which reports statistics for running containers.

docker stats displays a live stream of the following container resource-usage statistics:

  • CPU % usage
  • Memory usage, limit, % usage
  • Network I/O
  • Disk I/O

The stats are updated every second. Here is a sample output:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
4827f0139b1f        10.94%              706.2 MB / 1.045 GB   67.61%              299.7 kB / 2.473 MB   456 MB / 327.3 MB

I was planning to exploit this command further for CPU monitoring.

The Docker engine provides the Docker Remote API, a REST API which can be used to communicate with the engine. Being a REST API, I can call it from any language I like.

Since I was in the mood for scripting, Python became the preferred choice of language.

I began my search for Python libraries which can connect to Docker. This can be very frustrating: when you search Google, various results show up which refer to different versions of the Docker client, but none EXPLICITLY mentions which. It took me a couple of days to figure it out.

Let me point out a few examples.

There is a tutorial which says:

pip install docker-py

>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')

There is also the official Docker client from Docker, which says the following:

pip install docker

>>> import docker 
>>> client = docker.DockerClient(base_url='unix://var/run/docker.sock')

Now, here you see that there are two different APIs to instantiate the client.

My advice would be to read the documentation on the respective GitHub sites; that will be the latest.

Coming back to the problem: getting container stats using the Docker Python client. I have written my pet-shop code to get the container CPU using the Python client, which you can get from my GitHub repository. In the remainder of this article, I will be referencing this script.

When I was developing this script, I wrote it against a standalone Docker engine v1.11, but I intended to run it against Docker engine 1.11 with Swarm 1.2.5 over TLS. I wrote a method, GetDockerClient, where I can connect to a local as well as a Swarm instance by passing an environment parameter. It is interesting to see how to connect to a remote TLS-enabled Docker host by passing client certificates.
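The gist of that method is just choosing the right connection settings per environment. A sketch of the idea (the host name, port and certificate paths here are placeholders, not the real ones from my setup):

```python
def get_docker_conn(env):
    """Return connection settings for a local engine or a TLS-enabled
    Swarm manager; the caller feeds these into the Docker client."""
    if env == "local":
        return {"base_url": "unix://var/run/docker.sock", "tls": None}
    # Remote Swarm manager over TLS, authenticated with client certificates
    return {
        "base_url": "tcp://swarm-manager:3376",        # placeholder host:port
        "tls": {
            "client_cert": ("certs/cert.pem", "certs/key.pem"),
            "ca_cert": "certs/ca.pem",
            "verify": True,
        },
    }
```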

If you use docker-compose, there is a catch in getting the container name. Compose formats the container name with the name of the folder where the compose file resides as a prefix, and the container count as a suffix, i.e. a container named 'coolweb' will translate to 'foldername_coolweb_1'. The script contains a getContainerInComposeMode method, which finds the container by formatting its name with the Compose pattern.
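The name mangling itself is easy to reproduce. A minimal sketch (this assumes the classic Compose v1 convention of '_' separators; newer Compose releases switched to '-'):

```python
def compose_container_name(project_folder, service, index=1):
    """Mimic docker-compose container naming: <project>_<service>_<n>,
    where the project defaults to the folder holding the compose file."""
    return "{}_{}_{}".format(project_folder, service, index)
```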

I know, you will be thinking the code isn't in its best form, but it was more about jugglery to get things done than crafting a masterpiece for the world to see.

Moving forward to getting the Docker stats. This came with another surprise: the Docker Python API doesn't have a stats() method on the client object. Instead, it has a stats() method on the container object. So, basically, you can't get stats the way the docker stats command does, which reports stats for all containers running on the Docker host. Bad! People on the internet have expressed their frustration about docker-py too, as in this blog.

Holding our focus, back to the code to get stats for a container. In the get_CPU_Percentage method, you will find the code to fetch container stats:

# 'con' is the container you need to monitor
constat = con.stats(stream=False)

stream=False means you get the stats just once, not a continuous stream object.

It gives back a JSON object like the one below. It is a large object, but I have kept only the CPU-related parts:

{
    "read": "2016-02-07T13:26:56.142981314Z",
    "precpu_stats": {
        "cpu_usage": {
            "total_usage": 0,
            "percpu_usage": null,
            "usage_in_kernelmode": 0,
            "usage_in_usermode": 0
        },
        "system_cpu_usage": 0,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 242581854769,
            "percpu_usage": [242581854769],
            "usage_in_kernelmode": 33910000000,
            "usage_in_usermode": 123040000000
        },
        "system_cpu_usage": 3367860000000,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "memory_stats": {
        "failcnt": 0,
        "limit": 1044574208
    }
}

precpu_stats are the CPU stats at the previous reading, and cpu_stats are the stats at the current point in time. If you look into the get_CPU_Percentage method, it digs through the JSON object, picks out the relevant fields and computes the percentage CPU for the container.
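The arithmetic is the same one the docker CLI uses: divide the growth in the container's total CPU time by the growth in the host's total CPU time, then scale by the number of cores. A sketch against the JSON above (field names as in the sample; error handling kept minimal):

```python
def cpu_percent(stats):
    """Compute container CPU % from one docker stats sample:
    container-time delta over system-time delta, times core count."""
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    ncpus = len(cpu["cpu_usage"]["percpu_usage"] or [])
    return (cpu_delta / float(system_delta)) * ncpus * 100.0
```

With the sample numbers above (one core), this works out to roughly 7.2%.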

Once I get the CPU percentage, I push it into an array at an interval of 2 seconds. It is a fixed-width array with 5 slots, so it holds only the last 5 readings, i.e. the last 10 seconds in terms of time.

Then I compute the mean of the array to get the mean CPU, which rules out uneven CPU spikes. I check this mean CPU against a CPU threshold, i.e. 50%. If the mean CPU is more than 50%, it triggers a scale-out action; if it's less than 50%, it triggers a scale-down action.
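That smoothing-plus-threshold logic is only a few lines. A sketch (window size, interval and threshold mirror the numbers above; the scale actions are stubs, not my actual script):

```python
from collections import deque

WINDOW = 5        # last 5 readings, taken every 2 s => 10 s of history
THRESHOLD = 50.0  # percent

readings = deque(maxlen=WINDOW)  # old readings fall off automatically

def record(cpu_pct):
    """Store one reading and return the scaling decision, if any."""
    readings.append(cpu_pct)
    if len(readings) < WINDOW:
        return None               # not enough history yet
    mean = sum(readings) / len(readings)
    if mean > THRESHOLD:
        return "scale-out"
    if mean < THRESHOLD:
        return "scale-down"
    return None
```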

The entire logic for scaling up and down, with a cooldown time in between, is in the ScaleContainer method of the script.

All these methods are called from the main script, which runs the code in a loop.

That's it. This brings me to the end of my Do-It-Yourself project on Docker scaling. I know it's not the ultimate script when it comes to container scaling.

Client Auth with Interlock and Nginx

I had a requirement of setting up Interlock + Nginx where the backend expects client authentication. If you have landed directly here, read my previous post to get the context about service discovery with Interlock and Nginx.

Note: This topic applies to Interlock 1.3 and Docker 1.11. If you are using Docker 1.12 or later, I recommend using the built-in Docker load balancer, which ships with SwarmKit.

Problem Definition:

Setup client authentication certificates with Interlock + Nginx

Why it is a problem:

Interlock controls the Nginx configuration file. You can't modify the Nginx configuration file directly, as Interlock would overwrite it whenever a container starts or dies.

Interlock supports certain data labels which allow you to tweak the Nginx configuration file. Read the Interlock data labels section of my previous post for more info.

There are data labels to set the SSL certificate, SSL only, SSL backend, etc. But there aren't any labels to set an SSL proxy certificate. I had even raised an issue, only to find it is not supported.

The problem, then: there is no data label to configure client-authentication certificates.

Possible Solution

If you need to set client-authentication certificates with Nginx, this serverfault thread hints at how to do it:

upstream backend {
    server some-ip:443;
}

server {
    listen 80;

    location / {
        proxy_ssl_certificate     certs/client.crt;
        proxy_ssl_certificate_key certs/client.key;

        proxy_pass https://backend;
    }
}
Now I needed to find a way with Interlock to take control of the template it uses to generate the Nginx configuration.

The hint comes from the Interlock docs, which show a configuration variable, TemplatePath, in the toml configuration. It lets us supply a template which Interlock will use, with variable substitution, to create the final Nginx config.
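For reference, the relevant knob sits in Interlock's toml config. A hedged sketch of what that section might look like (key names and paths here are assumptions based on the Interlock 1.3 docs; verify against the docs for your version):

```toml
ListenAddr = ":8080"
DockerURL = "tcp://swarm-manager:3376"

[[Extensions]]
  Name = "nginx"
  ConfigPath = "/etc/nginx/nginx.conf"
  TemplatePath = "/etc/interlock/nginx.conf.template"
```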

Again, an example of this template file can be found in the Interlock docs.

This template file was the perfect opportunity: modify it to include the client-auth certificates, and use it.

 location / {
 # Added by Shabbir 9th Dec 2016, For Client Authentication
 proxy_ssl_certificate /certs/client.crt;
 proxy_ssl_certificate_key /certs/client.key;
 proxy_ssl_password_file /certs/pass.txt;
 # Change End
 {{ if $host.SSLBackend }}proxy_pass https://{{ $host.Upstream.Name }};{{ else }}proxy_pass http://{{ $host.Upstream.Name }};{{ end }}

These certificates need to be present on the machine where the Nginx container will be launched, and they are added to the container via volume mounts.

Here is the extract of the docker-compose file which configures the nginx container:

nginx:
    image: nginx:latest
    entrypoint: nginx
    networks:
      - common
    ports:
      - 8443:8443
      - 8009
    links:
      - interlock
    command: -g "daemon off;" -c /etc/nginx/nginx.conf
    restart: unless-stopped
    environment:
      - "constraint:com.function==interlock"
    volumes:
      - ~/myapp/certs:/certs
      - ~/myapp/logs:/var/log/nginx

This is how I solved the issue of client authentication, but the same technique can be used to configure Interlock for all the unsupported Nginx scenarios, like TCP pass-through, etc.




Test WCF Service from the MEX binding for NamedPipe Connection


I got into trouble when there was a requirement to change the WCF binding from TCP to NamedPipe for one of the projects.

The problem was that there were two websites hosting the WCF service, one service in each website.

Now, for the TCP binding, the websites were separated by port number. So, the base address was something like



But there are no port numbers in NamedPipe. So, it needed a scheme like net.pipe://{someuniquename}/Service1.svc.

But the problem was that the service was hosted in WAS under IIS. I have no control over the addresses, as they are dictated by IIS.

See my last post, which mentions how to enter IIS binding information to get a unique hostname for the named-pipe configuration.

Now, I have added ABC and DEF in the binding information for each site. My named-pipe endpoint addresses were



Now, I have my server-side host endpoint as follows:

<service name="ABC.Service.AuthenticationGateway.ContentAccessService"
         behaviorConfiguration="ABC.ServiceHost.ServiceBehavior">
  <host>
    <baseAddresses>
      <add baseAddress="net.pipe://localhost/" />
    </baseAddresses>
  </host>
  <endpoint address="ContectAccessService"
            binding="netNamedPipeBinding"
            bindingConfiguration="NetPipeBinding_IContectAccessService"
            contract="ABC.Service.AuthenticationGateway.IContectAccessService" />
  <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" />
</service>

Now, here the relative address is ContectAccessService, so my absolute URL would be net.pipe://ABC/Service1.svc/ContectAccessService.

If you browse the URL of the svc file, you can see the WSDL for the service. My address was http://localhost:62/Service1.svc?wsdl, since my website was hosted on port 62.

Having done that, I needed to test my named-pipe service. I was using WcfTestClient, which lives under C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE. Adjust the path to wherever you have installed Visual Studio; 10.0 is replaced by the version number for 2010/2012/2013. For VS 2010, it's 10.0.

I was entering the full endpoint address in the WcfTestClient.exe Add Service dialog.

Doing so, I used to get this error:

Error: Cannot obtain Metadata from net.pipe://ABC/Service1.svc/ContectAccessService. If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation. Exchange Error URI: net.pipe://ABC/Service1.svc/ContectAccessService. Metadata contains a reference that cannot be resolved: 'net.pipe://ABC/Service1.svc/ContectAccessService'. The fault returned was ActionNotSupported: "The message with Action '' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None)."

I was baffled and perplexed by the error. The mex binding is already exposed, so why can't the WCF Test Client get the metadata-exchange data?

It turned out the error was a silly one on my side. The address that needed to be input in the WCF Test Client was net.pipe://ABC/Service.svc instead of net.pipe://ABC/Service1.svc/ContectAccessService.


Cleared the ITIL Examination on Monday

It is great to have cleared the ITIL Foundation EXIN-117 examination on Monday. It was a two-week journey from deciding to prepare for the exam to the exam itself.

For those of my friends who are preparing for the ITIL Foundation, here are some tips for you. I have not taken any classes for the ITIL Foundation; I relied on material available online to clear the exam.

First, there is a great article on the web about the ITIL Foundation which will resolve most of your queries.

I then enrolled for free ITIL training. It gave me a PDF and a series of videos to watch, but the videos were boring and I couldn't make any sense of ITIL service management from them. Luckily, I got a torrent link to the full "ITIL v3 – The Art of Service Online Learning Videos" course. This is a wonderful course, which made sense of most things. But a warning here: this course is obsolete. It is based on the ITIL v3 2009 format. ITIL v3 was revised in 2011, and now, instead of v2 and v3, there is just a single version (as of September 2012) known simply as ITIL.

So, if you are really interested in the up-to-date version of ITIL 2011, buy it for just USD 99; it is worth every cent you spend on it. From my point of view, the Foundation exam will cost around USD 200, so it is worth spending USD 100 to pass on the first attempt.

In Mumbai, the ITIL exam can be taken through either Prometric or Pearson Vue, or you can take it from your own laptop with the EXIN Anywhere exam. For more information on the EXIN Anywhere exam, refer to the EXIN site.

I went with Prometric because it is the cheapest among all the options, USD 99. I chose the Prometric center at TALENT EDGE FROM KARROX, 7TH FLR, BHAVESHWAR ARCADE, L.B.S. ROAD, GHATKOPAR WEST, MUMBAI, 400086, India. As for feedback on the center: it was great, and they had my booking in advance. When I went to the center, they put me straight into the test, and once it was complete, they handed me the result. A tip here: the Prometric center requires two IDs with your photo, address and signature. I took my passport and driving licence. To save time, I took photocopies of both, so when they asked I just handed them over, which saved the 15 minutes of them making photocopies, and I went straight to the exam terminal.

One more point: when you go to the Prometric site to book the ITIL exam, it will redirect you to a page offering the EXIN Anywhere exam, which will cost you USD 179 (price as of Sep 2012). Instead, click on "start here", which redirects to the Prometric exam site, where you can choose to take the ITIL exam through Prometric for USD 99.



That's all about the center. As for study material, the Art of Service material was sufficient for preparation. If you want more, there is the official guide, OGC ITIL v3 Service Lifecycle Introduction. It contains maps, flowcharts and diagrams, which give you a good visualization of the ITIL processes. Also, material mentioned on the ITSkeptic site, like the ITIL process wiki, will help you understand ITIL process responsibilities and how they stitch into one another, how the ITIL phases like Service Strategy, Design, Transition and Operation work together, and how CSI plays an important role in each phase. I was surprised to find that Service Level Management, which as per the ITIL definition belongs to the Service Design phase, plays an important role in all phases.

Another word of caution: much of the material you get on the net will be of the ITIL v3 2009 version, not the 2011 version. Even so, the 2011 version is just an update of v3 2009, so many of the concepts (90%) still hold true, and you can largely pass the exam on that material. There are also PDFs and even videos available which explain the differences between the two.

But preparation alone won't pass the exam. Like any competitive exam, you need to take mock tests to judge yourself. I got a VCE file containing the latest 153 questions, which will definitely help you pass if you take it a couple of times.
A piece of advice here: when taking mock exams, make sure you get the concept behind each question rather than just remembering the answer. In the real exam the wording may change, but the concept and context remain the same.

That's all from my side on preparing for the ITIL Foundation exam. Do leave comments if you liked this article or have any queries or feedback.

Passed the PMP exam on 13th August 2012

Hi All,

I cleared the PMP exam on 13th August 2012.

Here is my side of the story,

End of June, 2012.

I took the training from KnowledgeWoods, Mumbai, at Andheri. When I first registered for the training, I thought it would be just a formality to earn the 35 PDUs (Professional Development Units) needed for PMP. But the training was informative, and I got the Head First book in hard copy as well as soft copy, mind maps (these are key to remembering the ITTOs for the 42 processes), and other notes.

June, 2012.

I first started with Head First and found that the book does clear up concepts, but is not on par with the questions you get in the PMP exam.

So I got Rita Mulcahy's PMP® Exam Prep, 7th Edition, and started preparing from it. My study plan was 2 hours daily, and 4-6 hours on weekends. I am a project lead at a software company, and believe me, it requires real self-motivation and commitment to stick to the schedule. I live in Mumbai and, fortunately and unfortunately, I have an hour-long commute to the office and back. I read these books on my tablet during travel time, and even snuck some time out of office hours to read 2-3 pages.

First I read Head First, which takes 30-40 minutes per chapter, and then Rita, which takes around 2-3 hours per chapter. Then I go back for a second read of Rita, to make notes of points I could have missed on the first read. This takes another hour. So, basically, it should take 2-3 days to complete one chapter if you study 2 hours daily.

Now, the most important part: I took the questions at the end of each chapter, first from Head First (as they are easier) and then from Rita (tougher and closer to PMP exam questions). Reading the book makes you learn only 20%; attempting questions makes you learn the remaining 80%. In fact, attempting questions clears your concepts; book reading is just like storytelling, you don't grasp much.

This went on till the end of July, and I managed to finish all 13 chapters of both Head First and Rita.

July, 2012.

I started taking sample tests from KnowledgeWoods, but they didn't help me much, because most of the answers don't come with an explanation, and I need the explanation to understand why answer C is the right choice. This matters because in the PMP exam, questions won't use the same words as the mock questions, but will be along similar lines of context.

Then I came across the PMZilla site, where many people recommended PMStudy. PMStudy gives one mock test free, and I took it. It was a ground-shaking moment for me, because many questions were from the PMBOK guide, which I hadn't read at all.

Initially, I was looking for some free test options; I came across BrainBok, which offers free 50- and 200-question tests along the lines of PMStudy. This made me take the PMStudy PLUS pack. The exam layout was very similar to the PMStudy site; in addition, there was an on-screen calculator button for the basic calculations in the PMP exam.

One more point: taking 200 questions in one go takes 4 hours, but it is very difficult to get 4 free hours in one go with a normal working lifestyle. So I took the tests in chunks of 50-75 questions and continued the rest in the next available time slot. Most sites allow this break-and-continue format.

Once done with that, I took PMP Exam Prep by Christopher Scordo; you can find this book in the PMI Knowledge Centre -> eReads and Reference section.

The more tests you take, the more confident you become about clearing the PMP exam. But a caution here: take quality tests, as there are thousands of free PMP tests on the net which are not authentic and could muddle your concepts. The same question can have different answers on different sites.

I found Rajesh Nair's notes on PMZilla, which were superb and gave me a one-shot revision of most of the concepts across the 9 knowledge areas.

How do you know you are ready for the exam? Once I started scoring 70-80% in most of the mock exams, I was sure I was ready.

Remember, PMP requires a lot of concepts to be remembered; revise daily to keep all that information in your head till exam time.

August, first two weeks

I had 3 weeks of leave pending (thanks to last year's Sat-Sun working and busy schedule), so I took 8 days of leave for PMP. I scheduled the exam for 13th August.

On exam day, I didn't study much and had a good night's sleep the day before, to keep my brain free from anxiety and help it concentrate on 200 questions for the next 4 hours. After finishing the exam, the Prometric test machine's screen went blank for 30-40 seconds. My heart skipped a beat, and then the feedback screen appeared. I was thinking, dude, where is my result? After filling in the feedback form, the screen went blank again for 20-30 seconds, and then the most awaited screen appeared: You Passed. Phew!

I got 2 Proficient, 2 Moderately Proficient and 1 Below Proficient.

For more information about this grading, here is an extract from PMZilla:

I saw some more people ask this question about passing and proficiency levels, so I will answer it here.

PMI does not reveal what percentages and formulas it uses for grading the test. What I am saying here is what I have heard experts say; whether you want to believe it or not is up to you, as even I do not know if it's correct.

Out of the 200 questions, 25 questions are pre-test. As per the information I have these questions are removed from your test while calculating result.

When they say you need around 61% to pass, it means that out of 200 questions you need to answer 122 correctly. In actuality you are evaluated on 175 questions, and hence those 122 correct answers amount to 70%. So in reality you need 70% to pass. This is NOT followed now.

Proficiency in each process group is decided by:

  • <65% = Below Proficient (BP)
  • 65% to 85% = Moderately Proficient (MP)
  • >85% = Proficient (P)

As of now, in order to pass the PMP exam, not more than one process group should be Below Proficient.

Also, somewhere on the web I found that you can pass with 3 BPs and 2 MPs, as the pass score is calculated on question-mark weightage. That means if you answer the tough, higher-mark questions correctly, you can pass even with low BPs. So even if you score low, don't lose heart; you could still pass the exam.

Lessons learnt

You should apply for PMI membership around the time of your training, because it gives you access to PMI knowledge eBooks and references, where you will find the PMBOK guide and much more. Rita's book you need to purchase or borrow from a friend, as I did. And Head First I got from the KnowledgeWoods training.

Once you get the membership, you can apply for the PMP exam.

PMP Audits

PMI conducts audits, and an audit happens when you apply for the exam, not for membership. I was fortunate to be spared from it; otherwise, you will need a couple more days (take 8-12 days) for your application to be accepted, and 60-70 USD to courier the documents to the US.

Also, if you have a motivation problem, schedule the exam for a particular date (say 1 or 2 months in advance) even when you are not ready for it. The fear of losing 405 US dollars (I don't quote INR as the exchange rate changes all the time) will push you to prepare. In the worst case, due to a medical emergency or unavoidable office or family work, you can reschedule the exam for 70 US dollars, but this rescheduling cost will push you even further to prepare for the PMP exam.