
Globalization and localization in ASP.NET Core – Part 3

If you just landed on this page, it is part of a series of posts on localization / globalization:

Part 1 – Introduction

Part 2 – ASP.NET Core ways of Localization / Globalization

Part 3 – Old trustworthy ResourceManager

If you are following the series of posts: in this post, I will talk about working with ResourceManager.

The ResourceManager class is nothing new; it has been there since the inception of the .NET Framework. It was the de facto mechanism for working with localized resource files before ASP.NET Core introduced IStringLocalizer.

In this post, I want to demonstrate that ResourceManager still works in .NET Core, if you don't want to go with the fuzzy IStringLocalizer.

Let's dive into the code. In the sample application, you will find two methods in HelloController. In the Local method:

 // Requires: using System.Resources; using System.Reflection;
 string rsFileName = "api1.Resources.rs1";
 ResourceManager rm =
       new ResourceManager(rsFileName, Assembly.GetExecutingAssembly());
 string msg = rm.GetString("Hello");

Here, it creates a new instance of the ResourceManager, which takes a basename and the assembly where the resource files are embedded.

Basename is the root name of the resource file without its extension but including any fully qualified namespace name. For example, the root name for the resource file named
MyApplication.MyResource.en-US.resources is MyApplication.MyResource.

The assembly needs to be the one where the resource files are embedded. In our case it is the same as the one where the running code (HelloController) is present. I can get the current assembly which contains the code via Assembly.GetExecutingAssembly(), which returns the api1.dll assembly.

In our case, the resource files which ResourceManager will read are named rs1.resx and placed in the “Resources” folder. Hence, the basename will be api1.Resources.rs1.

ResourceManager.GetString looks at the current culture on the running thread and looks for the corresponding culture-specific resource file to get the value of the key “Hello”.

[Screenshot: culture-specific resource files in the sample project]

If a culture-specific resource is not found, say for “de-DE“, it will use the default resource, the one without any culture suffix, to get the value of the key. If the key isn't found in the resource files at all, GetString returns null.
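To make the fallback concrete, here is a minimal sketch reusing the rm instance from above (assuming no rs1.de-DE.resx is embedded):

 // Requires: using System.Globalization; using System.Threading;
 Thread.CurrentThread.CurrentUICulture = new CultureInfo("de-DE");
 string msg2 = rm.GetString("Hello"); // falls back to the neutral rs1.resx value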

If you need to get a resource value for a culture other than the current culture on the running thread, you can do so by specifying the culture in the GetString method.

string msg3 = rm.GetString("Hello", new System.Globalization.CultureInfo("fr-FR"));

As simple as that.

Now, level 2. I had a requirement where the code for reading resources was in a different assembly/dll than the one where the resource files are located.

If you look into the lib method, it calls the Lib1.ResRdr library to get the resource file message. But rs1.resx is in api1.dll, not in Lib1.Rdr.dll.

 public string lib()
 {
   string msg = "";
   msg = Lib1.ResRdr.Messenger.GetHello(); 
   return msg;
 }

If you look into the GetHello() method:

public static string GetHello()
 {
   // The calling assembly (api1.dll / api2.dll) is the one with the
   // embedded resource files, not this library itself.
   Assembly asm = Assembly.GetCallingAssembly();
   string rsFileName = asm.GetName().Name + ".Resources.rs1";
   ResourceManager rm = new ResourceManager(rsFileName, asm);
   return rm.GetString("Hello");
 }

It creates a new ResourceManager, passing the basename and the assembly which contains the embedded resources. Since it is a separate dll, it gets the calling assembly via Assembly.GetCallingAssembly(), which returns api1.dll or api2.dll, whichever called Lib1.Rdr.dll. Note that if you had used Assembly.GetExecutingAssembly() here, you would get Lib1.Rdr.dll.

For the basename, since the resources are embedded in the calling dll, the root namespace needs to be taken from that assembly's name, i.e. api1 or api2. So the basename would be api1.Resources.rs1 or api2.Resources.rs1.

The rest is taken care of by the ResourceManager: it reads the remote assembly's embedded resources and its satellite assemblies to get the translated value.

That’s all for globalization and localization in ASP.NET Core. Hope you find this series of posts informative.


Globalization and localization in ASP.NET Core – Part 2

If you just landed on this page, it is part of a series of posts on localization / globalization:

Part 1 – Introduction

Part 2 – ASP.NET Core ways of Localization / Globalization

Part 3 – Old trustworthy ResourceManager

In this post, I will be touching on the ASP.NET Core way of localization / globalization. There is already a good post on this in the official Microsoft Docs for ASP.NET Core at https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization, and I am not going to repeat what's in that post, but I will share a few highlights of that article.

In order to work with IStringLocalizer, the important part is that you need to add the following in ConfigureServices in Startup.cs:

// Add Localization services to the system
services.AddLocalization(options => options.ResourcesPath = "Resources");

// Configure supported cultures and localization options
services.Configure<RequestLocalizationOptions>(options =>
{
    var supportedCultures = new[]
    {
        new CultureInfo("en-US"),
        new CultureInfo("de-DE")
    };

    // State what the default culture for your application is.
    // This will be used if no specific culture can be determined for
    // a given request.
    options.DefaultRequestCulture = new RequestCulture("en-US", "en-US");

    // You must explicitly state which cultures your application supports.
    // These are the cultures the app supports for formatting
    // numbers, dates, etc.
    options.SupportedCultures = supportedCultures;

    // These are the cultures the app supports for UI strings,
    // i.e. we have localized resources for.
    options.SupportedUICultures = supportedCultures;
});

In Configure in Startup.cs:

// Configure localization.
var locOptions = app.ApplicationServices.GetService<IOptions<RequestLocalizationOptions>>();
app.UseRequestLocalization(locOptions.Value);

This code (UseRequestLocalization) sets the current culture and the current UI culture on the request execution thread.

By default, it determines the culture from the QueryString, a Cookie, and the Accept-Language request header. Read the first post for more insight.

To understand this better, I have created a sample application and put it on GitHub at https://github.com/sbrakl/aspnetcoreglobalization, where you can see the execution in action.

In HelloController.cs, in the local method of api1, you can see this code:

string cul = Thread.CurrentThread.CurrentCulture.Name;
string culUI = Thread.CurrentThread.CurrentUICulture.Name;

If you debug this application and your Startup.cs initialization code is correct, you will see the culture set on the thread.

In the same method, it gets the localized string from the IStringLocalizer _localizer:

msg = _localizer["Hello"];

This _localizer is injected into the controller by ASP.NET Core dependency injection, and it reads from the HelloController resources.
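For reference, the injection typically looks like this; a minimal sketch, where the action body is illustrative rather than the exact sample code:

 using Microsoft.AspNetCore.Mvc;
 using Microsoft.Extensions.Localization;

 public class HelloController : Controller
 {
     private readonly IStringLocalizer<HelloController> _localizer;

     // ASP.NET Core DI supplies the localizer for this controller type
     public HelloController(IStringLocalizer<HelloController> localizer)
     {
         _localizer = localizer;
     }

     public string Local()
     {
         // Looks up the "Hello" key in the HelloController resource files
         return _localizer["Hello"];
     }
 }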

[Screenshot: HelloController resource files]

The placement and naming of these resource files are important; otherwise, the localizer won't be able to read them. Naming and placement are covered in detail at https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization#resource-file-naming. That's all you need to work with IStringLocalizer.

If you are just as curious as me to find out how this IStringLocalizer works, then you need to check out its official GitHub repository, https://github.com/aspnet/Localization

.NET Core is open source, and you can read its code. When dependency injection (DI) is asked to inject an IStringLocalizer, it calls the ResourceManagerStringLocalizerFactory, which in turn creates an instance of ResourceManagerStringLocalizer.

When ResourceManagerStringLocalizerFactory creates the instance of ResourceManagerStringLocalizer, it adds a new instance of the ResourceManager:

protected virtual ResourceManagerStringLocalizer
    CreateResourceManagerStringLocalizer(
        Assembly assembly,
        string baseName)
{
    return new ResourceManagerStringLocalizer(
        new ResourceManager(baseName, assembly),
        assembly,
        baseName,
        _resourceNamesCache,
        _loggerFactory.CreateLogger<ResourceManagerStringLocalizer>());
}

ResourceManager then, in turn, works with the resource files to get the translated text for the key:

msg = _localizer["Hello"];

Here, “Hello” is the key, and whatever value is in the resource file, it will get that. The resource + culture file, to be specific.

IStringLocalizer –> DI –> ResourceManagerStringLocalizerFactory –> ResourceManagerStringLocalizer –> ResourceManger

That's the flow of how IStringLocalizer translates the messages.

A few interesting points here.

Type and Resource File Naming

IStringLocalizer takes a type, i.e.:

IStringLocalizer<HelloController> _localizer

HelloController is the type. Your resource files need to carry the HelloController name, and there is no default-language resource file. What does that mean?

In the ResourceManager era, the default-language resource file was the one without a culture suffix. For example, an abc resource would be named:

"abc.resx"
"abc.FR-fr.resx" //French
"abc.ES-es.resx" //Spanish

“abc.resx” is the default: if any culture other than “fr-FR” or “es-ES” is passed, this file is used.

But when working with IStringLocalizer, your default language is set in Startup.cs (see the Startup.cs code at the top). Hence, your resource files should be named:

"HelloController.en-US.resx"
"HelloController.FR-fr.resx"
"HelloController.ES-es.resx"


Location of resource files

In the Startup.cs code, you mention the resources path, i.e. where to find all the resources. It doesn't need to be “Resources”; it could be anything. When the assembly is built, the resource files go as embedded resources into the dll, and ASP.NET Core builds satellite assemblies for the different cultures you have mentioned.

[Screenshot: resource files in the build directory]

By definition, satellite assemblies do not contain code, except for that which is auto generated. Therefore, they cannot be executed as they are not in the main assembly. However, note that satellite assemblies are associated with a main assembly that usually contains default neutral resources (for example, en-US). A big benefit is that you can add support for a new culture or replace/update satellite assemblies without recompiling or replacing the application’s main assembly.

[Screenshot: dotPeek view of the assembly]

ResourceManager finds the resources embedded in the assembly / satellite assemblies based on the resources path and the type name.

[Screenshot: resource names inside the assembly]

Further

It's all good when you are doing translation in the same assembly where the resource files are located. But in my case, the resource files were in one assembly, and the code to read from them was in another assembly. This is where ASP.NET Core's IStringLocalizer faded and felt limited to me. I had to resort to the good old ResourceManager, which I describe in the other post.


Globalization and localization in ASP.NET Core – Part 1

This post is about my experience with globalization and localization in an ASP.NET Core application.

This is a series of posts on localization / globalization:

Part 1 – Introduction

Part 2 – ASP.NET Core ways of Localization / Globalization

Part 3 – Old trustworthy ResourceManager

First, for the fresher folks, let me explain what globalization and localization are.

Say you need a web application which caters to different languages, say English and French. A greeting like “Hello Friend” needs to be translated to “salut l'ami” in French. How do we do it? Simple: create files with name/value pairs, one for English and the other for French.

TextTranslate-English.txt -> Greeting =  “Hello Friend”

TextTranslate-French.txt -> Greeting =  “salut l'ami”

In Code -> var message  = TextTranslate[“Greeting”]

Based on the language, it will pick from the respective file. This is the idea of globalization and localization. The .NET folks figured this out long ago, back in the .NET 1.1 Framework, and the same can be done with resource files in .NET.

In Visual Studio, you can add a resource file to your project.

This resource file contains keys and values. Say, here the key is “Hello” and the value is “Hello-en-us”.

Similarly, you can add resource files for different cultures.

In .NET, a language is more than text: it's how you write dates, format numbers, currency symbols, etc. So the collection of all this is called a culture.

e.g. US English = 12/25/2017, UK English = 25/12/2017

Cultures are denoted by two-letter codes like en, or language-region pairs like en-US and en-GB; that's US English and UK English. es-ES is Spanish (Spain) and es-MX is Spanish (Mexico). If just es is mentioned, it defaults to Spanish (Spain); en defaults to en-US.

And the class which handles this is known as CultureInfo. The thread that runs your code has two properties: CurrentCulture and CurrentUICulture.

Culture is the .NET representation of the default user locale of the system. This controls default number and date formatting and the like.

UICulture refers to the default user interface language, a setting introduced in Windows 2000. This is primarily regarding the UI localization/translation part of your app.

You can get the running thread's culture with the following code:

 CultureInfo cul = Thread.CurrentThread.CurrentCulture;
 CultureInfo culUI = Thread.CurrentThread.CurrentUICulture;

Now, in ASP.NET, say your server is running in the UK and you get a request from France. How does the ASP.NET application know it has to serve French text?

Fortunately, in ASP.NET Core there is middleware which does it for us. By default, it has three ways:

  • QueryString
  • Cookie
  • AcceptLanguage Header

QueryString

http://localhost:5000/?culture=es-MX&ui-culture=es-MX

Cookie

You can read the cookie as follows. The cookie format is c=%LANGCODE%|uic=%LANGCODE%, where c is Culture and uic is UICulture; for example: c=en-GB|uic=en-US


You can write the cookie with the following code:

// requestCulture here is a RequestCulture instance for the desired culture
HttpContext.Response.Cookies.Append(
 CookieRequestCultureProvider.DefaultCookieName,
 CookieRequestCultureProvider.MakeCookieValue(requestCulture),
 new CookieOptions { Expires = DateTimeOffset.UtcNow.AddYears(1) });
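Conversely, inside a controller you can read the culture that the middleware resolved for the current request; a minimal sketch, assuming the Microsoft.AspNetCore.Localization namespace:

 using Microsoft.AspNetCore.Localization;

 // The middleware stores its result as a request feature
 var cultureFeature = HttpContext.Features.Get<IRequestCultureFeature>();
 string culture = cultureFeature.RequestCulture.Culture.Name;     // e.g. "fr-FR"
 string uiCulture = cultureFeature.RequestCulture.UICulture.Name;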

Headers

When the browser makes a request to a website, it sends the Accept-Language header, as seen in the Chrome debug window -> Network -> Request Headers.

[Screenshot: Accept-Language request header in Chrome]

You can read more about it here, https://docs.microsoft.com/en-us/aspnet/core/fundamentals/localization#implement-a-strategy-to-select-the-languageculture-for-each-request

This middleware sets the thread's CurrentCulture and CurrentUICulture using either the QueryString, the Cookie, or the Accept-Language header.

Once the current culture is set, DateTime.ToString() will return the date as per the current culture. In en-US it would be 12/25/2017, and in en-GB it would be 25/12/2017.
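A quick sketch to see this for yourself, using explicit cultures instead of the thread culture just for illustration:

 using System;
 using System.Globalization;

 var date = new DateTime(2017, 12, 25);
 // The short date pattern differs per culture
 Console.WriteLine(date.ToString("d", new CultureInfo("en-US"))); // 12/25/2017
 Console.WriteLine(date.ToString("d", new CultureInfo("en-GB"))); // 25/12/2017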

In the next post we will see how to implement the ASP.NET Core localization package and how to use it.

Publish dotnet core app using MSDeploy

Recently, I got a task to set up a CI-CD pipeline for a .NET Core project. CI was easy; CD had challenges.

In CD, I needed to deploy on Windows as well as Linux. In this post, I am going to describe the Windows side. Microsoft did a good job documenting ASP.NET Core and IIS integration: https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

What it says is: first install IIS, install the .NET Core Windows Server Hosting bundle on the hosting system, and then create the ASP.NET Core website. This is all part of a one-time installation on the target machine.

The next part is deploying new versions of the same ASP.NET Core web application again and again. This part needs to be automated as the continuous deployment (CD) pipeline.

The challenges were:

  1. You build the code on one machine and deploy on another
  2. In my case, due to IT policy, I wasn't allowed to create network share folders for deployment
  3. It needed to be secure, so that only users who had permissions could deploy

Now, how do I copy website code from one machine to another machine's IIS without a network share folder? As I was looking around for a solution, I found Microsoft had already solved this problem way back in 2010 with Web Deploy.

Microsoft has always had confusing naming conventions: they market it as Web Deploy, but internally call it MSDeploy. This tool is the unsung hero of deployment. When it comes to automated deployment, people talk of Chef and Puppet, but MSDeploy isn't acknowledged much.

Okay, what does this tool do? You can publish a website remotely without the need to transfer files manually to that machine, and set up the website the way you want. That means it can create the app pool, set folder permissions, configure the port, configure the bindings, etc. Not just transfer code files.

Now, there are a few good resources over the web, especially from Vishal Joshi http://vishaljoshi.blogspot.in/search?q=msdeploy and Sayed Ibrahim Hashimi http://sedodream.com/SearchView.aspx?q=msdeploy, who have written extensively on how to work with MSDeploy.

You need to install the Web Deploy 3.6 remote agent service on the target machine. If you can't figure out what this is, read through the following blog post on installing MSDeploy: http://chamindac.blogspot.in/2016/05/deploy-aspnet-core-10-to-remote-iis.html

[Diagram: build server in black, target deployment server in red]

On the build server, you need to run the following command to publish the .NET Core website: dotnet publish -o {path to publish folder}

dotnet publish -o D:\publishcode

Here, I want to deploy the website to “Default Web Site/WebApp” on the target server.

[Screenshot: WebApp under Default Web Site in IIS]

Run this command from the build machine to deploy. Here, the username and password are those of a local administrative user on the target machine.

servernameorip can be a computer name like “WL-TPM-348” or the IP of the machine like “10.20.30.15”.

By default, MSDeploy is installed in the C:\Program Files (x86)\IIS\Microsoft Web Deploy V3 folder. If it's installed elsewhere, use that path for msdeploy.exe.

D:>"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:iisApp="D:\publishcode" -dest:iisApp="Default Web Site/WebApp",ComputerName="servernameorip",UserName='ServerUser',Password='password',AuthType='NTLM' -allowUntrusted

Mind you, this is the bare-metal command to deploy code. It doesn't use a SiteManifest.xml, which configures the IIS web application's app pool, ports, bindings, etc. Neither is it parameterized. But it's a good example to get started.

Hope you find this post useful.


Monitoring Azure VMs with Power BI

I had been in a situation where we were running some performance tests on Azure VMs and needed to monitor their CPUs.

The Azure portal (Feb 2017) provides a nice metrics blade to monitor most of the resources in Azure, like Web Apps, Storage Accounts, SQL Azure, etc. I needed to monitor IaaS VMs.

I was running Linux VMs, and there is a nice blog in the Azure docs which shows how to enable Linux diagnostics on a VM and collect the metrics.

This worked for me, but it had two issues. First, the Azure metrics blade gives just three filters for time duration: past hour, entire day, and past week. There wasn't any provision for a custom duration, say, CPU for yesterday between 1 pm and 3 pm.

[Screenshot: Azure portal metric duration filter limitation]

Second, I had created a custom dashboard on the Azure portal for monitoring all my VMs' CPUs. But I couldn't share it with non-Azure users. Our performance testers, managers, etc. didn't have Azure subscriptions or any Azure knowledge. Creating read-only users with access to a particular dashboard's resource group, and getting them through the learning curve just to view CPU, was a big ask.

I knew the monitoring readings from the Linux diagnostics account are saved in Azure Storage Tables,

[Screenshot: Linux diagnostic tables in Azure Storage, as seen in the Visual Studio 2015 Cloud Explorer]

and I can use Power BI to connect to Azure Storage Tables. A Power BI report can be published to the web and viewed without an Azure subscription account. So most of my needs were answered.

I created a Power BI report where I connect to the Azure tables. Here, before you click Load, you need to filter the records; otherwise it will download the entire table's data into Power BI, which would be in GBs.

I edited the query to add a filter for the past week's data, and then loaded the filtered data into the Power BI data model (yes, Power BI stores its own data). Once the data was loaded, I needed to add a few new calculated columns to the data model, which I would use to define my new time hierarchy.

By default, Power BI provides a time hierarchy of Year, Quarter, Month, and Date. But for this data I wanted Month, Date, Hour, and Minute.

I created my new time hierarchy and built the report out of it. I created a basic report using a column chart which shows max CPU by minute, hour, day, and month. That is, even if the CPU hit 100% for just 5 minutes in an hour, the cumulative hour would show 100%. The same logic rolls up to day and month. It helps us in DevOps to see the VM usage in a larger perspective.

I am aware there are better tools in the market, like Azure OMS, and third-party ones like Datadog and Sysdig. But this was more of a DIY project rather than using those tools.

A word of caution when using Power BI with Azure Table Storage: every time Power BI hits the Azure storage table to fetch data, there will be egress charges on Azure Storage. You can use something called Power BI Embedded and host the Power BI report in the same region as your storage account to avoid these charges.

I have captured the whole process in this YouTube video, which:

  1. Connects to an Azure table
  2. Filters the data from the Azure table
  3. Adds columns to the data model for the new time hierarchy
  4. Creates a report with the new data model
  5. Explores drill-down in Power BI

If you have a suggestion or comment, do let me know.

Stressor – The Container

I have been working on Docker auto-scaling stuff and needed a container which could stress the CPU.

stress

In Linux, there is the stress utility, which stresses the CPU: simple and sweet. Now I needed to put this stress utility in a container and be good to go. A quick Google search got me a few containers on Docker Hub which were already built for this job. Great!

But I had problems with these. First, they start stressing the CPU as soon as they start, which is not what I needed in my scenario. Second, I couldn't fire commands remotely, i.e. from outside the container and the Docker host; I didn't have control over when to start, how much CPU to stress, and how long to let them run.

There is another utility, lookbusy, which lets me control what CPU percentage I want to stress. That was important for me: a utility which gives me control over the CPU percentage, say a 70% load, unlike the stress utility, where I would need trial and error to find what number would stress my CPU to 70%.

Second, I had these containers running behind a load balancer. I got an idea: I could simply develop a Python Flask web app. It would serve a web UI where I could specify the percentage of CPU and the time to stress, and under the hood it would use lookbusy to stress the CPU. This way, even without accessing the Docker host, I can stress the host CPU remotely from a browser.

I created a Flask app which would stress my CPU and containerized it. I named it stressor. You can get the Flask code from GitHub.
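The core idea is small enough to sketch here; a minimal, hypothetical version where the route and parameter names are illustrative, not the exact repo code:

 # A minimal sketch: stress the CPU via lookbusy from a Flask endpoint
 import subprocess
 from flask import Flask, request

 app = Flask(__name__)

 @app.route("/stress")
 def stress():
     percent = request.args.get("percent", "50")  # target CPU load in %
     seconds = request.args.get("seconds", "60")  # how long to run
     # lookbusy keeps the CPU at the requested utilization;
     # coreutils 'timeout' stops it after the given duration
     subprocess.Popen(["timeout", seconds, "lookbusy", "-c", percent])
     return "Stressing CPU at {}% for {}s".format(percent, seconds)

 if __name__ == "__main__":
     app.run(host="0.0.0.0", port=5000)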

You can start the container with the following command:

docker run -p 80:5000 --name stressor sbrakl/stressor

Here, if port 80 is used by another application, choose whatever port is available on your machine, say 5000.

This container runs the Flask app on port 5000.

Now, this comes in three flavours:


flaskappwithWerkzeug

The Flask stress app running on the Werkzeug web server. It is good for lightweight concurrent loads, but bad for 5+ concurrent requests.

flaskappwithSSL

Same as flaskappwithWerkzeug, but configured to run on SSL. It is useful in scenarios where you need to configure containers behind a load balancer; this will test the load balancer for SSL traffic.

flaskappwithuwsgi

The Flask app configured to run on the uWSGI and nginx web servers. It is configured to serve 16 concurrent requests.

You can find more information about the configuration in the GitHub repository.

Container scaling via python script

If you need to monitor Docker containers, there are plenty of tools available in the market. Some are paid, some are free, but you need to do your homework to find which suits you best.

If you are interested in tools, let me name a few: cAdvisor, InfluxDB, Prometheus.io, Sematext, Universal Control Plane. This is not a definitive list, but it's a place to start.

For me, it was more of a do-it-yourself project, where I just needed a simple script to monitor CPU and take some scaling action based on the monitored CPU.

Docker provides a command, stats, which provides the stats of running containers.

docker stats displays a live stream of the following resource usage statistics for the container(s):

  • CPU % usage
  • Memory usage, limit, % usage
  • Network i/o
  • Disk i/o

The stats are updated every second. Here is a sample output:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
4827f0139b1f        10.94%              706.2 MB / 1.045 GB   67.61%              299.7 kB / 2.473 MB   456 MB / 327.3 MB

I was planning to exploit this command for CPU monitoring.

The Docker engine provides the Docker Remote API, which is a REST API and can be used to communicate with the engine. Being a REST API, I can call it from any favorite language I want.

Since I was more in for scripting, Python became the preferred choice of language.

I began my search with Python libraries which can connect to Docker. This can be very frustrating: when you search Google, various results show up which refer to different versions of the Docker client, but none EXPLICITLY mentions which. It took me a couple of days to figure it out.

Let me point out few examples

There is a tutorial at http://containertutorials.com/py/docker-py.html which says:

pip install docker-py

>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')

There is the official Docker client from Docker, which says the following:

pip install docker

>>> import docker 
>>> client = docker.DockerClient(base_url='unix://var/run/docker.sock')

Now, here you see there are two different APIs to instantiate the client.

My advice would be to read the documentation on the GitHub site; it will be the latest.

Coming back to the problem: getting container stats using the Docker Python client. I have written my pet-project code to get the Docker container CPU using the Python client. In the remainder of this article, I will be referencing my script code, which you can get from
https://github.com/sbrakl/dockercpumonitor

When I was developing this script, I was writing against a standalone Docker engine v1.11, but I intended to run against Docker engine 1.11 with Swarm 1.25 on TLS. I wrote a method in clientConn.py, GetDockerClient, where I can connect to a local as well as a Swarm instance by passing an environment parameter. It is interesting to know how to connect to a remote TLS-enabled Docker host by passing client certificates.
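In case you are wondering what that looks like, here is a minimal sketch with the docker Python client; the certificate paths and host address are illustrative:

 import docker

 # Client certificates issued for the Docker/Swarm endpoint
 tls_config = docker.tls.TLSConfig(
     client_cert=("client-cert.pem", "client-key.pem"),
     ca_cert="ca.pem",
     verify=True)

 # 2376 is the conventional TLS port for the Docker daemon
 client = docker.DockerClient(base_url="tcp://swarm-host:2376", tls=tls_config)
 print(client.containers.list())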

If you use docker-compose, there is a problem getting the container name. Compose formats the container name with a prefix of the folder name where the compose file resides and a suffix with the count of containers, i.e. a container named ‘coolweb' will translate to ‘foldername_coolweb_1'. utility.py contains the getContainerInComposeMode method, which gets the container by formatting the container name with the compose pattern.

I know, you would be thinking the code isn't in its best form, but it is more jugglery to get things done than a masterpiece for the world to see.

Moving forward to getting the Docker stats. It came with another surprise: the Docker Python API doesn't have a stats() method on the client object. Instead, it has a stats() method on the container object. Basically, that means you can't get stats the way the docker stats command does, which reports on all containers running on the Docker host. Bad! Even people over the internet have expressed their frustration about docker-py, like in this blog.

Holding our focus and moving back to the code to get stats for a container: in utility.py, in the get_CPU_Percentage method, you will find the code to get container stats.

# 'con' is container which you need to monitor
constat = con.stats(stream=False)

stream=False means you get the stats just once, not as a continuous stream object.
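For comparison, the streaming variant looks like this; a small sketch, where decode=True makes the client yield parsed dicts:

 # Streaming variant: yields a fresh stats dict roughly once per second
 for stat in con.stats(stream=True, decode=True):
     print(stat["cpu_stats"]["cpu_usage"]["total_usage"])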

It gives back a JSON object something like the one below. It's a large object, but I have highlighted just the CPU-related stuff:

{
    "read": "2016-02-07T13:26:56.142981314Z",
    "precpu_stats": {
        "cpu_usage": {
            "total_usage": 0,
            "percpu_usage": null,
            "usage_in_kernelmode": 0,
            "usage_in_usermode": 0
        },
        "system_cpu_usage": 0,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 242581854769,
            "percpu_usage": [242581854769],
            "usage_in_kernelmode": 33910000000,
            "usage_in_usermode": 123040000000
        },
        "system_cpu_usage": 3367860000000,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "memory_stats": {
        ...
        },
        "failcnt": 0,
        "limit": 1044574208
    }

precpu_stats are the CPU stats at the previous reference point, say 10 seconds earlier; cpu_stats are the stats at the current point in time. If you look into the get_CPU_Percentage method, it juggles the JSON object, picks out the relevant variables, and computes the percentage CPU for the container.
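The computation boils down to roughly this; a sketch of the equivalent logic, where the helper name is mine, not the repo's:

 def compute_cpu_percent(stats):
     # Delta of container CPU time vs delta of total system CPU time
     cpu = stats["cpu_stats"]
     precpu = stats["precpu_stats"]
     cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
     system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
     if cpu_delta <= 0 or system_delta <= 0:
         return 0.0
     num_cpus = len(cpu["cpu_usage"]["percpu_usage"])
     # Scale by core count so a fully busy 4-core container reads as 400%,
     # matching what 'docker stats' reports
     return (cpu_delta / float(system_delta)) * num_cpus * 100.0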

Once I get the CPU percentage, I put it into an array at an interval of 2 seconds. It's a fixed-width array with 5 slots, so it holds only the last 5 readings, i.e. the last 10 seconds' readings in terms of time.

Then I compute the mean of the array to get the mean CPU, which rules out uneven CPU spikes. I compare this mean CPU against a CPU threshold, i.e. 50%. If the mean CPU is more than 50%, it triggers a scale-out action; if it's less than 50%, it triggers a scale-down action.
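Put together, the monitoring loop is conceptually like this; a sketch, where scale_out/scale_down stand in for the repo's scaling calls:

 import time
 from collections import deque

 readings = deque(maxlen=5)   # last 5 samples ~= last 10 seconds
 THRESHOLD = 50.0             # percent CPU

 while True:
     readings.append(compute_cpu_percent(con.stats(stream=False)))
     mean_cpu = sum(readings) / len(readings)
     if mean_cpu > THRESHOLD:
         scale_out()          # hypothetical: add a container
     else:
         scale_down()         # hypothetical: remove a container
     time.sleep(2)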

The entire logic for scaling up and down, with a cooling time in between, is in the ScaleContaienr method of utility.py.

All these methods are called from main.py, which runs the code in a loop.

That's it. This brings me to the end of the do-it-yourself project on Docker scaling. I know it's not the ultimate script when it comes to container scaling.