Publish dotnet core app using MSDeploy

Recently, I got a task to set up a CI/CD pipeline for a .NET Core project. CI was easy; CD had challenges.

For CD, I need to deploy on Windows as well as Linux. In this post, I am going to describe the Windows side. Microsoft did a good job documenting ASP.NET Core and IIS integration: https://docs.microsoft.com/en-us/aspnet/core/publishing/iis

What it says is: first install IIS, then install the .NET Core Windows Server Hosting bundle on the hosting system, and then create the ASP.NET Core website. All of this is a one-time installation on the target machine.

The next part is deploying new versions of the same ASP.NET Core web application again and again. This is the part that needs to be automated as the continuous deployment (CD) pipeline.

The challenges were:

  1. You build the code on one machine and deploy it on another.
  2. In my case, due to IT policy, I wasn't allowed to create network share folders for deployment.
  3. It needs to be secure: only users with permissions should be able to deploy.

Now, how do I copy website code from one machine to IIS on another machine without a network share folder? As I was looking around for a solution, I found that Microsoft had already solved this problem way back in 2010 with Web Deploy.

Microsoft has always had confusing naming conventions: it is marketed as Web Deploy, but internally called MSDeploy. This tool is the unsung hero of deployment. When it comes to automated deployment, people talk of Chef and Puppet, but MSDeploy isn't acknowledged much.

Okay, what does this tool do? You can publish a website remotely, without needing to transfer files manually to that machine, and set up the website the way you want. That means it can create the app pool, set folder permissions, configure the port, configure the bindings, etc., not just transfer code files.

Now, there are a few good resources on the web, especially from Vishal Joshi http://vishaljoshi.blogspot.in/search?q=msdeploy and Sayed Ibrahim Hashimi http://sedodream.com/SearchView.aspx?q=msdeploy, who have written extensively on how to work with MSDeploy.

You need to install the Web Deploy 3.6 remote agent service on the target machine. If you can't figure out what this is, read through the following blog post on installing MSDeploy: http://chamindac.blogspot.in/2016/05/deploy-aspnet-core-10-to-remote-iis.html

In the diagram below, the build server is shown in black and the target deployment server in red.

[Image: MSDeploy.png]

On the build server, you need to run the following command to publish the .NET Core website: dotnet publish -o {path to publish folder}

dotnet publish -o D:\publishcode

Here, I want to deploy the website to "Default Web Site/WebApp" on the target server.

[Image: WebappInIIS.png]

Run the following command on the build machine to deploy. Here, the username and password are those of a local administrative user on the target machine.

servernameorip can be a computer name like "WL-TPM-348" or an IP address like "10.20.30.15".

By default, MSDeploy is installed in the C:\Program Files (x86)\IIS\Microsoft Web Deploy V3 folder. If it's installed elsewhere, use that path for msdeploy.exe.

D:>"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:iisApp="D:\publishcode" -dest:iisApp="Default Web Site/WebApp",ComputerName="servernameorip",UserName='ServerUser',Password='password',AuthType='NTLM' -allowUntrusted

Mind you, this is the bare-metal command to deploy code. It doesn't use a SiteManifest.xml, which configures the IIS web application for app pool, ports, bindings, etc., nor is it parameterized. But it's a good example to get started.
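
Parameterization usually enters the picture once you package the app first and then override settings at deploy time with -setParam. Here is a sketch of that flow (not the exact commands from my pipeline; verify the switches against the Web Deploy documentation):

REM 1. Package the published folder into a zip on the build server
"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:iisApp="D:\publishcode" -dest:package="D:\WebApp.zip"

REM 2. Deploy the package, overriding the target IIS application via a parameter
"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package="D:\WebApp.zip" -dest:auto,ComputerName="servernameorip",UserName='ServerUser',Password='password',AuthType='NTLM' -setParam:kind=ProviderPath,scope=iisApp,value="Default Web Site/WebApp" -allowUntrusted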

Hope you find this post useful.


Stressor – The Container

I have been working on Docker auto-scaling, and needed a container that could stress the CPU.

stress

In Linux, there is the stress utility, which stresses the CPU; simple and sweet. Now, I just needed to put this stress utility in a container and I would be good to go. A quick Google search got me a few containers on Docker Hub that were already built for this job. Great!

But I had problems with these. First, they start stressing the CPU as soon as they start, which is not what I need in my scenario. Second, I can't fire commands remotely: from outside the container and the Docker host, I have no control over when to start, how much CPU to stress, and how long to let them run.

There is another utility, lookbusy, which lets me control what CPU percentage I want to stress. That was important for me: a utility that gives me control over the CPU percentage, say a 70% load, unlike the stress utility, where I need trial and error to find what number would stress my CPU to 70%.

Second, I had these containers running behind a load balancer. I got an idea: I could simply develop a Python Flask web app. It would serve a web UI where I could specify the CPU percentage and the time to stress, and under the hood it would use lookbusy to stress the CPU. This way, even without accessing the Docker host, I can stress the host CPU remotely from a browser.

I created a Flask app that stresses the CPU, and containerized it. I named it stressor. You can get the Flask code from GitHub.
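
To give an idea of the approach, here is a minimal sketch of such an app (illustrative only, not the actual stressor code; the route and parameter names are made up, and it assumes lookbusy and coreutils' timeout are available in the image):

# app.py - minimal sketch: stress the CPU via lookbusy on demand
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route('/stress')
def stress():
    # e.g. /stress?cpu=70&duration=60 -> 70% load for 60 seconds
    cpu = request.args.get('cpu', '50')
    duration = request.args.get('duration', '30')
    # run lookbusy at the requested utilisation, let timeout kill it afterwards
    subprocess.Popen(['timeout', duration, 'lookbusy', '-c', cpu])
    return 'Stressing CPU at {}% for {}s'.format(cpu, duration)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)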

You can start the container with the following command:

docker run -p 80:5000 --name stressor sbrakl/stressor

Here, if port 80 is used by another application, choose whatever port is available on your machine, say 5000.

This container runs the Flask app on port 5000.
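
For example, to use host port 5000 instead:

docker run -p 5000:5000 --name stressor sbrakl/stressor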

Now, this comes in three flavours:

[Image: flavours-logo]

flaskappwithWerkzeug

The Flask stress app running on the Werkzeug web server. It's good for light concurrent loads, but bad for 5+ concurrent requests.

flaskappwithSSL

Same as flaskappwithWerkzeug, but configured to run over SSL. It's useful in scenarios where you need to configure containers behind a load balancer and want to test the load balancer with SSL traffic.

flaskappwithuwsgi

The Flask app is configured to run on uWSGI and the Nginx web server. It's configured to handle 16 concurrent requests.

You can find more information about the configuration in the GitHub repository.

Container scaling via python script

If you need to monitor Docker containers, there are plenty of tools available in the market. Some are paid, some are free, but you need to do your homework to find which suits you best.

If you are interested in tools, let me name a few: cAdvisor, InfluxDB, Prometheus.io, Sematext, Universal Control Plane. This is not a definitive list, but a good place to start.

For me, it was more of a do-it-yourself project, where I just needed a simple script to monitor CPU and take some scaling action based on the monitored CPU.

Docker provides a stats command, which reports resource usage for running containers.

docker stats displays a live stream of the following resource usage statistics for the running container(s):

  • CPU % usage
  • Memory usage, limit, % usage
  • Network i/o
  • Disk i/o

The stats are updated every second. Here is a sample output:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O
4827f0139b1f        10.94%              706.2 MB / 1.045 GB   67.61%              299.7 kB / 2.473 MB   456 MB / 327.3 MB

I was planning to build on this command for CPU monitoring.

The Docker engine provides the Docker Remote API, a REST API that can be used to communicate with the engine. Being a REST API, I can call it from any language I like.

Since I was in for scripting, Python became the preferred choice of language.

I began my search with Python libraries that can connect to Docker. This can be very frustrating: when you search Google, various results show up that refer to different versions of the Docker client, but none EXPLICITLY mentions which one. It took me a couple of days to figure it out.

Let me point out a few examples.

There is a tutorial at http://containertutorials.com/py/docker-py.html which says:

pip install docker-py

>>> from docker import Client
>>> cli = Client(base_url='unix://var/run/docker.sock')

There is the official Docker client from Docker, which says as follows:

pip install docker

>>> import docker 
>>> client = docker.DockerClient(base_url='unix://var/run/docker.sock')

Now, here you see there are two different APIs to instantiate the client.

My advice would be to read the documentation on the GitHub site; it will be the latest.

Coming back to the problem: getting container stats using the Docker Python client. I have written my pet-project code to get container CPU usage using the Python client. In the remainder of this article, I will be referencing my script code, which you can get from
https://github.com/sbrakl/dockercpumonitor

When I was developing this script, I was writing against a standalone Docker engine v1.11, but I intended to run it against Docker engine 1.11 with Swarm 1.25 over TLS. I wrote a method in clientConn.py, GetDockerClient, where I can connect to a local as well as a Swarm instance by passing an environment parameter. It's interesting to see how to connect to a remote TLS-enabled Docker host by passing client certificates.
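
For reference, a connection along those lines might look like this with the docker Python library (the host address and certificate paths below are placeholders, not the values from my script):

# Sketch: connect to a TLS-enabled remote Docker host using client certificates
import docker

tls_config = docker.tls.TLSConfig(
    client_cert=('/path/to/cert.pem', '/path/to/key.pem'),  # client certificate and key
    ca_cert='/path/to/ca.pem',                              # CA that signed the daemon's cert
    verify=True
)
client = docker.DockerClient(base_url='tcp://swarm-host:2376', tls=tls_config)
print(client.version())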

If you use docker-compose, there is a problem getting the container name. Compose formats the container name with a prefix of the folder name where the compose file resides and a suffix counting the container instances, i.e. a container named 'coolweb' will translate to 'foldername_coolweb_1'. utility.py contains the getContainerInComposeMode method, which gets the container by formatting the container name with the compose pattern.
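
The naming rule itself is simple; in sketch form, it is something like:

# Sketch of compose's naming convention: <project>_<service>_<index>
def compose_container_name(project, service, index=1):
    return '{0}_{1}_{2}'.format(project, service, index)

print(compose_container_name('foldername', 'coolweb'))  # foldername_coolweb_1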

I know you might be thinking the code isn't in its best form, but it was more about juggling to get things done than crafting a masterpiece for the world to see.

Moving forward to getting the Docker stats. It came with another surprise: the Docker Python API doesn't have a stats() method on the client object. Instead, it has a stats() method on the container object. Basically, that means you can't get stats the way you do with the docker stats command, which gives stats for all containers running on the Docker host. Bad! People over the internet have expressed their frustration about docker-py too, like in this blog.
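
So, to approximate docker stats for everything, you loop over the containers yourself. A minimal sketch with the official docker library:

# Sketch: take a one-shot stats snapshot of every running container
import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')
for con in client.containers.list():
    stat = con.stats(stream=False)   # single snapshot per container, see below
    print(con.name, stat['cpu_stats']['cpu_usage']['total_usage'])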

Holding our focus, let's move back to the code that gets stats for a container. In utility.py, in the get_CPU_Percentage method, you will find the code to get container stats:

# 'con' is the container you need to monitor
constat = con.stats(stream=False)

stream=False means you get the stats just once, not as a continuous stream object.
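
If you do want a rolling feed from a single container, the streaming variant looks like this (con as above; decode=True makes the generator yield parsed dicts instead of raw bytes):

# Streaming variant: yields a fresh stats dict roughly every second
for snapshot in con.stats(stream=True, decode=True):
    print(snapshot['read'])   # timestamp of each sample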

It gives back a JSON object like the one below. It's a large object, but I have highlighted just the CPU-related parts:

{
    "read": "2016-02-07T13:26:56.142981314Z",
    "precpu_stats": {
        "cpu_usage": {
            "total_usage": 0,
            "percpu_usage": null,
            "usage_in_kernelmode": 0,
            "usage_in_usermode": 0
        },
        "system_cpu_usage": 0,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 242581854769,
            "percpu_usage": [242581854769],
            "usage_in_kernelmode": 33910000000,
            "usage_in_usermode": 123040000000
        },
        "system_cpu_usage": 3367860000000,
        "throttling_data": {
            "periods": 0,
            "throttled_periods": 0,
            "throttled_time": 0
        }
    },
    "memory_stats": {
        ...
        "failcnt": 0,
        "limit": 1044574208
    }
}

precpu_stats are the CPU stats from a previous point of reference, say 10 seconds earlier; cpu_stats are the stats at the current point in time. If you look into the get_CPU_Percentage method, it digs through this JSON object, gets the relevant variables, and computes the CPU percentage for the container.
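
The usual computation compares the container's CPU delta against the whole system's CPU delta between the two snapshots. Here is a sketch of that standard formula, consistent with the fields above (see get_CPU_Percentage for the actual code):

# Sketch: container CPU % from one docker stats snapshot
def cpu_percent(stat):
    cpu = stat['cpu_stats']
    precpu = stat['precpu_stats']
    cpu_delta = cpu['cpu_usage']['total_usage'] - precpu['cpu_usage']['total_usage']
    system_delta = cpu['system_cpu_usage'] - precpu['system_cpu_usage']
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0   # first sample has zeroed precpu_stats, so guard the division
    num_cpus = len(cpu['cpu_usage']['percpu_usage'] or [0])
    return (cpu_delta / float(system_delta)) * num_cpus * 100.0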

Once I get the CPU percentage, I put it into an array at an interval of 2 seconds. It's a fixed-width array with 5 slots, so it holds only the last 5 readings, i.e. the last 10 seconds' worth of readings.

Then I compute the mean of the array to get the mean CPU, which rules out uneven CPU spikes. I compare this mean CPU against a CPU threshold, i.e. 50%. If the mean CPU is more than 50%, it triggers a scale-out action; if it's less than 50%, it triggers a scale-down action.
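
In sketch form, the loop looks something like this (reusing cpu_percent and con from the sketches above; scale_out/scale_down are hypothetical stand-ins for what ScaleContaienr actually does):

# Sketch: 5-slot rolling window of CPU readings driving the scale decision
import time
from collections import deque

WINDOW_SIZE, INTERVAL_SEC, THRESHOLD = 5, 2, 50.0
readings = deque(maxlen=WINDOW_SIZE)   # automatically drops the oldest reading

while True:
    readings.append(cpu_percent(con.stats(stream=False)))
    if len(readings) == WINDOW_SIZE:
        mean_cpu = sum(readings) / WINDOW_SIZE   # mean smooths out uneven spikes
        if mean_cpu > THRESHOLD:
            scale_out()      # hypothetical scale-up action
        else:
            scale_down()     # hypothetical scale-down action
    time.sleep(INTERVAL_SEC)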

The entire logic for scaling up and down, with a cooldown time in between, is in the ScaleContaienr method of utility.py.

All these methods are called from main.py, which runs the code in a loop.

That's it. This brings me to the end of the do-it-yourself project of Docker scaling. I know it's not the ultimate script when it comes to container scaling.

Client Auth with Interlock and Nginx

I had a requirement to set up Interlock + Nginx where the backend expects client authentication. If you have landed directly here, read my previous post to get context about service discovery, Interlock, and Nginx.

Note: this topic applies to Interlock 1.3 and Docker 1.11. If you are using Docker > 1.12, I recommend using the built-in Docker load balancer, which ships with SwarmKit.

Problem Definition:

Set up client authentication certificates with Interlock + Nginx.

Why is it a problem:

Interlock controls the Nginx configuration file. You can't modify it directly, as Interlock will overwrite it when a container starts or dies.

Interlock supports certain data labels that allow you to configure the Nginx configuration file. Read the Interlock data labels section of my previous post for more info.

There are data labels to set the SSL certificate, set SSL only, set the SSL backend, etc. But there isn't any label to set an SSL proxy certificate. I had even raised an issue, only to find it's not supported.

The problem: there is no data label to configure client authentication certificates.

Possible Solution

If you need to set client authentication certificates with Nginx, a Server Fault thread hints at how to do it:

upstream backend {
 server some-ip:443;
}

server {
 listen 80;


   location / {
      proxy_ssl_certificate certs/client.crt;
      proxy_ssl_certificate_key certs/client.key;

      proxy_pass https://backend;
   }
}

Now I needed to find a way, with Interlock, to get control of the template it uses for generating the Nginx configuration.

A hint came from the Interlock docs, which show a configuration variable, TemplatePath, in the TOML configuration. It lets us supply the template that Interlock uses, with variable substitution, to create the final Nginx config.

Again, I could get an example of this template file from the Interlock docs.

This template file was the perfect opportunity: modify the template to include the client auth certificates, and use it.

 location / {
 # Added by Shabbir 9th Dec 2016, For Client Authentication
 
 proxy_ssl_certificate /certs/client.crt;
 proxy_ssl_certificate_key /certs/client.key;
 proxy_ssl_password_file /certs/pass.txt;
 # Change End
 
 {{ if $host.SSLBackend }}proxy_pass https://{{ $host.Upstream.Name }};{{ else }}proxy_pass http://{{ $host.Upstream.Name }};{{ end }}
 }

These certificates need to be present on the machine where the Nginx container will be launched, and they are added to the container via volume mounts.

Here is the extract of the docker-compose file which configures the Nginx container:

nginx:
   image: nginx:latest
   entrypoint: nginx
   networks:
     - common     
   ports:
     - 8443:8443
     - 8009
   depends_on:
     - interlock
   command: -g "daemon off;" -c /etc/nginx/nginx.conf
   restart: unless-stopped
   labels:
       interlock.ext.name: nginx
   environment:
       - "constraint:com.function==interlock"
   volumes:
       - ~/myapp/certs:/certs
       - ~/myapp/logs:/var/log/nginx

This is how I solved the issue of client authentication, but this technique can be used to configure Interlock for all the unsupported Nginx scenarios, like TCP pass-through, etc.

Test WCF Service from the MEX binding for NamedPipe Connection

I ran into trouble when there was a requirement to change the WCF binding from TCP to named pipes for one of the projects.

The problem was that there were two websites hosting the WCF services, one service in each website.

Now, with the TCP binding, the websites were separated by port number, so the base addresses were something like:

net.tcp://localhost:6200/Service1.svc

net.tcp://localhost:6400/Service2.svc

But when doing the same for named pipes, there are no port numbers. So it needed a scheme like net.pipe://{someuniquename}/Service1.svc

But the problem was that the service was hosted in WAS under IIS. I had no control over the addresses, as they are dictated by IIS.

See my last post, which mentions how to enter IIS binding information to get a unique hostname for the named pipe configuration.

Now, I added ABC and DEF in the binding information for each site, so my named pipe endpoint addresses were:

net.pipe://ABC/Service1.svc

net.pipe://DEF/Service2.svc

Now, I have my server-side host endpoint as follows:

<service name="ABC.Service.AuthenticationGateway.ContentAccessService"
         behaviorConfiguration="ABC.ServiceHost.ServiceBehavior">
  <host>
    <baseAddresses>
      <add baseAddress="net.pipe://localhost/" />
    </baseAddresses>
  </host>
  <endpoint address="ContectAccessService"
            binding="netNamedPipeBinding"
            bindingConfiguration="NetPipeBinding_IContectAccessService"
            contract="ABC.Service.AuthenticationGateway.IContectAccessService" />
  <endpoint address="mex" binding="mexNamedPipeBinding" contract="IMetadataExchange" />
</service>

Now, the relative address here is ContectAccessService, so my absolute URL would be net.pipe://ABC/Service1.svc/ContectAccessService.
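
For reference, a client consuming this endpoint would point at that absolute address. A minimal client-side config sketch, reusing the names from the service config above (not taken from the actual project):

<client>
  <endpoint address="net.pipe://ABC/Service1.svc/ContectAccessService"
            binding="netNamedPipeBinding"
            bindingConfiguration="NetPipeBinding_IContectAccessService"
            contract="ABC.Service.AuthenticationGateway.IContectAccessService" />
</client>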

If you browse the URL for the .svc file, you can see the WSDL for the service. My address was http://localhost:62/Service1.svc?wsdl, as my website was hosted on port 62.

With that done, I needed to test my named pipe service. I was using WcfTestClient.exe, which is under C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE. Adjust the path to wherever you have installed Visual Studio; the 10.0 part varies with the version (2010/2012/2013). For VS 2010, it's 10.0.

I was entering the full endpoint address, net.pipe://ABC/Service1.svc/ContectAccessService, in the WcfTestClient.exe Add Service dialog.

Doing so, I kept getting this error:

Error: Cannot obtain Metadata from net.pipe://ABC/Service1.svc/ContectAccessService If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.WS-Metadata Exchange Error URI: net.pipe://ABC/Service1.svc/ContectAccessService Metadata contains a reference that cannot be resolved: 'net.pipe://ABC/Service1.svc/ContectAccessService'. <?xml version="1.0" encoding="utf-16"?><Fault xmlns="http://www.w3.org/2003/05/soap-envelope"><Code><Value>Sender</Value><Subcode><Value xmlns:a="http://www.w3.org/2005/08/addressing">a:ActionNotSupported</Value></Subcode></Code><Reason><Text xml:lang="en-US">The message with Action 'http://schemas.xmlsoap.org/ws/2004/09/transfer/Get' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None).</Text></Reason></Fault>

I was baffled and perplexed by the error. There was already a MEX binding exposed, so why couldn't the WCF Test Client get the metadata exchange data?

It turned out the error was a silly one on my side. The address that needed to be input in the WCF Test Client was net.pipe://ABC/Service1.svc instead of

net.pipe://ABC/Service1.svc/ContectAccessService.

Cleared ITIL Examination on Monday

It's great to have cleared the ITIL Foundation EX0-117 examination on Monday. It was a 2-week journey from deciding to prepare for the exam to the exam itself.

For those of my friends who are preparing for the ITIL Foundation, here are some tips for you. I did not take any classes for the ITIL Foundation; I relied on material available online to clear the exam.

First, there is a great article on the web for the ITIL Foundation which will resolve most of your queries: http://www.itskeptic.org/pass-itil-v3-foundation-exam-six-easy-and-free-ste

I then enrolled in the free ITIL training at www.freeitiltraining.com. It gave me a PDF and a series of videos to watch, but the videos were boring and I couldn't make any sense of ITIL service management from them. Luckily, I got a torrent link http://kat.ph/itil-v3-the-art-of-service-online-learning-videos-bani-2009-t3202738.html which gave me the "ITIL v3 – The Art of Service Online Learning Videos" full course. This is a wonderful course, which made sense of most of the things. But a warning here: this course is obsolete. It is based on the ITIL v3 2009 format. ITIL v3 was revised in 2011, and now, instead of v2 and v3, there is just a single version of ITIL in English (as of September 2012), known simply as ITIL.

So, if you are really interested in the up-to-date version of ITIL 2011, buy http://store.theartofservice.com/certification-kits/itil/itil-2011/itil-2011-foundation-complete-certification-kit-fourth-edition-study-guide-ebook-and-online-course.html. It's just USD 99 and is worth every cent you spend on it. From my point of view, the foundation exam will cost around USD 200, so it's worth spending USD 100 to pass on the first attempt.

In Mumbai, the ITIL exam can be taken through either Prometric or Pearson VUE, or from your own laptop with the EXIN Anywhere exam. For more information on the EXIN Anywhere exam, refer to http://www.exin.com/US/en/exams/exin-anywhere-exams-online

I went with Prometric because it's the cheapest among all the options, USD 99. I chose the Prometric center at Talent Edge from Karrox, 7th Flr, Bhaveshwar Arcade, L.B.S. Road, Ghatkopar West, Mumbai, 400086, India. As for feedback on the center, it was great; they had my booking in advance. When I went to the center, they straight away put me on the test, and once complete, they handed me the result. A tip here: the Prometric center requires two IDs with photo, address, and signature. I took my passport and driving licence. To save time, I took photocopies of them in advance, so when they asked, I just handed them over, which saved the time (15 mins) of getting photocopies taken, and I went straight to the exam terminal.

One more point here: when you go to the Prometric site for booking the ITIL exam, it will redirect you to http://www.exin.com/NL/en/exams/&exam=itil-v3-foundation. This site will give you the booking option for the EXIN Anywhere exam, which will cost you USD 179 (price as of Sep 2012). Instead, click on "start here", which will redirect to the Prometric exam site, where you can choose to take the ITIL exam through Prometric, which costs USD 99.

That's all for the center. As for the study material, the Art of Service material was sufficient for preparation. If you want more information, there is the official guide, OGC ITIL v3 Service Lifecycle Introduction. It contains maps, flowcharts, and diagrams which give you a good visualization of the ITIL processes. Also, the material mentioned on the IT Skeptic site (http://www.itskeptic.org/pass-itil-v3-foundation-exam-six-easy-and-free-ste), like the ITIL process wiki and so on, will help you understand ITIL process responsibilities and how they stitch into one another, even how the ITIL phases like Service Strategy, Design, Transition, and Operation work together, and how CSI plays an important role in each phase. I was surprised to find that Service Level Management, which per the ITIL definition belongs to the Service Design phase, plays an important role in all phases. See the following link for more information: http://itil.osiatis.es/ITIL_course/it_service_management/service_level_management/overview_service_level_management/overview_service_level_management.php

Another word of caution: much of the material you get on the net will be for the ITIL v3 2009 version, not the 2011 version. Even though the 2011 version is just an update of v3 2009, and hence many of the concepts (90%) still hold true, you can largely pass the exam on that material. But there are some PDFs available, and even a video by pultorak.com, which give the differences between the two: http://www.youtube.com/watch?v=jAi_280EVao

But preparation alone won't pass the exam; like any competitive exam, you need to take mock tests to judge yourself. I got the VCE file EXIN.CertifyMe.EX0-117.v2012-07-01.by.Jayz.107q from http://www.examcollection.com/exin/EXIN.CertifyMe.EX0-117.v2012-07-01.by.Jayz.107q.vce.file.html, which contains the latest 153 questions and will definitely help you pass if you take it a couple of times.
A piece of advice here: when taking mock exams, make sure you get the concept behind each question rather than just remembering the answer, as in the real exam the wording may change, but the concept and context will remain the same.

That's all from my side on preparing for the ITIL Foundation exam. Do leave comments if you like this article or have any queries or feedback.