Client Auth with Interlock and Nginx

I had the requirement of setting up Interlock + Nginx where the backend expects client authentication. If you have landed directly on this page, read my previous post to get the context about service discovery, Interlock, and Nginx.

Note: This topic applies to Interlock 1.3 and Docker 1.11. If you are using Docker 1.12 or newer, I recommend using the built-in Docker load balancer, which ships with SwarmKit.

Problem Definition:

Set up client authentication certificates with Interlock + Nginx

Why is it a problem?

Interlock controls the Nginx configuration file. You can’t directly modify the Nginx configuration file, as Interlock will overwrite it whenever a container starts or dies.

Interlock supports certain data labels which allow you to configure the Nginx configuration file. Read the Interlock data label section of my previous post for more info.

There are data labels to set the SSL certificate, SSL only, SSL backend, etc. But there isn’t any label to set the SSL proxy certificate. I had even raised an issue, only to find out it is not supported.

The problem: there is no data label to configure client authentication certificates.

Possible Solution

If you need to set client authentication certificates with Nginx, serverfault threads hint at how to do it:

upstream backend {
   server some-ip:443;
}

server {
   listen 80;

   location / {
      proxy_ssl_certificate certs/client.crt;
      proxy_ssl_certificate_key certs/client.key;

      proxy_pass https://backend;
   }
}

Now I needed to find a way, with Interlock, to get control of the template it uses to generate the Nginx configuration.

The hint comes from the Interlock docs, which show a configuration variable TemplatePath in the toml configuration. It allows us to supply the template, which Interlock will use with variable substitution to create the final Nginx config.
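
For reference, this is roughly how the nginx extension section of config.toml looks with a custom template; the paths here are from my setup, so treat them as placeholders and check the Interlock configuration docs for the full list of keys:

[[Extensions]]
  Name = "nginx"
  ConfigPath = "/etc/nginx/nginx.conf"
  PidPath = "/var/run/nginx.pid"
  TemplatePath = "/etc/interlock/nginx.conf.template"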

Again, you can find an example of this template file in the Interlock docs.

This template file gave me the perfect opportunity to include the client auth certificates in the template and use it.

 location / {
    # Added by Shabbir 9th Dec 2016, For Client Authentication
    proxy_ssl_certificate /certs/client.crt;
    proxy_ssl_certificate_key /certs/client.key;
    proxy_ssl_password_file /certs/pass.txt;
    # Change End

    {{ if $host.SSLBackend }}proxy_pass https://{{ $host.Upstream.Name }};{{ else }}proxy_pass http://{{ $host.Upstream.Name }};{{ end }}
 }

These certificates need to be present on the machine where the Nginx container will be launched; they are added to the container via volume mounts.

Here is an extract of the docker-compose file which configures the Nginx container:

nginx:
  image: nginx:latest
  entrypoint: nginx
  networks:
    - common
  ports:
    - 8443:8443
    - 8009
  depends_on:
    - interlock
  command: -g "daemon off;" -c /etc/nginx/nginx.conf
  restart: unless-stopped
  labels:
    interlock.ext.name: nginx
  environment:
    - "constraint:com.function==interlock"
  volumes:
    - ~/myapp/certs:/certs
    - ~/myapp/logs:/var/log/nginx

This is how I solved the issue of client authentication, but the same technique could be used to configure Interlock for all the unsupported Nginx scenarios like TCP pass-through, etc.


Service discovery sauce with Interlock and Nginx

Background information: If you need to know more about service discovery, what options are available, and how things changed in Docker 1.12, check out my service discovery post.

This article applies to the standalone Docker Swarm, as opposed to the Docker 1.12 SwarmKit, where the Docker engine has swarm mode integrated.

Note: From this point forward in this article, whenever I refer to Swarm, I mean the standalone Swarm and not the SwarmKit one.

I started this work in the Docker 1.11 era, when Swarm was a separate tool from the Docker engine and you needed to launch a couple of swarm containers to set up the swarm cluster.

IMHO, Interlock + Nginx is the poor man’s tool for service discovery. There are better options available, but for me it all started with a look at the swarm at scale example on the Docker site. It shows how to use Interlock with Nginx for load balancing and service discovery. Not knowing much about service discovery, having a working example demonstrated was good enough for me to engage with Interlock.

Interlock is a swarm event listener tool, which listens to swarm events and performs the respective action on an extension. As of now (Dec 2016), it supports only two extensions, Nginx and HAProxy. Both act as service discovery and load balancer. There was another extension planned called beacon, which would be used for monitoring and perhaps autoscaling, but it now seems to be abandoned, thanks to the Docker 1.12 SwarmKit.

In simple terms, there are three actors in the play: the Swarm manager, Interlock, and Nginx acting as a load balancer. Best part, all three run as Docker containers. That means no installation or configuration on the host VM, and they are easy to set up.

Interlock listens to the swarm master for container start or stop events. When it hears about one, it updates the Nginx config. The animated diagram below explains it better.

[Animated figure: Interlock in action]

Now that we know the “what” part of Interlock, let’s move towards the “how” part. Unfortunately, there isn’t much documentation on Interlock available on the net. The Interlock QuickStart guide provides some clues, but it is missing the enterprise part. It doesn’t guide you much if you are using docker-compose in a multi-host environment.

For the latter part, you can draw some inspiration from the Docker swarm at scale example, and there is an obsolete lab article which shows how to use Interlock with docker-compose, but the Interlock commands in it are obsolete for the 1.3 version (the latest as of Oct 2016).

I am not planning to write an entire how-to article for Interlock, but I intend to share some hints that are useful when running a multi-host Docker cluster with Interlock.

The first part is the Docker swarm cluster. Articles like Docker swarm at scale and the Codeship one use docker-machine to create an entire cluster on your desktop/laptop. I was privileged enough to have an R&D cloud account and used Azure to create my Docker cluster. You can use tools like UCP, Docker Machine, ACS, Docker Cloud, and many others on the market, or just create the cluster manually by hand. It doesn’t matter where you run your cluster or how you created it; as long as you have a working swarm cluster, you are good to play with Interlock and Nginx.

Another piece of advice: while setting up the swarm cluster, it is not mandatory, but it is good practice to have a dedicated host for the Interlock and Nginx containers. You can see the swarm at scale article, where they use Docker engine labels to tag a particular host for this purpose.

If you are using docker-machine, you can give Docker engine labels similar to the below:

[Figure: Docker engine labels]
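
In case the screenshot doesn’t render well here, a rough sketch of such a docker-machine invocation looks like the following; the driver and the host name swarm-lb-node are just placeholders from my own setup:

docker-machine create -d virtualbox \
  --engine-label com.function=interlock \
  swarm-lb-node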

And in the docker-compose.yml file, you would specify the constraint so that the container is scheduled on the host labelled com.function==interlock.

[Figure: Interlock in the docker-compose.yml file]
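
If the screenshot is hard to read, the Interlock service definition looks roughly like this; the image tag and host paths are assumptions from my setup, so adjust them for yours:

interlock:
  image: ehazlett/interlock:1.3.0
  command: -D run -c /etc/interlock/config.toml
  ports:
    - 8080
  environment:
    - "constraint:com.function==interlock"
  volumes:
    - ~/myapp/config.toml:/etc/interlock/config.toml
    - ~/myapp/certs:/certs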

Now, in order to prepare the Interlock sauce, you need the following ingredients to set it right:

  1. Interlock Config (config.toml)
  2. Interlock data labels
  3. (Optional) SSL certificates

Interlock Configuration File

Interlock uses a configuration store to configure options and extensions. There are three places where this configuration can be saved:

1) File, 2) Environment variable, or 3) Key-value store

I find it convenient to store it in a file. By Interlock convention, this file is named config.toml.

Content: This file contains key-value options which are used by Interlock and its extensions. For more information, see https://github.com/ehazlett/interlock/blob/master/docs/configuration.md
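
As a rough sketch, a minimal config.toml for the swarm + Nginx setup looks something like the following; the DockerURL, certificate paths, and the PollInterval format are assumptions from my environment, so verify the exact keys against the configuration doc linked above:

ListenAddr = ":8080"
DockerURL = "tcp://swarm-manager:3375"
TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"
# PollInterval = "3s"   # uncomment if the event stream is unreliable (value format assumed)

[[Extensions]]
  Name = "nginx"
  ConfigPath = "/etc/nginx/nginx.conf"
  PidPath = "/var/run/nginx.pid"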

Location: If you are running Swarm on multiple hosts, this file needs to be present on the VM which will host the Interlock container. You can then mount the file into the container by volume mapping. See the docker-compose file above for more info.

TLS Setting: If you are running Swarm on TLS, you need to set the TLSCACert, TLSCert, and TLSKey variables in the toml file. For more info, read up on setting up Docker on TLS and Swarm on TLS.

TLSCACert = "/certs/ca.pem"
TLSCert = "/certs/cert.pem"
TLSKey = "/certs/key.pem"

Plus, these certificates need to be present on the VM which will host the Interlock container. You can then mount the certificates via a volume mount in the compose file. See the docker-compose file above for an example.

PollInterval: If your Interlock is not able to connect to Docker Swarm, try setting the PollInterval to 3 seconds. In some environments the event stream can be interrupted, and Interlock then needs to rely on a polling mechanism.

Interlock Data Labels

Now we have just set up Interlock with Nginx. If you observe the config.toml file carefully, nowhere have we specified which containers we need to load balance. So how does Interlock get this information?

This brings us to the Interlock Data Labels. They are a set of labels you pass to the container; when Interlock inspects them, it knows which containers it needs to load balance.

The example below shows how to pass Interlock labels along with other container labels.

[Figure: Example of Interlock data labels in docker-compose.yml]
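
If the screenshot is not legible, a typical service with Interlock data labels looks roughly like this; the service name, image, hostname, and domain are made up for illustration:

web:
  image: myorg/mywebapp:latest
  ports:
    - 8080
  labels:
    interlock.hostname: "web"
    interlock.domain: "example.com"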

You can get more information about the data labels at https://github.com/ehazlett/interlock/blob/master/docs/interlock_data.md

There is another example in the Interlock repo itself, which shows how to launch Interlock with Nginx as the load balancer in a Docker swarm via compose.

https://github.com/ehazlett/interlock/blob/master/docs/examples/nginx-swarm/docker-compose.yml

(Optional) SSL certificates

As seen in the Interlock labels above, there are a lot of Interlock variables related to SSL.

To understand this better, we will enumerate the different SSL combinations with which we can set up the load balancer.

I) NO SSL

You can have a flow something like this:

[Figure: HTTP only]

Here, we are not using SSL at all.

II) SSL Termination at the Load Balancer 

Or, if you are planning to use Nginx as the frontend internet-facing load balancer, you should do something like this:

[Figure: SSL termination at the load balancer]

III) Both legs with SSL

In my case, there was a compliance requirement that all traffic, internal or external, needs to be SSL/TLS. So I needed to do something like this:

[Figure: HTTPS-only traffic]

For cases II and III, you need to set the Interlock SSL-related data labels. Let me give a quick explanation of the important ones.

interlock.ssl_only: If you want your load balancer to listen to HTTPS traffic only, set this to true. If false, Interlock configures Nginx to listen to both HTTP and HTTPS. If true, it sets a redirection rule on HTTP to redirect it to HTTPS.

interlock.ssl_cert: This needs to be the path of the X509 certificate which the load balancer will use to serve frontend traffic. The certificate Common Name should equal the load balancer name. Plus, in a multi-host environment, this certificate needs to be present on the machine which launches the Nginx container. You can then mount the file into the container by volume mapping. See the docker-compose file above for more info.

interlock.ssl_cert_key: The private key of the X509 certificate. The same goes for the key; it needs to be on the VM which will run the Nginx container.
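
Put together, the SSL-related labels on a load-balanced service look roughly like this; the hostname, domain, and certificate paths are illustrative only, and how the cert path is resolved is described in the interlock_data docs, so double-check against your own mounts:

web:
  image: myorg/mywebapp:latest
  labels:
    interlock.hostname: "web"
    interlock.domain: "example.com"
    interlock.ssl_only: "true"
    interlock.ssl_cert: "certs/server.crt"
    interlock.ssl_cert_key: "certs/server.key"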

If your backend requires client certificate authentication, as it did in my case, then Interlock has no support for it. But there is a hack to set the SSL proxy certificates. That, though, is for another post.

I hope the information I shared with you was useful. If you need any help, do write in the comments below.

What the heck is Service Discovery?

If you are working with container technologies like Docker Swarm, Kubernetes, or Mesos, sooner or later you will stumble upon service discovery.

What the heck is service discovery? In layman’s terms, it is the way one container knows where another container is. It’s better explained with an example: where a Web container needs to connect to a DB container, it needs to know the address of the DB container.

In the container world, containers are like a scurry of squirrels. They keep jumping from one host (call it a VM) to another, moving from one cluster to another for the cause of high availability and scalability.

In all this commotion, how does one container connect with another? Here comes the part of Service Discovery, a thing which can tell the present address of the container(s). In layman’s terms, imagine it to be like a librarian who tells you this book (container) is with Joe, Brandan, or anybody else. Service Discovery is like a directory of addresses (Hostname, IP, Port) of all the containers running in a cluster.

Whenever a new container is born or dies, this Service Discovery updates its directory to add a new entry or delete the old one.

Note, I have explained Service Discovery in its core function, but discovery (the directory) is not the only function it performs. Tools which implement service discovery provide additional functions like load balancing, health checks, NATing, bundling, etc. But if a tool provides all the additional functions yet not discovery, then it shouldn’t be called Service Discovery.

Now, knowing the “what” of Service Discovery, comes the next part: the “how”. How do we achieve service discovery in a Docker cluster? There is no single answer to this; the answer is “it depends”.

It depends on what stack you are using for Docker clustering: is it Mesos, Kubernetes, Nomad, or Docker Swarm?

A stack comes with a related set of tools for scheduling, storage, scaling, batch execution, monitoring, etc. A stack is a collective term for tools. Some stacks have all of them, some have a few. Service discovery will be part of the set of tools your chosen stack provides.

Mesos and Kubernetes have DNS-based service discovery. Nomad uses Consul (a service registry). Docker Swarm has two stories. If you are using Docker prior to 1.12 with Swarm, you would use Consul, etcd, or pick one from the large open source community for service discovery. If you are using Docker 1.12 or any later version, it comes with integrated swarm discovery based on a DNS server embedded in the swarm.

If you are not sure what I am talking about with all these stacks, I would suggest you take a step back, make Google your best friend, and research what these stacks mean, what sets of tools and capabilities they offer, and how they differ from each other. Some are simple and easy to learn; others have many features but also a steep learning curve.

If you are new and inclined to the Docker Swarm stack, go with the latest Docker with integrated swarm, eyes closed. While I was writing this post in Dec 2016, Docker 1.13 was in beta; by the time you read this, do research and find out what the latest Docker version is.

When I was working on this, it was the era of Docker 1.11, and I did service discovery using the poor man’s tools, Interlock with Nginx. Why? I found it to be the easiest to work with when you are working with Swarm and Consul, plus it provides some cool features like load balancing, health checks, and SSL offloading (which are nothing but features of a load balancer).

I will be writing another post sharing my experience with Interlock and Nginx.



Solving mystery of the service discovery with Azure ACS DCOS – Part 2

Warning: this is a Level 300 deep-dive topic; novices may not get it. It is meant to help wandering souls like me, given the scarcity of documents explaining service discovery with Apache Mesos.

In the last post, I wrote about the service discovery options with Mesos and blabbered about Mesos DNS. In this post, I will solve the mystery of how service discovery happens for the sample app deployed in the Azure ACS load balancing tutorial.

Mystery solving part

This post is about solving the mystery of the sample load-balanced app launched in the Azure ACS load balancing tutorial.

Now, following the tutorial, we have launched the app via Marathon. Note, this is an important aspect of a DCOS cluster: you can launch Docker containers using Marathon or Aurora.
Web App Configuration
{
  "id": "web",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "yeasy/simple-web",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 80, "servicePort": 10000 }
      ],
      "forcePullImage": true
    }
  },
  "instances": 3,
  "cpus": 0.1,
  "mem": 65,
  "healthChecks": [{
      "protocol": "HTTP",
      "path": "/",
      "portIndex": 0,
      "timeoutSeconds": 10,
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "YOUR FQDN",
    "HAPROXY_0_MODE": "http"
  }
}
Here, observe servicePort: 10000. The servicePort is the port that exposes this service on marathon-lb. By default, ports 10000 through 10100 are reserved for marathon-lb services, so you should begin numbering your service ports from 10000 (that’s 101 service ports, if you’re counting). You may want to use a spreadsheet to keep track of which ports have been allocated to which services for each set of LBs.
Now, here I have hostPort set to 0, which means that Marathon will arbitrarily allocate an available port on that host. This is an important aspect of Docker hosting. If I hard-code, say, port 80 in the configuration, the ability to scale the container is limited.
Take an example: I have 2 agents, and I want to launch 3 containers of the web app. The first container will go to agent1, and port 80 on agent1 will be mapped to port 80 on the container. The second container will go to agent2, as agent1’s port 80 is taken. The third container will fail to start because there isn’t any port 80 available on either agent.
Carrying forward the same example, with host port 0, Marathon will assign a dynamic port on the host, say 5252. The second container could get 5253 and the third 5254, based on the availability of ports on that host.
But the next problem is: how will other containers call these 3 containers?
That is where the marathon-lb service comes in; it acts as the load balancer and load-balances web requests to these containers.
What would be the load balancer address which you would use to load-balance these requests?
In order to answer this, we need to understand how the Mesos DNS namespace works.
When this application is launched, it gets a DNS name of <task>.<framework>.mesos, i.e. in our case, the [web] app which we launched using the JSON above translates to web.marathon.mesos.
If this were a single instance of the [web] app with hard-coded port 80, then Mesos DNS would register web.marathon.mesos to the IP of the agent VM where it was deployed, and accessing http://web.marathon.mesos/ from the master VM would land on the UI of the web application.
But now we have three instances of the app sitting on different agent VMs listening on arbitrary ports. Here is where the Marathon load balancer comes into the picture. The service port declared in the app configuration above is used by marathon-lb to provide the endpoint that listens for the web service.
In the load-balanced scenario, marathon-lb provides the load-balanced endpoint at <marathon-lb-name>.<framework-name>.mesos:<service port number>. In our case, this translates to marathonlb-default.marathon.mesos:10000, where 10000 is the service port configured on marathon-lb.
The complete communication path from the browser to the container:
  1. The browser hits the Azure Load Balancer on port 80
  2. The Azure Load Balancer forwards the request to the VMSS in the public subnet
  3. In our case, the public VMSS has just one VM running marathon-lb
  4. Marathon-lb is based on HAProxy, which has a configuration to listen on port 80 and forward to marathonlb-default.marathon.mesos:10000
  5. marathonlb-default is created from the service port definition, which again load-balances the request to the child containers running in the private subnet of the cluster
The diagram below gives an overview.
[Figure: Marathon LB service discovery]
Now, here, based on the Marathon app definition, it was listening on service port 10000.
If the database server needs to hit a REST endpoint on the web service, it needs to point to marathonlb-default.marathon.mesos:10000
The database server can register itself on marathon-lb with port 10100
The marathon-lb endpoint for the DB would then be marathonlb-default.marathon.mesos:10100
The web app can simply access marathonlb-default.marathon.mesos:10100 and it will route the traffic to one of the instances of the container running in the cluster
Checking the HAProxy stats
Before that, in ACS-Mesos:
  1. Open port 9090 in the public Network Security Group
  2. Add a port 9090 rule to the load balancer
Now you can access the HAProxy stats page.
You can also access the HAProxy config (in LUA language).
There are more, which you can reference in the link below.
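For the record, if I recall the marathon-lb endpoints correctly (please verify against the marathon-lb docs, as these paths are from memory), checking them from your machine looks something like this:

# HAProxy stats page exposed by marathon-lb on the public agent
curl http://<public-agent-fqdn>:9090/haproxy?stats

# Generated HAProxy configuration
curl http://<public-agent-fqdn>:9090/_haproxy_getconfig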
This, in a nutshell, represents ACS-Mesos service discovery using Mesos.

Solving mystery of the service discovery with Azure ACS DCOS – Part 1

I am currently working with Azure Container Service, using Mesosphere DCOS, Marathon, and Mesos to design an insanely scalable architecture of 10,000s of nodes.

There is a nice tutorial on the Azure website which shows how to deploy an app on a DCOS cluster with the Marathon Load Balancer.

If you are new to DCOS, Marathon, and Mesos, I recommend you read my previous post, which gives you a peek into the Docker cluster world.

This post is Level 300, a deep dive for people who need to understand how service discovery works in the Mesosphere DCOS ecosystem.

What is Service Discovery?

Service discovery allows network communication between services. In Mesos space, containers are known as services. So service discovery means knowing the well-known address of the other containers running in the cluster.

There is another post I wrote on service discovery, explained in layman’s terms. Check it out.

In DCOS Mesos, this happens in two ways:

[Figure: Mesos service discovery]

Mesos DNS

Mesos-DNS is a stateless DNS server for Mesos. Contributed to open source by Mesosphere, it provides service discovery in datacenters or cloud environments managed by Mesos.

What does Mesos-DNS offer?

Mesos-DNS offers a service discovery system purposely built for Mesos. It allows applications and services running on Mesos to find each other with DNS, similarly to how services discover each other throughout the Internet. Applications launched by Marathon or Aurora are assigned names like search.marathon.mesos or log-aggregator.aurora.mesos. Mesos-DNS translates these names to the IP address and port on the machine currently running each application. To connect to an application in the Mesos datacenter, all you need to know is its name. Every time a connection is initiated, the DNS translation will point to the right machine in the datacenter.


How does it work?

[Figure: Service discovery working]

Mesos-DNS periodically queries the Mesos master and retrieves the state of all running applications for all frameworks. It uses the latest state to generate DNS records that associate application names to machine IP addresses and ports. Mesos-DNS operates as the primary DNS server for the datacenter. It receives DNS requests from all machines, translates the names for Mesos applications, and forwards requests for external names, such as http://www.google.com, to other DNS servers. The configuration of Mesos-DNS is minimal. You simply point it to the Mesos masters at launch. Frameworks do not need to communicate with Mesos-DNS at all. As the state of applications is updated by the Mesos master, the corresponding DNS records are automatically updated as well.
Mesos-DNS is simple and stateless. Unlike Consul and SkyDNS, it does not require consensus mechanisms, persistent storage, or a replicated log. This is possible because Mesos-DNS does not implement heartbeats, health monitoring, or lifetime management for applications. This functionality is already available by the Mesos master, slaves, and frameworks. Mesos-DNS builds on it by periodically retrieving the datacenter state from the master. Mesos-DNS can be made fault-tolerant by launching with a framework like Marathon, that can monitor application health and re-launch it on failures.
Mesos-DNS defines the DNS top-level domain .mesos for Mesos tasks that are running on DC/OS. Tasks and services are discovered by looking up A and, optionally, SRV records within this Mesos domain. To enumerate all the DNS records that Mesos-DNS will respond to, take a look at the DNS naming documentation
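
To make this concrete, here is a rough sketch of how you would look up the sample web app from inside the cluster; the app name assumes the "web" Marathon app from the ACS tutorial, and the SRV record naming follows the Mesos-DNS naming docs:

# A record: resolves to the IPs of the agents running the task
dig +short web.marathon.mesos

# SRV record: also exposes the dynamically assigned host ports
dig +short _web._tcp.marathon.mesos SRV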

What is Marathon-LB

Marathon-LB is a tool provided for containers launched via a Marathon app in Mesos. LB stands for Load Balancer; it helps to dynamically add or remove containers from the load balancer running on various Mesos slaves.

How does Marathon-LB work?

Marathon-lb is based on HAProxy, a fast proxy and load balancer. The real magic happens when marathon-lb subscribes to Marathon’s event bus and updates the HAProxy configuration in real time.

It means whenever a new container is instantiated, marathon-lb will add it to the load balancer pool automatically within a fraction of a second and reload HAProxy with almost zero downtime to route traffic to the new container. The same goes for when a container dies.

Below is the architecture of Marathon-LB:

[Figure: Marathon Load Balancer architecture]

You can read the nice documentation of Marathon-LB on the Mesosphere blog.

Here is how Marathon-LB looks in the Marathon Web UI: [Figure: Marathon Web UI]

The next post will be more interesting, about solving the mystery of the Azure ACS load-balanced app.


Working with Azure Container Service

You may have wondered about the lesser-known Microsoft Azure Container Service (ACS), and what ACS actually is.

Here is ACS in a nutshell:

  • Makes it simpler to create, configure, and manage a cluster of machines which are preconfigured to run containerized applications
  • Uses an optimized configuration of scheduling and orchestration tools
  • Leverages the Docker container format
  • Simplifies the creation of the cluster by providing a ‘Wizard’ style experience, rather than setting up and configuring the set of coordinated machines and software on top of the Azure platform
  • Supports two platforms:
    • Docker Swarm to scale to hundreds or thousands of containers
    • Marathon and DC/OS
  • Built on top of Azure VM Scale Sets

What, Why and How

[Figure: What, Why and How]

What: Azure Container Service is an open source container orchestration system.

Why: It is used to deploy and manage dockerized containers within the data center.

How: It does orchestration using either Docker Swarm or Apache Mesos.
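
If you prefer the command line over the portal wizard, cluster creation looks roughly like the following with the Azure CLI; the resource group, cluster name, and DNS prefix are placeholders, and the exact flags may differ across CLI versions, so check az acs create --help:

az group create --name myResourceGroup --location westus
az acs create --orchestrator-type dcos \
  --resource-group myResourceGroup \
  --name myDCOSCluster \
  --dns-prefix mydcos \
  --generate-ssh-keys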

Where it helps

[Figures: Where it helps]

There are two ways to work with ACS:

[Figure: Two ways to work with ACS]

Docker Swarm and Mesos are the orchestrators, which in plain English tell the host system which container to host where.

You can view the animated slides at

I will be posting more information about this in the posts to come.


Deep Dive into Sencha ExtJS Build

If you are new to the ExtJS framework, I have written an introductory post explaining a bit about it.
And another one for Understanding Sencha Cmd
This post is a deep dive into the Sencha ExtJS build, with an objective:

Objective

Copy the Sencha application to a network UNC path,
i.e. what is built inside {root dir}\build\testing needs to be copied to some network share (\\someserver\sharedpath)

Why

This network share is the root directory of the test web server. So, whenever there is a new build, developers can push their code to the test server.

Problem

It looks like a simple problem: you just need to change the build directory in app.json. But it didn’t work.
For this, I needed to dive into the Sencha build process to understand what’s going on. Read the full story to understand.
When you enter the command
sencha app build <environment>
e.g. > sencha app build testing
this triggers a lot of actions. For new developers, it looks like some magic happening behind the build.
My journey began with the option to publish to the network share UNC path (\\someserver\sharedpath).
I have written another post on how to change the build directory in ExtJS.
In order to do this, I changed the build directory for ExtJS in app.json to the network share path (\\someserver\sharedpath). In my case, it was \\rav-vm-srv-122\dcf-test.
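For illustration, the change was roughly the following in app.json (the property layout can differ between ExtJS versions, and the backslashes are JSON-escaped, so treat this as a sketch rather than the exact diff):
"output": {
    "base": "\\\\rav-vm-srv-122\\dcf-test"
}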
Everything went well, except app.js. The compiler got confused by the network UNC path, and instead of writing to the UNC path, it created a directory in the root directory with the
\\rav-vm-srv-122\dcf-test folder name.
[Figure: Orphan folder in the root directory]
The app.js file which you see here is the compile-time file, which is generated when you fire the Sencha build. This file is different from the one you have in the ExtJS app root directory.
The app root one is just 2 KB, while the compiled one is close to ~2 MB in size.
It contains the code which you have written in the controllers, models, and views, plus tons of lines from the ExtJS framework. This is an important file and it needed to be in there. Without it, the ExtJS app won’t work.
Understanding why this file landed in the root drive folder instead of the network UNC path is the subject of our investigation.
In order to investigate, we must dive deep into the Sencha build process to understand what’s going on. To understand more about the build, there is a link to the official Sencha page which gives you the inside of the app build process.
Sencha uses Ant to build the project.
In Ant, what you need to do for the build is defined in <target> sections. You can view https://ant.apache.org/manual/tutorial-HelloWorldWithAnt.html for more information on Ant build files.
There is a file called build.xml at the root of the application. This file is used when you fire the sencha app build command.
[Figure: The Sencha build.xml file]
This file imports build-impl.xml, which is in the {root}\.sencha\app folder. This is an important folder in the Sencha build, as most of the files you need to modify the build are stored in this location.
[Figure: Inside the build.xml file]
For JS compilation, it uses another file called js-impl.xml.
[Figure: Inside the build-impl.xml file]
The js-impl.xml file contains the code that compiles the model, view, and controller JS files into one file, app.js.
[Figure: Inside the js-impl.xml file]
Now, here was the problem. The x-compile target uses the Sencha command compiler, which renders the output.
If you see the code above, the output is rendered to ${build.classes.file}.
Now, if we run the [ sencha ant .props ] command at the command prompt, it will list all the Ant properties used.
CMD > sencha ant .props
Here, we get:
[INF] [echoproperties] build.classes.file=D\:rav-vm-srv-122\dcf\app.js
As you can see, this is the problem. The Sencha compiler can’t handle the network UNC path. Instead, it appends the network UNC path to the root directory on Windows.
Now that the problem has been identified, how do we solve it? My idea was that it is better to compile into the local directory, i.e. ${app.output.base}, and then copy all the files to the network UNC path.
Since the Sencha build uses Ant, we could use the after-build target in the build.xml file for our copy task, as seen below.
[Figure: The after-build target in build.xml]
Here, you see the echo message at the error level. This is a quirk which I had to use in order to print the message to the command prompt. I found that the other levels, info and warning, didn’t print the message to the command prompt.
I have used the Ant copy task, which copies the build dir at ${app.output.base} to the network UNC path \\rav-vm-srv-122\dcf-test.
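Since the screenshot above may be hard to read, this is a rough reconstruction of what that after-build hook looks like; the target name and property names follow the Sencha/Ant conventions described above, so verify them against your own build.xml:
<target name="-after-build">
    <!-- echo at the "error" level so the message actually shows up on the command prompt -->
    <echo level="error" message="Copying build output to the network share"/>
    <!-- copy everything that was built locally to the network UNC path -->
    <copy todir="\\rav-vm-srv-122\dcf-test">
        <fileset dir="${app.output.base}"/>
    </copy>
</target>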
I hope this post has given you some insight into the Sencha build process.