An almost-true story about the search for Bithia, a vanished, hidden, abandoned, and little-known ancient city of southern Sardinia. A city with vague memories of red-robed peoples, of natives and trade, of Semitic tongues and mixings. Wealth. Oblivion. The names are deliberately distorted, the gods too, stirred together. The substance doesn't change, on this side of the promontory or the other.

Microservices with Docker - part 3: linking and scaling node.js apps

In part 1 of this post series, we introduced the Microservice Architecture paradigm and gave a preliminary introduction to the Docker platform.
In part 2 we learned how to start using Docker, installing it on OS X with a pragmatic approach and listing the basic commands to pull images, build containers, and so on.

This part 3 shows how to dockerize a node.js app from scratch, running it as a microservice in a Docker container.
Finally, we will build a "complex" application made of a set of microservices linked together.

First things first: Building and Running a node.js / Express microservice in Docker

Assumptions: we've already generated a new node.js / Express app (using express-generator, maybe) in a local directory named mynodexp, so we have a package.json file in our folder.
The default Docker Machine must be running (see docker-machine start default).

Now we want to dockerize our app: install all the required node packages (as described in the package.json file) and run it as a container on port 3000.

Docker Hub provides an official node.js image to start from, but we don't want to pull and manually instantiate images and containers as we did in part 2 of this series.
Instead, we are going to write a Dockerfile.

A Dockerfile is a text file that acts as a recipe for Docker: it lists the ingredients to use (images), the environment to set up, and the commands to run in order to build and run our dockerized app as a container.

Step 1: Let's write and examine our Dockerfile, saving it in the root folder of our Express application.
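The original listing is not reproduced here; as a sketch consistent with the line-by-line notes that follow (the maintainer name is a placeholder, and bin/www is the express-generator default entry point), the Dockerfile could look like:

```dockerfile
FROM node:4.1.2

# Maintainer of the image we're going to build (placeholder)
MAINTAINER Your Name <you@example.com>

COPY . /src
RUN cd /src; npm install

# Our Express app listens on port 3000
EXPOSE 3000

# Start the app (bin/www is the express-generator entry point)
CMD ["node", "/src/bin/www"]
```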

At line 1, we tell Docker to pull the node.js (version 4.1.2) image from Docker Hub;
At line 4, we specify the maintainer of the image we're going to build;
At line 7, we install all the node.js modules listed in our package.json file;
At line 10, we expose port 3000; our Express app will listen on that port for incoming connections;
At line 13, we specify how Docker will run our app Container.

Step 2: Building the new Image

The following shell command builds our Docker Image from the Dockerfile:

docker build -t my-nodexp-app .

After the Image building process completes, you should see a terminal message like:

Successfully built 55948cdf8f3b

So, what happened? Docker pulled the node.js Image from Docker Hub and, following the "recipe" written in the Dockerfile, built a new Image tagged my-nodexp-app.

Running the docker images command should show a list including our Image.

Step 3: Running the app as a new Container from the built Image

The following command runs a new Docker Container from the specified Image, resulting in our running node.js / Express application:

docker run -i -t -p 3000:3000 --rm my-nodexp-app

The --rm option tells Docker to remove the container when it exits; the -p parameter maps the Container's exposed port to a host port.

Now, the docker-machine ip default command shows us the machine IP address to use.

Then, opening the browser at that address on port 3000 should show the default Express app index page. In this case: perfect! Our node.js / Express app is dockerized and running.

Building and Deploying something a bit more complex: two node.js app instances balanced by an NGINX instance

Usually, in a production environment, the first step in scaling up a node.js application is deploying and running several application instances (on a multi-core server or spread across several machines). A reverse proxy or load-balancing server is then used as a "frontend" to receive all incoming requests and redirect each one to a particular application backend node. For this purpose, NGINX is the server I love and prefer.
The following picture summarizes this concept:

(Fig. 2: Scaling a node.js app using a reverse proxy / load balancer)

Thinking in terms of microservices, each of the n application instances in the picture above can be imagined as dedicated to a particular app functionality or API. For example, app #1 could be a microservice for user management, app #2 a service for sending emails, and so on.
In our simple example, each node exposes the same functionality, namely the entire application, simply replicating it to scale up.
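To make the balancing idea concrete, here is a minimal illustrative sketch (plain JavaScript, not part of our Docker setup) of round-robin, the default strategy NGINX uses to pick a backend for each incoming request:

```javascript
// Round-robin upstream selector: cycle through the backends in order.
function roundRobin(upstreams) {
  var next = 0;
  return function pick() {
    var upstream = upstreams[next];
    next = (next + 1) % upstreams.length;
    return upstream;
  };
}

// Hypothetical backends, matching the container names we will use later.
var pick = roundRobin(['node1:3000', 'node2:3000']);

console.log(pick()); // node1:3000
console.log(pick()); // node2:3000
console.log(pick()); // node1:3000
```

Each request goes to the next backend in the list, so the load spreads evenly when requests are roughly uniform.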

So, we need to introduce the concept of linking Docker Containers.

The docker run command lets us link containers together through the --link option.
For example, we can link a hypothetical node-based node-app Container to a MongoDB Container named mongo, using a command like this:

docker run -d --name my-node -p 8080 --link mongo:mongo node-app

Returning to our architecture shown in Fig.2, let's consider only two node.js app instances, so we need to:

1) build our application Docker Image (as done before in this article)
2) run two separated node.js Containers from that Image
3) build, configure and run a NGINX Container to serve as reverse-proxy to our nodes
4) run the whole Docker-based application linking containers together

Step 1: Building the node.js app Image

First of all, in our Express application, let's edit the routes/index.js file as follows:

var express = require('express');
var router = express.Router();
var os = require('os');

/* GET home page. */
router.get('/', function(req, res, next) {
  // The container hostname identifies which node served the request
  res.render('index', { title: 'Express', port: os.hostname() });
});

module.exports = router;

Also, edit the views/index.jade file, adding the last line shown here:

extends layout

block content
  h1= title
  p Welcome to #{title}
  p Serving node is at: #{port}

Now, rebuild our Image as we did at the beginning of this article:

docker build -t my-nodexp-app .

Step 2: Running two node.js app Containers

Run two Containers based on the Image built in the previous step. We will use two different names here; be careful, because naming matters when linking containers together in step 4.

So, launch:

docker run -d --name node1 -p 3000 my-nodexp-app

docker run -d --name node2 -p 3000 my-nodexp-app

Two things are noteworthy. First, we're using the -d option to tell Docker to daemonize our running app. Second, we aren't mapping the exposed port to a specific host port.

Running the docker ps command should show our running node.js containers.

Step 3: build, configure and run a NGINX Container to serve as reverse-proxy to our nodes

We need an NGINX container to act as a reverse proxy / load balancer for our nodes, listening on port 80.
The NGINX server is configured through an nginx.conf file. Create a new file with that name; for our purposes, a suitable configuration could be the following:
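The configuration listing did not survive here; as a sketch, a plausible nginx.conf for our purposes could be the following (the upstream server names are assumptions and must match the container names node1 and node2 we use later):

```nginx
worker_processes 1;

events { worker_connections 1024; }

http {

    upstream node-app {
        least_conn;

        server node1:3000 weight=10;
        server node2:3000 weight=10;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }
}
```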

Just notice at rows 10 and 11 how we configured our nodes' names for NGINX.

It's time to build our NGINX Image based upon our configuration. For that we need to write another Dockerfile as a recipe for our new Image.
So, create a Dockerfile.nginx with content:
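As a sketch (assuming the official nginx base image from Docker Hub), Dockerfile.nginx could simply be:

```dockerfile
# Start from the official NGINX image
FROM nginx

# Replace the default configuration with our load-balancing one
COPY nginx.conf /etc/nginx/nginx.conf
```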

Now build the NGINX Docker Image by running, as always:

docker build -f Dockerfile.nginx -t my-nginx .

Then, the final step:

Step 4: running the NGINX Container from the previously built Image, linking it to the two node.js containers.

Easy task:

docker run -d --name nginx -p 80:80 --link node1:node1 --link node2:node2 my-nginx

Note the --link options, which reflect our NGINX configuration.

The docker ps command should list our running Containers.

Finally, our balanced, two-nodes based, dockerized app is complete.

Assuming the Docker machine IP address didn't change for the default machine, pointing a browser at that address (on port 80) should show us the Express app index page.
Under the hood, the NGINX server receives all incoming HTTP requests on port 80 and forwards each one to one of the two linked node.js containers.

That's all. To stop your running Containers, just use the docker stop command:

docker stop nginx node1 node2

Linking Containers with docker run --link... quickly becomes cumbersome if you are going to deploy and connect several containers in a production environment.
In the next post of this article series we will learn how to link Docker Containers together using Docker Compose.

Stay tuned!

Cagliari: a portrait

Just completed a portrait of my city.
You can find it on my VSCO Journal.

Thank you Pretziada for the support.

Microservices with Docker - part 2

A gentle, pragmatic introduction to Microservice Architecture and Docker.

In part 1 of this post series, we introduced the Microservice Architecture paradigm and gave a preliminary introduction to the Docker platform.

In this part 2 we are going to learn how to start using Docker, installing it on OS X with a pragmatic approach.

Running Docker (pulling Images, running Containers, and so on) requires a Linux machine.
On Mac OS X we need a Virtual Machine.
Starting from version 1.8, Docker deprecated the old Boot2Docker command-line tool in favor of the new Docker Machine. They also provide a handy Docker Toolbox to install Docker Machine along with the other Docker tools.

Just follow the documentation to install the Docker Toolbox and to run it (on OS X).

When the Docker Machine (which creates Docker hosts) is running on your Mac, you’re ready to interact with Docker through the Docker Client command line interface (docker).

After installing Docker Toolbox, run Docker Machine from Applications/Docker by choosing Docker Quickstart.

After some init operations, you should see the machine ready to accept commands in a Terminal window.

Basically, we’ve just created a Docker Machine and now it is ready to accept commands through our Docker Client.

A Docker Machine lets you create Docker Hosts on your computer (or in the cloud). It automatically creates hosts, installs Docker on them, and then configures the Docker Client to talk to them. A machine is the combination of a Docker Host and a configured Docker Client.

Docker Machine provides several commands for managing machines. Using these commands you can:

- start, stop, and restart a host
- upgrade the Docker Client and daemon
- configure a Docker client to talk to your Docker Host
- etc...

Summarizing: our previous step created a new Docker Host on OS X, ran a new Docker machine called default, and configured a Docker Client. The following picture shows our Docker architecture on OS X.

The configured Docker system on Mac OS X

We're ready to explore some essential Docker commands.

Basic Docker Machine commands

$ docker-machine --help
Prints the command general help.

$ docker-machine ls
Shows all the existing Docker machines.

$ docker-machine start <machine>
$ docker-machine stop <machine>
Start and Stop machines. E.g.,  $ docker-machine start default

$ docker-machine ip <machine>
Get the IP address of a machine.

For a comparison between old (deprecated) Boot2Docker commands and Docker Machine please refer to this documentation.

Now that we have a Docker host running with a default machine we’re ready to know how to use the Docker Client command line interface.

Common Docker Commands

$ docker --help
$ docker <CMD> --help

Prints the help; CMD is a specific docker command, like run.

$ docker pull <IMAGE_NAME>
Downloads an Image from the Docker Hub repository given its name.

$ docker run <IMAGE_NAME>
Downloads an Image from the repository (if not already present locally) and runs a Container based on it.


$ docker run -i -t ubuntu

Runs an Ubuntu container:
-i: keeps STDIN open for an interactive session
-t: allocates a pseudo-TTY

$ docker run -i -t -p 8080:3000 node-express
Runs an Image as a Container, mapping the internal port 3000 to host port 8080, visible to external requests.

$ docker ps -a
Lists all Containers (not only running Containers)

$ docker images
Lists all downloaded Images

$ docker commit -a "Your Name <>" -m "ubuntu and node" CONTAINER_ID node-ubuntu:0.1
Creates a new image from the container given its CONTAINER_ID, tagging it with a version.

$ docker tag node-ubuntu:0.1 node-ubuntu:latest
Adds the latest tag to the existing node-ubuntu image.
When docker run is launched on an Image without specifying a version, the latest tagged version is used.

$ docker rm <YOUR_CONTAINER_ID>
Removes a Container

$ docker rmi <YOUR_IMAGE_ID>
Removes an Image

Example n.1: Running the hello-world container

hello-world is a Docker Image, published on the Docker Hub, meant to explain how Docker works: it lets you run a container that prints some useful information.

To download the Image and run a Container based on that, simply run the command:

$ docker run hello-world

The printed message on the Terminal shows you what happened.

Now, running the command:

$ docker ps -a
you will see all the containers on your Docker Host, running or not, and you should be able to find the hello-world Container and its current (exited) status.

Running the command:

$ docker images
you will see all the locally downloaded Images, including the hello-world one.

A man and his rocks

Solid Landscapes.
Sulcis, Sardinia.

Many thanks to Ivano & Kyre of Pretziada (visit and follow their amazing project!)

Microservices with Docker - part 1

A gentle, pragmatic introduction to Microservice Architecture and Docker.

In this part 1 of a post series, we are going to give a brief introduction to Microservices and to the Docker platform.

Microservice Architecture

Microservice architecture is a way of designing software applications as suites of independently deployable services.
An application following the Microservices paradigm is thus built of small services, each running in its own process and communicating through lightweight mechanisms, often HTTP APIs.
A Microservice Architecture-based application stands in contrast to "old-style" Monolithic applications.

(Picture from M. Fowler, Microservices)

As you can guess, the basic components in a Microservice Architecture are services: independently deployable software components. A service is somewhat different from a library. Examples of services in a Microservice Architecture application include a deployed authentication module, an API dedicated to handling payments, a user database service, and so on.

For example, as stated in M. Fowler, Microservices, if you have a classic application depending on several libraries:

“a change to any single component results in having to redeploy the entire application. But if that application is decomposed into multiple services, you can expect many single service changes to only require that service to be redeployed.”

Microservices aren’t a particularly new concept, but they’re quickly becoming popular, as they address common architectural issues in the development, deployment, and scaling of modern web applications, from simple to complex. The Microservice Architecture paradigm is platform- and programming-language-agnostic. For example, Docker can be a suitable platform to design, develop, deploy, and run Microservices in a controlled, comfortable environment; but you can also implement Microservices using other tools and platforms, or by building your own infrastructure.

What is Docker?

As described on the Docker website, Docker is:

“a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.”


Classic Virtual Machines vs. Docker (Picture from Docker docs)

Thus, why Docker?

- to build and scale Microservice Architecture-based applications
- to isolate and independently develop Microservices
- to simplify a development configuration/environment
- to share applications and their environment with other developers (in the same team or not)
- ...
Basically, Docker technology consists of:

The Docker Engine

A lightweight and powerful container virtualization technology combined with a workflow for building and containerizing applications. The Docker Engine daemon is the Docker component that builds, runs, and distributes Docker Containers.

The Docker Hub

A service for sharing and managing application images: a repository holding reusable Docker Images. You can use the Docker registry for free or for a fee, with open-source public images and private Images.

The Docker Client

The Docker client connects to the Docker Engine Daemon (local or remotely) and accepts all the Docker commands, like docker pull or docker run. 


Docker Images

Images are templates for Docker applications. An Image is like a composable component used to create application stacks: for example, an Ubuntu image, or a Redis image based on Ubuntu, etc.


Docker Containers

A Container is created from a Docker Image. Think of it as an instance (a Container, a running instance) created from a class (an Image), as in the OOP paradigm. Containers can be started, stopped, etc.
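As an illustrative sketch of that analogy (plain JavaScript with hypothetical names, not Docker API code):

```javascript
// An "Image" is like a class: a template describing what to run.
function Image(name) {
  this.name = name;
}

// A "Container" is like an instance created from that template.
Image.prototype.run = function () {
  return { image: this.name, status: 'running' };
};

var ubuntu = new Image('ubuntu');
var container1 = ubuntu.run();
var container2 = ubuntu.run(); // many containers from one image

console.log(container1.status); // running
```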

The Docker Architecture overview (from Docker website)

On a Linux-based host, a Docker environment can be summarized by the following picture. We run a Docker Engine to build and run our Containers - created from Docker Images - and managing the Docker environment through the Docker client command.

A Docker environment in case of a Linux host

So, for example, with Docker we can quickly build and deploy a microservices-based application using NGINX and node.js Images: pulling them from the Docker Hub, instantiating them, and running them as Containers. We can run our code in the node.js Container and link the NGINX container to serve as a reverse proxy for the entire Web app, all of them as Microservices running in separate containers.
In the (coming soon) part 2 of this article series we will describe how to pragmatically use Docker and its commands to pull Images, run Containers, and manage them. Then, in a future post in this series, we will learn how to configure and run a dockerized node.js/express application.

Stay tuned!

The Thin Ice

The Path to Roots

Toward Barrancu Mannu, Santadi, Sulcis, Sardinia.
Bronze Age, ca. 1300 BC

All photos: iPhone 6 + VSCO Cam

Lisboa vs. Vienna in two weeks

Lisboa, Lisbon, Lisbona.
Rainha do Mar.

...and then, Vienna: majestic.