Microservices with Docker - part 3: linking and scaling node.js apps

Tuesday, October 13, 2015

In part 1 of this series, we introduced the Microservice Architecture paradigm and gave a preliminary introduction to the Docker platform.
In part 2 we learned how to start using Docker, installing it on OS X with a pragmatic approach and listing all the basic commands to pull images, build containers and so on...

This part 3 shows a way to dockerize a node.js app from scratch, running it as a microservice in a Docker container.
Finally, we will build a "complex" application made of a set of microservices linked together.



First things first: Building and Running a node.js / Express microservice in Docker

Assumptions: we've already generated a new node.js / Express app (using express-generator, maybe) in a local directory named mynodexp, so we have a package.json file in that folder.
Also, the default Docker Machine must be running (see docker-machine start default).
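If you haven't generated the app yet, a quick sketch of that step (assuming express-generator and mynodexp as the app name):

npm install -g express-generator
express mynodexp
cd mynodexp && npm install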

Now, we want to dockerize our app: installing all the required node packages (as listed in the package.json file) and running the app as a container on port 3000.

Docker Hub provides an official node.js image to start from, but we don't want to pull images and manually instantiate containers as we did in part 2 of this series.
Instead, we are going to write a Dockerfile.

A Dockerfile is a text file intended to act as a recipe, telling Docker which ingredients to use (images), what environment to set up and which commands to run in order to build and run our dockerized app as a container.


Step 1: Let's write and examine our Dockerfile, saving it in the root folder of our Express application.
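Here is a minimal Dockerfile consistent with the line-by-line walkthrough below (the /src destination folder, the maintainer address and the bin/www entry point generated by express-generator are assumptions, just one possible setup):

FROM node:4.1.2

# maintainer of this image
MAINTAINER mynodexp maintainer <maintainer@example.com>

COPY . /src
RUN cd /src && npm install

# our Express app listens on this port
EXPOSE 3000

# start the app (bin/www is the entry point generated by express-generator)
CMD ["node", "/src/bin/www"]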


At line 1, we tell Docker to pull the node.js (version 4.1.2) image from Docker Hub;
At line 4, we specify the maintainer of the image we're going to build;
At line 7, we install all the node.js modules listed in our package.json file (the app files having been copied into the image at line 6);
At line 10, we expose port 3000: our Express app will listen on that port for incoming connections;
At line 13, we specify how Docker will run our app Container.


Step 2: Building the new Image

The following shell command builds our Docker Image from the Dockerfile:


docker build -t my-nodexp-app .



After the Image building process completes, you should see a terminal message like:

Successfully built 55948cdf8f3b

So, what's happened? Docker pulled the node.js Image from Docker Hub and, following the "recipe" written in the Dockerfile, built a new Image tagged my-nodexp-app.

Running the docker images command should show a list including our Image.





Step 3: Running the app as a new Container from the built Image

The following command runs a new Docker Container from the Image we just built, resulting in our node.js / Express application up and running:


docker run -i -t -p 3000:3000 --rm my-nodexp-app


The --rm option tells Docker to remove the Container once it exits; the -p parameter maps the Container's exposed port to a host port.

Now, the docker-machine ip default command shows us the machine IP address to use, something like: 192.168.99.100

Then, opening the browser at the URL http://192.168.99.100:3000 should bring up the default Express app index page. In this case: perfect! Our node.js / Express app is dockerized and running.
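You can run the same check from the terminal (a quick sketch, assuming curl is available on the host):

curl -i http://$(docker-machine ip default):3000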



Building and Deploying something a bit more complex: two node.js app instances balanced by an NGINX instance


Usually, in a production environment, the first steps in scaling up a node.js application involve deploying and running several application instances (on a multi-core server, or spread across several machines). Then, a reverse proxy or load balancing server is used as a "frontend" that receives all the incoming requests and redirects each of them to a particular application backend node. For this aim, NGINX is the server I love and prefer.
The following picture summarizes this concept:


(Fig. 2: Scaling a node.js app using a reverse proxy / load balancer)


Thinking in terms of microservices, each of the n application instances in the picture above can be imagined as dedicated to a particular app functionality or API. For example, app #1 could be a microservice for user management, node #2 a service for sending emails, and so on...
In our simple example, each node exposes the same functionality, namely the entire application, just replicated to scale up.

So, we need to introduce the concept of Linking Docker Containers.

The docker run command allows linking containers together through the --link option: Docker makes the linked container reachable from the recipient container by its alias, adding a matching entry to the recipient's /etc/hosts file and exporting some environment variables.
For example, we can link a hypothetical node-based node-app Container to a MongoDB Container named mongo (aliased as mongo), using a command like this:

docker run -d --name my-node -p 8080 --link mongo:mongo node-app
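To see the effect of the link, you could inspect the hosts file of the recipient Container (assuming the hypothetical my-node Container above is actually running):

docker exec my-node cat /etc/hosts

It should contain an entry mapping the mongo alias to the MongoDB Container's IP address, so the app can reach the database simply by that hostname.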


Returning to our architecture shown in Fig. 2, let's consider only two node.js app instances. So, we need to:

1) build our application Docker Image (as done before in this article)
2) run two separated node.js Containers from that Image
3) build, configure and run an NGINX Container to serve as reverse-proxy to our nodes
4) run the whole Docker-based application linking containers together


Step 1: Building the node.js app Image

First of all, in our Express application, let's edit the routes/index.js file as follows (the edit is the port value passed to the view):

var express = require('express');
var router = express.Router();

/* GET home page. */
router.get('/', function(req, res, next) {
  res.render('index', { title: 'Express', port: req.app.get('port') });
});

module.exports = router;


Also, edit the views/index.jade file (the edit is the last line):


extends layout

block content
  h1= title
  p Welcome to #{title}
  p Serving node is at: #{port}



Now, rebuild our Image as done at the beginning of this article:

docker build -t my-nodexp-app .



Step 2: Running two node.js app Containers

Run two Containers based upon the Image built in the previous step. We will use two different names here; please be careful, because naming is very important when linking containers together in step 4.

So, launch:

docker run -d --name node1 -p 3000 my-nodexp-app

docker run -d --name node2 -p 3000 my-nodexp-app


Two things are noteworthy here: first, we're using the -d option to tell Docker to daemonize our running app; second, we aren't specifying a port mapping on the host, since the nodes will be reached directly by NGINX through container links.

Running the docker ps command should show our two running node.js Containers.






Step 3: build, configure and run an NGINX Container to serve as reverse-proxy to our nodes


We need an NGINX Container to act as reverse-proxy / load balancer for our nodes, listening on port 80.
The NGINX server is configured through an nginx.conf file. Create a new file with that name; for our aims, a suitable content could be as the following:
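(A minimal sketch; the upstream name node-app and the weight / max_fails / fail_timeout values are assumptions, one reasonable choice among many.)

worker_processes 2;

events { worker_connections 1024; }

http {
  upstream node-app {
    # pick the backend with the least number of active connections
    least_conn;
    server node1:3000 weight=10 max_fails=3 fail_timeout=30s;
    server node2:3000 weight=10 max_fails=3 fail_timeout=30s;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://node-app;
      proxy_http_version 1.1;
      proxy_set_header Host $host;
    }
  }
}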


Just notice how we configured our nodes' names (node1 and node2) as the upstream servers for NGINX: those hostnames will be resolvable inside the NGINX Container thanks to the --link options we'll pass in step 4.

It's time to build our NGINX Image based upon this configuration. For that, we need to write another Dockerfile as a recipe for the new Image.
So, create a Dockerfile.nginx with the following content:
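(A two-line recipe is enough: we start from the official nginx image and replace its default configuration with ours.)

FROM nginx

# overwrite the default NGINX configuration with our load balancing setup
COPY nginx.conf /etc/nginx/nginx.conf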

Then, build the NGINX Docker Image, as always, by running the command:

docker build -f Dockerfile.nginx -t my-nginx .


Then, the final step:


Step 4: running the NGINX Container from the previously built Image, linking it to the two node.js Containers.

Easy task:

docker run -d --name nginx -p 80:80 --link node1:node1 --link node2:node2 my-nginx


Note the --link options, which reflect the upstream names in our nginx configuration.

The docker ps command should now list all three of our running Containers.



Finally, our balanced, two-nodes based, dockerized app is complete.

Assuming the IP address of the default Docker Machine didn't change, pointing a browser to the URL:

http://192.168.99.100

should show us the Express app index page.
Under the hood, the NGINX server receives all the incoming HTTP requests on port 80, then forwards each of them to one of the two linked node.js Containers (the one with fewer active connections, per the least_conn directive).
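To watch the balancing at work from the terminal, fire a handful of requests and then check each node's logs (a sketch, assuming the IP address above and the default express-generator logger printing each request to stdout):

for i in $(seq 1 10); do curl -s -o /dev/null http://192.168.99.100; done
docker logs node1
docker logs node2

Both logs should show a share of the incoming GET requests.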


That's all. To stop your running Containers, just use the docker stop command:

docker stop nginx node1 node2


Linking Containers with docker run --link... quickly becomes quite unwieldy if you are going to deploy and connect several containers in a production environment.
In the next post of this series we will learn how to link Docker Containers together using Docker Compose.


Stay tuned!
