How to Add Cluster Support to Node.js

What You Will Learn in This Tutorial

How to use the Node.js cluster module to take advantage of a multi-core processor in your production environment.

By nature, JavaScript is a single-threaded language. This means that when you tell JavaScript to complete a set of instructions (e.g., create a DOM element, handle a button click, or in Node.js to read a file from the file system), it handles each of those instructions one at a time, in a linear fashion.

It does this regardless of the computer it's running on. If your computer has an 8-core processor and 64GB of RAM, any JavaScript code you run on that computer will run in a single thread, on a single core.

The same rules apply in a Node.js application. Because Node.js is based on the V8 JavaScript Engine, the same rules that apply to JavaScript apply to Node.js.

When you're building a web application, this can cause headaches. As your application grows in popularity (or complexity) and needs to handle more requests and additional work, if you're only relying on a single thread to handle that work, you're going to run into bottlenecks—dropped requests, unresponsive servers, or interruptions to work that was already running on the server.
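To make this concrete, here's a small sketch (not part of our app's code) showing how synchronous work blocks the single thread. The timer is scheduled for 100ms, but it can't fire until the CPU-heavy loop finishes, because only one thing runs at a time:

```javascript
const start = Date.now();

// Scheduled for ~100ms from now, but it can't run until the
// synchronous loop below finishes and frees up the event loop.
setTimeout(() => {
  console.log(`Timer fired after ${Date.now() - start}ms`);
}, 100);

// Simulate CPU-heavy, synchronous work (e.g., a long-running request handler).
let sum = 0;
for (let i = 0; i < 1e9; i++) {
  sum += i;
}

console.log("Synchronous work finished; now the timer can fire.");
```

If you run this, the timer fires well after its 100ms target—everything behind the synchronous work has to wait.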

Fortunately, Node.js has a workaround for this: the cluster module.

[Image] Without Node.js Cluster: all requests forwarded to a single processor core.

The cluster module helps us to take advantage of the full processing power of a computer (server) by spreading out the workload of our Node.js application. For example, if we have an 8-core processor, instead of our work being isolated to just one core, we can spread it out to all eight cores.

[Image] With Node.js Cluster Support: requests spread out across all processor cores.

Using cluster, our first process becomes the "master" and the additional processes become "workers." When a request comes into our application, the master process distributes it to the workers in round-robin fashion: each new connection is handed to the next worker in the rotation. Rinse and repeat.
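As a side note, this round-robin behavior is Node.js's default scheduling policy on most platforms (on Windows, distribution is left to the operating system instead). If you're curious, you can inspect which policy is in effect via cluster.schedulingPolicy; a quick standalone sketch:

```javascript
import cluster from "cluster";

// SCHED_RR means round-robin (the master hands out connections in turn);
// SCHED_NONE leaves connection distribution to the operating system.
// Round-robin is the default everywhere except Windows.
const policy =
  cluster.schedulingPolicy === cluster.SCHED_RR
    ? "round-robin (master distributes connections)"
    : "os-level (operating system distributes connections)";

console.log(`Scheduling policy: ${policy}`);
```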

Setting up an example server

To get started and give us some context, we're going to set up a simple Node.js application using Express as an HTTP server. We want to create a new folder on our computer and then run:

npm init --force && npm i express

This will initialize our project using NPM—the Node.js Package Manager—and then install the express NPM package.

Be mindful of Node.js and NPM version here

For this tutorial, we're using Node.js v15.13.0 with NPM v7.7.6. Check out this tutorial on using NVM to install and manage different versions of Node.js.
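One setup note: the index.js we'll write below uses the ES module import syntax. For Node.js to allow that in .js files, your package.json needs "type": "module" set. After running the commands above, your package.json should look roughly like this (the name, version, and dependency versions shown here are just placeholders; yours will differ):

```json
{
  "name": "cluster-example",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "express": "^4.17.1"
  }
}
```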

After this is complete, we'll want to create an index.js file in our new project folder:

/index.js

import express from "express";

const app = express();

app.use("/", (req, res) => {
  res.send(
    `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
  );
});

app.listen(3000, () => {
  console.log("Application running on port 3000.");
});

Here, we import express from 'express' to pull express into our code. Next, we create an instance of express by calling that import as a function and assigning it to the variable app.

Next, we define a simple route at the root / of our application with app.use() and return some text to ensure things are working (this is just for show and won't have any real effect on our cluster implementation).

Finally, we call to app.listen() passing 3000 as the port (we'll be able to access the running application at http://localhost:3000 in our browser after we start the app). Though the message itself isn't terribly important, as a second argument to app.listen() we pass a callback function to log out a message when our application starts up. This will come in handy when we need to verify if our cluster support is working properly.

To make sure this all works, in your terminal, cd into the project folder and then run node index.js. If you see the following, you're all set:

$ node index.js
Application running on port 3000.

Adding Cluster support to Node.js

Now that we have our example application ready, we can start to implement cluster. The good news is that the cluster package is included in the Node.js core, so we don't need to install anything else.

To keep things clean, we're going to create a separate file for our Cluster-related code and use a callback pattern to tie it back to the rest of our code.

/cluster.js

import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};

Starting at the top, we import two dependencies (both of which are included with Node.js and do not need to be installed separately): cluster and os. The former gives us access to the code we'll need to manage our worker cluster and the latter helps us to detect the number of CPU cores available on the computer where our code is running.

Just below our imports, we export the function we'll call from our main index.js file later. This function is responsible for setting up our Cluster support. Note that it accepts an optional callback function as an argument. This will come in handy later.

Inside of our function, we use the aforementioned os package to communicate with the computer where our code is running. Here, we call to os.cpus().length expecting os.cpus() to return an array and then measuring the length of that array (representing the number of CPU cores on the computer).
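If you want to see what os.cpus() actually returns on your machine, here's a quick standalone sketch (the core count and model names will differ per computer):

```javascript
import os from "os";

const cores = os.cpus();

// os.cpus() returns one object per logical CPU core, each containing
// details like the core's model name and clock speed.
console.log(`Logical cores available: ${cores.length}`);
console.log(`First core model: ${cores[0].model}`);
```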

With that number, we can set up our Cluster. Most modern computers have at least two to four cores, but keep in mind that the number of workers created on your computer will differ from what's shown below. Read: don't panic if your number is different.

/cluster.js

[...]

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  }

[...]

The first thing we need to do is check whether the running process is the master instance of our application, not one of the workers that we'll create next. If it is the master instance, we do a for loop for the length of the cpus array we determined in the previous step. Here, we say "for as long as the value of i (our current loop iteration) is less than the number of CPUs we have available, run the following code."

The following code is how we create our workers. For each iteration of our for loop, we create a worker instance with cluster.fork(). This forks the running master process, returning a new child or worker instance.

Next, to help us relay messages between the workers we create and our master instance, we add an event listener for the message event to the worker we created, giving it a callback function.

That callback function says "if one of the workers sends a message, relay it to the master." So, here, when a worker sends a message, this callback function handles that message in the master process (in this case, we log out the message along with the pid of the worker that sent it).

This can be confusing. Remember, a worker is a running instance of our application. So, for example, if some event happens inside of a worker (we run some background task and it fails), we need a way to know about it.

In the next section, we'll take a look at how to send messages from within a worker that will pop out at this callback function.

One more detail before we move on, though. We've added one additional event handler here, but this time, we're saying "if any of the running worker processes emits an exit event, handle it with this callback." The "handling" part here is similar to what we did before, but with a slight twist: first, we log out a message along with the worker's pid to let us know the worker died. Next, to ensure our cluster recovers (meaning we maintain the max number of running processes available to us based on our CPU), we restart the process with cluster.fork().

To be clear: we'll only call cluster.fork() like this if a process dies.

/cluster.js

import cluster from "cluster";
import os from "os";

export default (callback = null) => {
  const cpus = os.cpus().length;

  if (cluster.isMaster) {
    for (let i = 0; i < cpus; i++) {
      const worker = cluster.fork();

      // Listen for messages FROM the worker process.
      worker.on("message", (message) => {
        console.log(`[${worker.process.pid} to MASTER]`, message);
      });
    }

    cluster.on("exit", (worker) => {
      console.warn(`[${worker.process.pid}]`, {
        message: "Process terminated. Restarting.",
      });

      cluster.fork();
    });
  } else {
    if (callback) callback();
  }
};

Finishing up with our Cluster code, at the bottom of our exported function we add an else statement to say "if this code is not being run in the master process, call the passed callback if there is one."

We need to do this because we only want our worker generation to take place inside of the master process, not any of the worker processes (otherwise we'd have an infinite loop of process creation that our computer wouldn't be thrilled about).

Putting the Node.js Cluster to use in our application

Okay, now for the easy part. With our Cluster code all set up in the other file, let's jump back to our index.js file and get everything set up:

/index.js

import express from "express";
import favicon from "serve-favicon";
import cluster from "./cluster.js";

cluster(() => {
  const app = express();

  app.use(favicon("public/favicon.ico"));

  app.use("/", (req, res) => {
    if (process.send) {
      process.send({ pid: process.pid, message: "Hello!" });
    }

    res.send(
      `"Sometimes a slow gradual approach does more good than a large gesture." - Craig Newmark`
    );
  });

  app.listen(3000, () => {
    console.log(`[${process.pid}] Application running on port 3000.`);
  });
});

We've added quite a bit here, so let's go step by step.

First, we've imported our cluster.js file up top as cluster. Next, we call that function, passing a callback function to it (this will be the value of the callback argument in the function exported by cluster.js).

Inside of that function, we've placed all of the code we wrote in index.js earlier, with a few modifications.

Immediately after we create our app instance with express(), you'll notice that we're calling app.use(), passing it a call to favicon("public/favicon.ico"). favicon() is a function from the serve-favicon dependency imported at the top of the file.

This is to reduce confusion. By default, when we visit our application in a browser, the browser will make two requests: one for the page and one for the app's favicon.ico file. Jumping ahead, when we call to process.send() inside of the callback for our route, we want to make sure that we don't get the request for the favicon.ico file in addition to our route.

Where this gets confusing is when we output messages from our worker. Because our route receives two requests, we'll end up getting two messages (which can look like things are broken).

To handle this, we install the dependency with npm i serve-favicon, import favicon from serve-favicon, and then add a call to app.use(favicon("public/favicon.ico"));. After this is added, you should also add a public folder to the root of the project and place an empty favicon.ico file inside of that folder.

Now, when requests come into the app, we'll only get a single message as the favicon.ico request will be handled via the favicon() middleware.

Continuing on, you'll notice that we've added something above our res.send() call for our root / route:

if (process.send) {
  process.send({ pid: process.pid, message: "Hello!" });
}

This is important. When we're working with a Cluster configuration in Node.js, we need to be aware of IPC or interprocess communication. This is a term used to describe the communication—or rather, the ability to communicate—between the master instance of our app and the workers.

Here, process.send() is a way to send a message from a worker instance back to the master instance. Why is that important? Well, because worker processes are forks of the main process, we want to treat them like they're children of the master process. If something happens inside of a worker relative to the health or status of the Cluster, it's helpful to have a way to notify the master process.

Where this may get confusing is that there's no clear tell that this code is related to a worker.

What you have to remember is that a worker is just the name used to describe an additional instance of our application, or here, in simpler terms, our Express server.

When we say process here, we're referring to the current Node.js process running this code. That could be our master instance or it could be a worker instance.

What separates the two is the if (process.send) {} statement. We do this because our master instance will not have a .send() method available, only our worker instances. When we call this method, the value we pass to process.send() (here we're passing an object with a pid and message, but you can pass anything you'd like) pops out in the worker.on("message") event handler that we set up in cluster.js:

/cluster.js

worker.on("message", (message) => {
  console.log(`[${worker.process.pid} to MASTER]`, message);
});

Now this should be making a little more sense (specifically the to MASTER part). You don't have to keep this in your own code, but it helps to explain how the processes are communicating.
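Although our app only ever sends messages from a worker to the master, the IPC channel works in both directions: the master can send to a specific worker with worker.send(), and the worker receives it via process.on("message"). Here's a minimal standalone sketch (separate from our app's code) that forks a single worker and exchanges one message each way:

```javascript
import cluster from "cluster";

if (cluster.isMaster) {
  const worker = cluster.fork();

  // Master -> worker: messages are buffered until the worker is ready.
  worker.send({ greeting: "Hello from the master." });

  // Worker -> master: arrives on the worker's "message" event.
  worker.on("message", (message) => {
    console.log("[MASTER] Received:", message);
    worker.kill(); // Clean up the worker so the example exits.
  });
} else {
  // In the worker, the master's messages arrive on the process object.
  process.on("message", (message) => {
    // Reply back to the master with process.send().
    process.send({ reply: "Hello from the worker.", received: message });
  });
}
```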

Running our Clustered server

Last step. To test things out, let's run our server. If everything is set up correctly, from the project folder in your terminal, run node index.js (again, be mindful of the Node.js version you're running):

$ node index.js
[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.

If everything is working, you should see something similar. The numbers on the left represent the process IDs for each instance generated, relative to the number of cores in your CPU. Here, my computer has a six-core processor, so I get six processes. If you had an eight-core processor, you'd expect to see eight processes.

Finally, now that our server is running, if we open up http://localhost:3000 in our browser and then check back in our terminal, we should see something like:

[25423] Application running on port 3000.
[25422] Application running on port 3000.
[25425] Application running on port 3000.
[25426] Application running on port 3000.
[25424] Application running on port 3000.
[25427] Application running on port 3000.
[25423 to MASTER] { pid: 25423, message: 'Hello!' }

The very last log statement is the message received in our worker.on("message") event handler, sent by our call to process.send() in the callback for our root / route handler (which is run when we visit our app at http://localhost:3000).

That's it!

Wrapping up

Above, we learned how to set up a simple Express server and convert it from a single Node.js process to a clustered, multi-process setup. With this, we can now scale our applications using less hardware by taking advantage of the full processing power of our server.
