How to Increase Performance of a Node.js Application

  • 2016-09-22

Node.js is focused on being the best way to write highly performant web applications. To understand how it achieves this, we need to know about the I/O scaling problem. Let us look at a rough estimate of the speed at which we can access data from various sources in terms of CPU cycles. In this article you will learn how to increase the performance of a Node.js application.

Comparing common I/O sources

Disk and network access are in a completely different category from accessing data that is available in RAM or the CPU cache; they are orders of magnitude slower.

Most web applications depend on reading data from disk or from another network source (for example, a database query). When an HTTP request is received and we need to load data from a database, most of the time spent handling that request is spent waiting for a disk read or a network call to complete.

These open connections and pending requests consume server resources (memory and CPU). Handling a large number of such requests from different clients on the same web server is the I/O scaling problem.

Traditional Web Servers Using a Process Per Request

Traditional servers used to spin up a new process to handle every single web request. Spinning up a new process for each request is an expensive operation, both in terms of CPU and memory. This is the way technologies like PHP used to work when they were first created.

In order to successfully reply to an HTTP request “A,” we need some data from a database. This read can potentially take a long time. For this entire read duration, we will have a process taking up CPU and memory while idling and waiting for the database response. Also, processes are slow to start and have a significant overhead in terms of RAM space. This does not scale for very long and that is the reason why modern web applications use a thread pool.

Modern servers use a thread from a thread pool to serve each request. Since we already have a few Operating System (OS) threads created (hence a thread pool), we do not pay the penalty of starting and stopping OS processes (which are expensive to create and take up much more memory than a thread). When a request comes in, we assign a thread to it, and that thread is reserved for the request for the entire duration that the request is being handled. Because we save the overhead of creating a new process every time, and because threads are lighter than processes, this method is much better than the original server design. Most web servers used this method a few years back and many continue to use it today. However, this method is not without drawbacks: RAM is still wasted across threads, and the OS needs to context switch between threads (even when they are idle), which wastes CPU resources.

The Nginx Way

We have seen that creating separate processes and separate threads to handle requests results in wasted OS resources. The way Node.js works is that there is a single thread handling requests. The idea that a single threaded server can perform better than a thread pool server is not new to Node.js. Nginx is built on this principle.

Nginx is a single-threaded web server and can handle a tremendous number of concurrent requests. A simple benchmark comparing Nginx to Apache, both serving a single static file from the file system, is shown below.

Nginx vs. Apache: requests/second vs. concurrent open connections

As you can see, when the number of concurrent connections goes up, Nginx can handle a lot more requests per second than Apache. What is more interesting is the memory consumption, as shown below.

Nginx vs. Apache: memory usage vs. concurrent connections

With more concurrent connections, Apache needs to manage more threads and therefore consumes much more memory, whereas Nginx stays at a steady level.

Node.js Performance Secret

There is a single execution thread in JavaScript. This is the way web browsers have traditionally worked. If you have a long-running operation (such as waiting for a timer to complete or a database query to return), you must continue the operation using a callback. The listing below provides a simple demo that uses the JavaScript runtime setTimeout function to simulate a long-running operation. You can run this code using Node.js.

simulateUserClick.js

function longRunningOperation(callback) {
    // simulate a 3 second operation
    setTimeout(callback, 3000);
}

function userClicked() {
    console.log('starting a long operation');
    longRunningOperation(function () {
        console.log('ending a long operation');
    });
}
// simulate a user action
userClicked();

This simulation is possible in JavaScript because we have first-class functions, and passing a function as a callback is a well-supported pattern in the language. Things become interesting when you combine first-class functions with the concept of closures. Let us imagine that we are handling a web request and we have a long-running operation, such as a database query, that we need to do. A simulated version is shown below.

simulateWebRequest.js

function longRunningOperation(callback) {
    // simulate a 3 second operation
    setTimeout(callback, 3000);
}

function webRequest(request) {
    console.log('starting a long operation for request:', request.id);
    longRunningOperation(function () {
        console.log('ending a long operation for request:', request.id);
    });
}
// simulate a web request
webRequest({ id: 1 });
// simulate a second web request
webRequest({ id: 2 });

Because of closures, we have access to the correct user request after each long-running operation completes. We just handled two requests on a single thread without breaking a sweat. Now you should understand the following statement: “Node.js is highly performant, and it uses JavaScript because JavaScript supports first-class functions and closures.”

The immediate question that should come to mind when someone tells you that you only have a single thread to handle requests is, “But my computer has a quad-core CPU. Using only a single thread will surely waste resources.” And the answer is that yes, it will. However, there is a well-supported way around it that we will examine later when discussing deployment and scalability. Just a quick tip about what you will see there: it is actually really simple to use all the CPU cores by running a separate JavaScript process per core with Node.js.
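
As a quick preview, the sketch below is illustrative only (it is not one of the original listings): it uses Node.js's built-in cluster module to fork one worker process per CPU core, and each worker runs its own single-threaded event loop.

clusterSketch.js (illustrative)

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker process per CPU core
    os.cpus().forEach(function () {
        cluster.fork();
    });
} else {
    // Each worker is a separate JavaScript process with its own event loop;
    // the cluster module lets the workers share the same port.
    http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid);
    }).listen(3000);
}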

It is also important to note that there are threads managed by Node.js at the C level (such as for certain file system operations), but all the JavaScript executes in a single thread. This gives you the performance advantage of JavaScript almost completely owning at least one thread.
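
As a small illustration (again, not one of the original listings), a file read is handed off to those internal threads while the single JavaScript thread keeps running:

readFileSketch.js (illustrative)

var fs = require('fs');

// The actual disk read happens on an internal C-level thread;
// only the callback runs on the JavaScript thread.
fs.readFile(__filename, function (err, data) {
    if (err) throw err;
    console.log('file read finished:', data.length, 'bytes');
});

console.log('this line runs before the file read completes');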

More Node.js Internals

It is not terribly important to understand the internals of how Node.js works, but a bit more discussion will make you more aware of the terminology when you discuss Node.js with your peers. At the heart of Node.js is an event loop.

Event loops enable any GUI application to work on any operating system. The OS calls a function within your application when something happens (for example, the user clicks a button), and then your application executes the logic contained inside this function to completion. Afterward, your application is ready to respond to new events that might have already arrived (and are there on the queue) or that might arrive later (based on user interaction).
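
You can see the same behavior in a tiny Node.js snippet (an illustrative sketch, not one of the original listings): an event queued while the current function is running is handled only after that function runs to completion.

eventLoopSketch.js (illustrative)

console.log('handler start');

// Queue an event; even with a 0ms delay it cannot run
// until the currently executing function has finished.
setTimeout(function () {
    console.log('queued event handled');
}, 0);

console.log('handler end');
// Output order: handler start, handler end, queued event handled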

Thread Starvation

Generally, while a function called from an event is executing in a GUI application, no other events can be processed. Consequently, if you do a long-running task within something like a click handler, the GUI becomes unresponsive. This is something every computer user I have met has experienced at one point or another. This lack of availability of CPU resources is called starvation.

Node.js is built on the same event loop principle as you find in GUI programs. Therefore, it too can suffer from starvation. To understand it better, let’s go through a few code examples. The listing below shows a small snippet of code that measures elapsed time using the console.time and console.timeEnd functions.

timeit.js

console.time('timer');
setTimeout(function () {
    console.timeEnd('timer');
}, 1000);

If you run this code, you should see a number quite close to what you would expect—in other words, 1000ms. This callback for the timeout is called from the Node.js event loop.

Now let’s write some code that takes a long time to execute, for instance, a non-optimized method of calculating the nth Fibonacci number, as shown below.

largeOperation.js

console.time('timeit');
function fibonacci(n) {
    if (n < 2)
        return 1;
    else
        return fibonacci(n - 2) + fibonacci(n - 1);
}
fibonacci(44);             // modify this number based on your system performance
console.timeEnd('timeit'); // On my system it takes about 9000ms (i.e. 9 seconds)

Now we have an event that can be raised from the Node.js event loop (setTimeout) and a function that can keep the JavaScript thread busy (fibonacci). We can now demonstrate starvation in Node.js. Let’s set up a time-out to execute. But before this time-out completes, we execute a function that takes a lot of CPU time and therefore holds up the CPU and the JavaScript thread. While this function is holding on to the JavaScript thread, the event loop cannot call anything else, and therefore the time-out is delayed, as demonstrated below.

starveit.js

// utility function
function fibonacci(n) {
    if (n < 2)
        return 1;
    else
        return fibonacci(n - 2) + fibonacci(n - 1);
}

// setup the timer
console.time('timer');
setTimeout(function () {
    console.timeEnd('timer'); // Prints much more than 1000ms
}, 1000);

// Start the long running operation
fibonacci(44);

One lesson here is that Node.js is not the best option if you have a CPU-heavy task that you need to do on a client request in a multiclient server environment. However, if that is the case, you will be hard-pressed to find a scalable software solution on any platform. Most CPU-heavy tasks should take place offline and are generally offloaded to a database server using things such as materialized views, map reduce, and so on. Most web applications access the results of these computations over the network, and this is where Node.js shines: evented network I/O.
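
If you really must run a CPU-heavy computation from Node.js itself, one common workaround (shown here only as a hedged sketch; the file names and the input value are made up for illustration) is to push the work into a separate process with the built-in child_process module so the main event loop stays responsive.

fibParent.js (illustrative)

var fork = require('child_process').fork;

// fibWorker.js (below) is a hypothetical helper file for this sketch
var worker = fork('./fibWorker.js');
worker.on('message', function (result) {
    console.log('fibonacci result:', result);
    worker.kill();
});
worker.send(44);

// The main event loop stays free, so this timer fires roughly on time
setTimeout(function () {
    console.log('event loop is not starved');
}, 1000);

fibWorker.js (illustrative)

// Runs in its own process, so it cannot starve the parent's event loop
function fibonacci(n) {
    if (n < 2) return 1;
    return fibonacci(n - 2) + fibonacci(n - 1);
}

process.on('message', function (n) {
    process.send(fibonacci(n));
});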

Now that you understand what an event loop means and the implications of the fact that the JavaScript portion of Node.js is single-threaded, let’s take another look at why Node.js is great for I/O applications.

Data-Intensive Applications

Node.js is great for data-intensive applications. As we have seen, using a single thread means that Node.js has an extremely low memory footprint when used as a web server and can potentially serve a lot more requests. Consider the simple scenario of a data-intensive application that serves a dataset from a database to clients via HTTP. We know that gathering the data needed to respond to the client query takes a long time compared to executing code and/or reading data from RAM. The figure below shows how a traditional web server with a thread pool would look while responding to just two requests.

How a traditional server handles two requests

The same server in Node.js is shown below. All the work happens inside a single thread, which results in lower memory consumption and, due to the lack of thread context switching, lower CPU load. Implementation-wise, handleClientRequest is a simple function that calls out to the database (using a callback). When that callback returns, it completes the request using the request object it captured with a JavaScript closure. This is shown in the pseudocode below.

How a Node.js server handles two requests

handleClientRequest.js

function handleClientRequest(request) {
    makeDbCall(request.someInfo, function (result) {
        // The request corresponds to the correct db result because of closure
        request.complete(result);
    });
}

Note that the HTTP request to the database is also managed by the event loop. The advantage of having async IO and why JavaScript + Node.js is a great fit for data-intensive applications should now be clear.
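
To make the pseudocode concrete, here is a small self-contained sketch (not from the original article; the 500ms setTimeout merely stands in for a real database call): a plain Node.js HTTP server in which every request is completed from a callback, all on one thread.

asyncServerSketch.js (illustrative)

var http = require('http');

// Stand-in for a database query: invokes the callback after ~500ms
function makeDbCall(someInfo, callback) {
    setTimeout(function () {
        callback('result for ' + someInfo);
    }, 500);
}

http.createServer(function (request, response) {
    makeDbCall(request.url, function (result) {
        // The closure captures the correct response object for this request
        response.end(result);
    });
}).listen(3000);

console.log('server listening on http://localhost:3000');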

The V8 JavaScript Engine

It is worth mentioning that all the JavaScript inside Node.js is executed by the V8 JavaScript engine. V8 came into being with the Google Chrome project. V8 is the part of Chrome that runs the JavaScript when you visit a web page.

Anybody who has done any web development knows how amazing Google Chrome has been for the web. The browser usage statistics reflect that quite clearly. According to w3schools.org (www.w3schools.com/browsers/browsers_stats.asp), nearly 56% of Internet users who visit their web site are now using Google Chrome. There are lots of reasons for this, but V8 and its speed is a very important factor. Besides speed, another reason for using V8 is that the Google engineers made it easy to integrate into other projects, and that it is platform independent.

Source via: dunebook.com
