1
Welcome to the Node.js Platform
Some principles and design patterns literally define the Node.js platform and its ecosystem; the most peculiar ones are probably its asynchronous nature and its programming style that makes heavy use of callbacks. It's important that we first dive into these fundamental principles and patterns, not only for writing correct code, but also to be able to take effective design decisions when it comes to solving bigger and more complex problems.
Another aspect that characterizes Node.js is its philosophy. Approaching Node.js is in fact way more than simply learning a new technology; it's also embracing a culture and a community. We will see how this greatly influences the way we design our applications and components, and the way they interact with those created by the community.
In addition to these aspects it's worth knowing that the latest versions of Node.js introduced support for many of the features described by ES2015, which makes the language even more expressive and enjoyable to use. It is important to embrace these new syntactical and functional additions to the language to be able to produce more concise and readable code and come up with alternative ways to implement the design patterns that we are going to see throughout this book.
In this chapter, we will learn the following topics:
The Node.js philosophy, the Node way
Node 5 and ES2015
The reactor pattern: the mechanism at the heart of the Node.js asynchronous architecture
The Node.js philosophy
Every platform has its own philosophy: a set of principles and guidelines that are generally accepted by the community, an ideology of doing things that influences the evolution of a platform, and how applications are developed and designed. Some of these principles arise from the technology itself, some of them are enabled by its ecosystem, some are just trends in the community, and others are evolutions of different ideologies. In Node.js, some of these principles come directly from its creator, Ryan Dahl, from all the people who contributed to the core, from charismatic figures in the community, and some of the principles are inherited from the JavaScript culture or are influenced by the Unix philosophy.
None of these rules are imposed and they should always be applied with common sense; however, they can prove to be tremendously useful when we are looking for a source of inspiration while designing our programs.
You can find an extensive list of software development philosophies on Wikipedia at http://en.wikipedia.org/wiki/List_of_software_development_philosophies
Small core
The Node.js core itself has its foundations built on a few principles; one of these is having the smallest possible set of functionality, leaving the rest to the so-called userland (or userspace), the ecosystem of modules living outside the core. This principle has an enormous impact on the Node.js culture, as it gives the community the freedom to experiment and iterate quickly on a broader set of solutions within the scope of userland modules, instead of having one slowly evolving solution imposed from the more tightly controlled and stable core. Keeping the core set of functionality to the bare minimum is then not only convenient in terms of maintainability, but also in terms of the positive cultural impact that it brings on the evolution of the entire ecosystem.
Small modules
Node.js uses the concept of the module as the fundamental means for structuring the code of a program. It is the building block for creating applications and reusable libraries called packages (a package is also frequently referred to simply as a module, since it usually has a single module as an entry point). In Node.js, one of the most evangelized principles is to design small modules, not only in terms of code size, but most importantly in terms of scope.
This principle has its roots in the Unix philosophy, particularly in two of its precepts, which are as follows:
Small is beautiful. Make each program do one thing well.
Node.js brought these concepts to a whole new level. With the help of npm, the official package manager, Node.js helps solve the dependency hell problem by making sure that each installed package has its own separate set of dependencies, thus enabling a program to depend on many packages without incurring conflicts. The Node way, in fact, involves extreme levels of reusability, whereby applications are composed of a high number of small, well-focused dependencies. While this can be considered impractical or even totally unfeasible on other platforms, in Node.js this practice is encouraged. As a consequence, it is not rare to find npm packages containing less than 100 lines of code or exposing only one single function.
Besides the clear advantage in terms of reusability, a small module is also considered to be the following:
Easier to understand and use
Simpler to test and maintain
Perfect to share with the browser
Having smaller and more focused modules empowers everyone to share or reuse even the smallest piece of code; it's the Don't Repeat Yourself (DRY) principle applied at a whole new level.
Small surface area
In addition to being small in size and scope, Node.js modules usually also have the characteristic of exposing only a minimal set of functionality. The main advantage here is an increased usability of the API, which means that the API becomes clearer to use and is less exposed to erroneous usage. Most of the time, in fact, the user of a component is interested only in a very limited and focused set of features, without the need to extend its functionality or tap into more advanced aspects.
In Node.js, a very common pattern for defining modules is to expose only one piece of functionality, such as a function or a constructor, while letting more advanced aspects or secondary features become properties of the exported function or constructor. This helps the user identify what is important and what is secondary. It is not rare to find modules that expose only one function and nothing else, for the simple reason that it provides a single, unmistakably clear entry point.
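For instance, a hypothetical module shaped after this pattern could look like the following (a sketch; the names are purely illustrative, not a real package):

// the module exposes one function as its single, clear entry point
module.exports = function parse(text) {
  return JSON.parse(text); // main functionality
};

// secondary features become properties of the exported function
module.exports.stringify = function(obj) {
  return JSON.stringify(obj); // more advanced, secondary aspect
};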
Another characteristic of many Node.js modules is the fact that they are created to be used rather than extended. Locking down the internals of a module by forbidding any possibility of an extension might sound inflexible, but it actually has the advantage of reducing the use cases, simplifying its implementation, facilitating its maintenance, and increasing its usability.
Simplicity and pragmatism
Have you ever heard of the Keep It Simple, Stupid (KISS) principle? Or the famous quote:
Simplicity is the ultimate sophistication.
Leonardo da Vinci
Richard P. Gabriel, a prominent computer scientist, coined the term worse is better to describe the model whereby less and simpler functionality is a good design choice for software. In his essay, The Rise of Worse is Better, he says:
The design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
Designing simple, as opposed to perfect, feature-full software is a good practice for several reasons: it takes less effort to implement, allows faster shipping with fewer resources, is easier to adapt, and is easier to maintain and understand. These factors foster community contributions and allow the software itself to grow and improve.
In Node.js, this principle is also enabled by JavaScript, which is a very pragmatic language. It's not rare, in fact, to see simple functions, closures, and object literals replacing complex class hierarchies. Pure object-oriented designs often try to replicate the real world using the mathematical terms of a computer system, without considering the imperfection and complexity of the real world itself. The truth is that our software is always an approximation of reality, and we would probably have more success trying to get something working sooner and with reasonable complexity, instead of trying to create near-perfect software with huge effort and tons of code to maintain.
Throughout this book, we will see this principle in action many times. For example, a considerable number of traditional design patterns, such as Singleton or Decorator, can have a trivial, even if sometimes not foolproof, implementation, and we will see how an uncomplicated, practical approach is most of the time preferred to a pure, flawless design.
Introduction to Node 5 and ES2015
At the time of writing, the most recent major versions of Node.js (Node 4 and Node 5) come with a great addition to the language: wide support for many of the new features introduced in the ECMAScript 2015 specification (ES2015 for short, formerly also known as ES6), which aims to make the language even more flexible and enjoyable.
Throughout this book, we will widely adopt some of these new features in the code examples. These concepts are still fresh within the Node.js community so it's worth having a quick look at the most important ES2015 specific features currently supported in Node.js. Our version of reference is Node 5, more specifically version 5.1.
Many of these features will work correctly only when strict mode is enabled. Strict mode can be easily enabled by adding a 'use strict'; statement at the very beginning of your script. For the sake of brevity, we will not write this line in our code examples, but you should remember to add it to be able to run them correctly.
The following list is not meant to be exhaustive but just an introduction to the ES2015 features supported in Node, so that you can easily understand all the code examples in the rest of this book.
Arrow functions
One of the most appreciated features introduced by ES2015 is support for arrow functions. An arrow function is a more concise syntax for defining functions, especially useful when defining a callback. To better understand the advantages of this syntax, let's first see an example of classic filtering of an array:
var numbers = [2, 6, 7, 8, 1];
var even = numbers.filter(function(x) { return x%2 === 0; });
The code above can be rewritten as follows using the arrow function syntax:
var numbers = [2, 6, 7, 8, 1];
var even = numbers.filter((x) => x%2 === 0);
The filter function can be defined inline, the keyword function is removed, leaving only the list of parameters, which is followed by => (the arrow), which in turn is followed by the body of the function. When the body of the function is just one line there's no need to write the return keyword as it is applied implicitly. If we need to add more lines of code to the body of the function, we can wrap them in curly brackets, but beware that in this case the
return is not automatically implied, so it needs to be stated explicitly, as in the following example:
var numbers = [2, 6, 7, 8, 1];
var even = numbers.filter((x) => {
  if (x%2 === 0) {
    console.log(x + ' is even!');
    return true;
  }
});
But there is another important feature to know about arrow functions: arrow functions are bound to their lexical scope. This means that inside an arrow function the value of this is the same as in the parent block. Let's clarify this concept with an example:
function DelayedGreeter(name) {
  this.name = name;
}

DelayedGreeter.prototype.greet = function() {
  setTimeout(function cb() {
    console.log('Hello ' + this.name);
  }, 500);
};

let greeter = new DelayedGreeter('World');
greeter.greet(); // will print "Hello undefined"
In this code we are defining a simple greeter prototype which accepts a name as an argument. Then we are adding the greet method to the prototype. This function is supposed to print Hello followed by the name stored in the current instance, 500 milliseconds after it has been called. But this function is broken, because inside the timeout callback function (cb) the scope is different from the scope of the greet method, and the value of this is undefined.
Before Node.js introduced support for arrow functions, the way to fix this was to change the greet function to use bind, with something along these lines:
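DelayedGreeter.prototype.greet = function() {
  setTimeout((function cb() {
    console.log('Hello ' + this.name);
  }).bind(this), 500);
};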
But since we now have arrow functions, and since they are bound to their lexical scope, we can simply use an arrow function as the callback to solve the issue:
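DelayedGreeter.prototype.greet = function() {
  setTimeout(() => console.log('Hello ' + this.name), 500);
};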
This is a very handy feature; most of the time it makes our code more concise and straightforward.
Let and Const
Historically, JavaScript offered only function scope and global scope to control the lifetime and visibility of a variable. For instance, if you declare a variable inside the body of an if statement, the variable will be accessible outside the statement, whether or not the body of the statement has been executed. Let's see this more clearly with an example:
if (false) {
  var x = "hello";
}
console.log(x);
This code will not fail as we might expect, and it will just print undefined to the console. This behavior has been the cause of many bugs and much frustration, and that is the reason why ES2015 introduces the let keyword, to declare variables that respect block scope. Let's replace var with let in our previous example:
if (false) {
  let x = "hello";
}
console.log(x);
This code will raise a ReferenceError: x is not defined because we are trying to print a variable that has been defined inside another block scope.
To give a more meaningful example we can use the let keyword to define a temporary variable to be used as an index for a loop:
for (let i = 0; i < 10; i++) {
  // do something here
}
console.log(i);
As in the previous example, this code will raise a ReferenceError: i is not defined error.
This protective behavior introduced with let allows us to write safer code, because if we accidentally access variables that belong to another scope we will get an error that will
allow us to easily spot the bug and avoid potentially dangerous side effects.
ES2015 also introduces the const keyword, which allows us to declare read-only variables. Let's see a quick example:
const x = 'This will never change';
x = '...';
This code will raise a TypeError: Assignment to constant variable error because we are trying to change the value of a constant.
Constants are extremely useful when you want to protect a value from being accidentally changed in your code.
Class syntax
ES2015 introduces a new syntax to leverage prototypal inheritance in a way that should sound more familiar to developers coming from classic object-oriented languages such as Java or C#. It's important to underline that this new syntax does not change the way objects are managed internally by the JavaScript runtime; they still inherit properties and functions through prototypes and not through classes. While this new alternative syntax can be very handy and readable, as developers it is important to understand that it is just syntactic sugar.
Let's see how it works with a trivial example. First of all, let's describe a Person using the classic prototype-based approach; a minimal version could look like this:
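function Person(name, surname, age) {
  this.name = name;
  this.surname = surname;
  this.age = age;
}

Person.prototype.getFullName = function() {
  return this.name + ' ' + this.surname;
};

// generic helper attached directly to Person
Person.older = function(person1, person2) {
  return (person1.age >= person2.age) ? person1 : person2;
};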
As you can see, a person has a name, a surname, and an age. We are adding to the prototype a helper function that allows us to easily get the full name of a person object, and a generic helper function, accessible directly from Person, that returns the older of two Person instances given as input.
Let's now see how we can implement the same example using the handy new ES2015 class syntax; one possible version looks like this:
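class Person {
  constructor(name, surname, age) {
    this.name = name;
    this.surname = surname;
    this.age = age;
  }

  getFullName() {
    return this.name + ' ' + this.surname;
  }

  static older(person1, person2) {
    return (person1.age >= person2.age) ? person1 : person2;
  }
}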
This syntax is more readable and straightforward to understand. We are explicitly stating what the constructor for the class is, and declaring the older function as a static method.
Both implementations are completely interchangeable, but the real killer feature of the new syntax is the possibility of extending the Person prototype using the extends and super keywords. Let's assume we want to create a PersonWithMiddlename class:
class PersonWithMiddlename extends Person {
  constructor(name, middlename, surname, age) {
    super(name, surname, age);
    this.middlename = middlename;
  }

  getFullName() {
    return this.name + ' ' + this.middlename + ' ' + this.surname;
  }
}
What is worth noticing in this second example is that the syntax really resembles what is common in other object-oriented languages. We declare the class from which we want to extend, we define a new constructor that can call the parent one using the keyword super, and we override the getFullName method to add support for our middle name.
Enhanced Object literals
Along with the new class syntax, ES2015 introduced an enhanced object literal syntax. This syntax offers a shorthand for assigning variables and functions as members of an object, allows us to define computed member names at creation time, and also provides handy setter and getter methods.
Let's make all of this clear with some examples:
let x = 22;
let y = 17;
let obj = { x, y };
obj will be an object containing the keys x and y, respectively with the values 22 and 17.
We can do the same thing with functions:
module.exports = {
  square (x) {
    return x * x;
  },
  cube (x) {
    return x * x * x;
  }
};
In this case we are writing a module that exports the functions square and cube mapped to properties with the same name. Notice that we don't need to specify the keyword function.
Let's see in another example how we can define computed properties using the new setter and getter syntax (a sketch, with names chosen to match the output described next):
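let person = {
  name: 'George',
  surname: 'Boole',

  get fullname() {
    return this.name + ' ' + this.surname;
  },

  set fullname(fullname) {
    let parts = fullname.split(' ');
    this.name = parts[0];
    this.surname = parts[1];
  }
};

console.log(person.fullname); // "George Boole"
console.log(person.fullname = 'Alan Turing'); // "Alan Turing"
console.log(person.name); // "Alan"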
In this example, we are defining three properties: two normal ones, name and surname, and a computed property, fullname, defined through the set and get syntax. As you can see from the results of the console.log calls, we can access the computed property as if it were a regular property of the object, for both reading and writing the value. It's worth noticing that the second call to console.log prints Alan Turing. This happens because the value of an assignment expression is the value that was assigned, in this case the string passed to set fullname.
Map and Set collections
As JavaScript developers, we are used to creating hash maps using plain objects. ES2015 introduces a new prototype called Map that is specifically designed for hash map collections, in a more secure, flexible, and intuitive way. Let's see a quick example:
let profiles = new Map();
profiles.set('twitter', '@adalovelace');
profiles.set('facebook', 'adalovelace');
profiles.set('googleplus', 'ada');
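We can then query and update the map, and iterate over all its entries; a brief continuation of the example above:

profiles.size; // 3
profiles.has('twitter'); // true
profiles.get('twitter'); // "@adalovelace"
profiles.delete('facebook');
profiles.has('facebook'); // false

for (let entry of profiles) {
  // every entry is an array: [key, value]
  console.log(entry);
}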
As you can see the Map prototype offers several handy methods like set, get, has and delete and the size attribute. We can also iterate through all the entries using the for ... of syntax. Every entry in the loop will be an array containing the key as first element and the value as second element. This interface is very intuitive and self-explanatory.
But what makes maps really interesting is the possibility of using functions and objects as keys, something that is not possible with plain objects, because object keys are automatically cast to strings. This opens up new opportunities; for example, we can build a micro testing framework leveraging this feature:
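// a minimal sketch: test functions as keys, expected results as values
let tests = new Map();
tests.set(() => 2 + 2, 4);
tests.set(() => 2 * 2, 4);
tests.set(() => 2 / 2, 1);

for (let entry of tests) {
  // entry[0] is the test function, entry[1] the expected result
  console.log((entry[0]() === entry[1]) ? 'PASS' : 'FAIL');
}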
As you can see in this last example, we are storing functions as keys and expected results as values. Then we can iterate through our hash map and execute all the functions. It's also worth noticing that when we iterate through the map, all the entries respect the order in which they were inserted; this is something that was not always guaranteed with plain objects.
Along with Map, ES2015 also introduces the Set prototype. This prototype allows us to easily construct sets, which means lists with unique values.
let s = new Set([0, 1, 2, 3]);
s.add(3); // will not be added
s.size; // 4
s.delete(0);
s.has(0); // false

for (let entry of s) {
  console.log(entry);
}
As you can see in this example the interface is quite similar to the one we have just seen for
Map. We have the methods add (instead of set), has and delete and the property size. We can also iterate through the set and in this case every entry is a value, in our example it will be one of the numbers in the set. Finally, sets can also contain objects and functions as values.
WeakMap and WeakSet collections
ES2015 also defines weak versions of the Map and Set prototypes, called WeakMap and WeakSet.
WeakMap is quite similar to Map in terms of interface; there are only two main differences: there is no way to iterate over its entries, and it allows only objects as keys. While this might seem like a limitation, there is a good reason behind it: the distinctive feature of WeakMap is that it allows objects used as keys to be garbage collected when the only reference left to them is inside the WeakMap. This is extremely useful when we are storing metadata associated with an object that might get deleted during the regular lifetime of the application. Let's see an example:
let obj = {};
let map = new WeakMap();
map.set(obj, {metadata: "some_metadata"});
console.log(map.get(obj)); // {metadata: "some_metadata"}
obj = undefined; // now obj and metadata will be cleaned up in the next gc cycle
In this code, we are creating a plain object called obj. Then we store some metadata for this object in a new WeakMap called map. We can access this metadata with the map.get method. Later, when we clean up the object by assigning undefined to its variable, the object will be correctly garbage collected and its metadata removed from the map.
Similarly to WeakMap, WeakSet is the weak version of Set: it exposes the same interface as Set, but it allows only objects to be stored and cannot be iterated. Again, the difference from Set is that WeakSet allows objects to be garbage collected when their only remaining reference is in the weak set.
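For example, a quick sketch:

let obj1 = {};
let obj2 = {};
let set = new WeakSet([obj1, obj2]);
console.log(set.has(obj1)); // true
obj1 = undefined; // obj1 can now be garbage collected and will disappear from the set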
It's important to understand that WeakMap and WeakSet are not better or worse than Map and Set, they are simply more suitable for different use cases.
Template Literals
ES2015 offers a new, alternative, and more powerful syntax for defining strings: template literals. This syntax uses backticks (`) as delimiters and offers several benefits compared to regular quoted (') or double-quoted (") strings. The main ones are that template literal syntax can interpolate variables or expressions using ${expression} inside the string (this is the reason why the syntax is called template) and that strings can finally span multiple lines. Let's see a quick example:
let name = "Leonardo"; let interests = ["arts", "architecture", "science", "music", "mathematics"];
let birth = { year : 1452, place : 'Florence' }; let text = `${name} was an Italian polymath interested in many topics such as ${interests.join(', ')}.
He was born in ${birth.year} in ${birth.place}.`; console.log(text);
This code will print:
Leonardo was an Italian polymath interested in many topics such as arts, architecture, science, music, mathematics.
He was born in 1452 in Florence.
A more extended and up-to-date list of all the supported ES2015 features is available in the official Node.js documentation: https://nodejs.org/en/docs/es6/
The reactor pattern
In this section, we will analyze the reactor pattern, which is the heart of the Node.js asynchronous nature. We will go through the main concepts behind the pattern, such as the single-threaded architecture and the non-blocking I/O, and we will see how this creates the foundation for the entire Node.js platform.
I/O is slow
I/O is definitely the slowest among the fundamental operations of a computer. Accessing the RAM is in the order of nanoseconds (10^-9 seconds), while accessing data on the disk or the network is in the order of milliseconds (10^-3 seconds). It is the same story for bandwidth: RAM has a transfer rate consistently in the order of GB/s, while disk and network vary from MB/s to, optimistically, GB/s. I/O is usually not expensive in terms of CPU, but it adds a delay between the moment the request is sent and the moment the operation completes. On top of that, we also have to consider the human factor; often, the input of an application comes from a real person, for example, the click of a button or a message sent in a real-time chat application, so the speed and frequency of I/O don't depend only on technical aspects, and they can be many orders of magnitude slower than the disk or network.
Blocking I/O
In traditional blocking I/O programming, the function call corresponding to an I/O request will block the execution of the thread until the operation completes. This can go from a few milliseconds, in case of a disk access, to minutes or even more, in case the data is generated from user actions, such as pressing a key. The following pseudocode shows a typical blocking read performed against a socket:
//blocks the thread until the data is available
data = socket.read();
//data is available
print(data);
It is trivial to notice that a web server that is implemented using blocking I/O will not be able to handle multiple connections in the same thread; each I/O operation on a socket will block the processing of any other connection. For this reason, the traditional approach to handle concurrency in web servers is to kick off a thread or a process (or to reuse one taken from a pool) for each concurrent connection that needs to be handled. This way, when a thread gets blocked for an I/O operation it will not impact the availability of the other requests, because they are handled in separate threads.
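The following pseudocode sketches this thread-per-connection model (the names are illustrative, in the same style as the previous snippet):

//main thread: accept connections, hand each one to a dedicated thread
while(true) {
  connection = server.accept(); //blocks until a new connection arrives
  startThread(() => {
    //only this connection's thread is blocked by the read
    data = connection.read();
    print(data);
  });
}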
The following image illustrates this scenario:
The preceding image lays emphasis on the amount of time each thread is idle waiting for new data to be received from the associated connection. Now, if we also consider that any type of I/O can possibly block a request, for example, while interacting with databases or with the filesystem, we soon realize how many times a thread has to block in order to wait for the result of an I/O operation. Unfortunately, a thread is not cheap in terms of system resources: it consumes memory and causes context switches, so having a long-running thread for each connection, and not using it for most of the time, is not the best compromise in terms of efficiency.
Non-blocking I/O
In addition to blocking I/O, most modern operating systems support another mechanism to access resources, called non-blocking I/O. In this operating mode, the system call always returns immediately without waiting for the data to be read or written. If no results are available at the moment of the call, the function will simply return a predefined constant, indicating that there is no data available to return at that moment.
For example, in Unix operating systems, the fcntl() function is used to manipulate an existing file descriptor to change its operating mode to non-blocking (with the O_NONBLOCK flag). Once the resource is in non-blocking mode, any read operation will fail with a return code, EAGAIN, in case the resource doesn't have any data ready to be read.
The most basic pattern for accessing this kind of non-blocking I/O is to actively poll the resource within a loop until some actual data is returned; this is called busy-waiting. The following pseudocode shows you how it's possible to read from multiple resources using busy-waiting:

resources = [socketA, socketB, pipeA];
while(!resources.isEmpty()) {
  for(i = 0; i < resources.length; i++) {
    resource = resources[i];
    //try to read
    data = resource.read();
    if(data === NO_DATA_AVAILABLE)
      //there is no data to read at the moment
      continue;
    if(data === RESOURCE_CLOSED)
      //the resource was closed, remove it from the list
      resources.remove(i);
    else
      //some data was received, process it
      consumeData(data);
  }
}
You can see that, with this simple technique, it is already possible to handle different resources in the same thread, but it's still not efficient. In fact, in the preceding example, the loop will consume precious CPU only for iterating over resources that are unavailable most of the time. Polling algorithms usually result in a huge amount of wasted CPU time.
Event demultiplexing
Busy-waiting is definitely not an ideal technique for processing non-blocking resources, but luckily, most modern operating systems provide a native mechanism to handle concurrent, non-blocking resources in an efficient way; this mechanism is called the synchronous event demultiplexer or event notification interface. This component collects and queues I/O events that come from a set of watched resources, and blocks until new events are available to process. The following is the pseudocode of an algorithm that uses a generic synchronous event demultiplexer to read from two different resources:
socketA, pipeB;
watchedList.add(socketA, FOR_READ); //[1]
watchedList.add(pipeB, FOR_READ);
while(events = demultiplexer.watch(watchedList)) { //[2]
  //event loop
  foreach(event in events) { //[3]
    //This read will never block and will always return data
    data = event.resource.read();
    if(data === RESOURCE_CLOSED)
      //the resource was closed, remove it from the watched list
      demultiplexer.unwatch(event.resource);
    else
      //some actual data was received, process it
      consumeData(data);
  }
}
These are the important steps of the preceding pseudocode:
1. The resources are added to a data structure, associating each one of them with a specific operation, in our example a read.
2. The event notifier is set up with the group of resources to be watched. This call is synchronous and blocks until any of the watched resources is ready for a read. When this occurs, the event demultiplexer returns from the call and a new set of events is available to be processed.
3. Each event returned by the event demultiplexer is processed. At this point, the resource associated with each event is guaranteed to be ready to read and to not block during the operation. When all the events are processed, the flow will block again on the event demultiplexer until new events are again available to be processed. This is called the event loop.
It's interesting to see that with this pattern, we can now handle several I/O operations inside a single thread, without using a busy-waiting technique. The following image shows us how a web server would be able to handle multiple connections using a synchronous event demultiplexer and a single thread:
The previous image helps us understand how concurrency works in a single-threaded application using a synchronous event demultiplexer and non-blocking I/O. We can see
that using only one thread does not impair our ability to run multiple I/O bound tasks concurrently. The tasks are spread over time, instead of being spread across multiple threads. This has the clear advantage of minimizing the total idle time of the thread, as clearly shown in the image. This is not the only reason for choosing this model. To have only a single thread, in fact, also has a beneficial impact on the way programmers approach concurrency in general. Throughout the book, we will see how the absence of in-process race conditions and multiple threads to synchronize, allows us to use much simpler concurrency strategies.
In the next chapter, we will have the opportunity to talk more about the concurrency model of Node.js.
The reactor pattern
We can now introduce the reactor pattern, which is a specialization of the algorithms presented in the previous section. The main idea behind it is to have a handler (which in Node.js is represented by a callback function) associated with each I/O operation, which will be invoked as soon as an event is produced and processed by the event loop. The structure of the reactor pattern is shown in the following image:
This is what happens in an application using the reactor pattern:
1. The application generates a new I/O operation by submitting a request to the Event Demultiplexer. The application also specifies a handler, which will be invoked when the operation completes. Submitting a new request to the Event Demultiplexer is a non-blocking call and it immediately returns control to the application.
2. When a set of I/O operations completes, the Event Demultiplexer pushes the new events into the Event Queue.
3. At this point, the Event Loop iterates over the items of the Event Queue.
4. For each event, the associated handler is invoked.
5. The handler, which is part of the application code, will give back control to the Event Loop when its execution completes (5a). However, new asynchronous operations might be requested during the execution of the handler (5b), causing new operations to be inserted into the Event Demultiplexer (1) before control is given back to the Event Loop.
6. When all the items in the Event Queue are processed, the loop will block again on the Event Demultiplexer, which will then trigger another cycle.
The asynchronous behavior is now clear: the application expresses the interest to access a resource at one point in time (without blocking) and provides a handler, which will then be invoked at another point in time when the operation completes.
A Node.js application will exit automatically when there are no more pending operations in the Event Demultiplexer, and no more events to be processed inside the Event Queue.
We can now define the pattern at the heart of Node.js.
Pattern (reactor): handles I/O by blocking until new events are available from a set of observed resources, and then reacting by dispatching each event to an associated handler.
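From the application's point of view, the pattern translates into the familiar callback style of Node.js; here is a minimal sketch using the core fs module (the file path is just a placeholder):

const fs = require('fs');

// submitting the request is non-blocking: we express interest in the
// resource and register a handler to be invoked when the data is ready
fs.readFile('/path/to/file', (err, data) => {
  // the handler runs later, dispatched by the event loop
  if (err) {
    return console.error(err);
  }
  console.log(data.toString());
});

// this line runs before the file content is printed
console.log('reading file...');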
The non-blocking I/O engine of Node.js: libuv
Each operating system has its own interface for the Event Demultiplexer: epoll on Linux, kqueue on Mac OS X, and the I/O Completion Port (IOCP) API on Windows. Besides that, each I/O operation can behave quite differently depending on the type of resource, even within the same OS. For example, in Unix, regular filesystem files do not support non-blocking operations, so, in order to simulate non-blocking behavior, it is necessary to use a separate thread outside the Event Loop. All these inconsistencies across and within the different operating systems required a higher-level abstraction to be built for the Event Demultiplexer. This is exactly why the Node.js core team created a C library called libuv, with the objective of making Node.js compatible with all the major platforms and normalizing the non-blocking behavior of the different types of resource; libuv today represents the low-level I/O engine of Node.js.
Besides abstracting the underlying system calls, libuv also implements the reactor pattern, thus providing an API for creating event loops, managing the event queue, running asynchronous I/O operations, and queuing other types of tasks.
A great resource to learn more about libuv is the free online book created by Nikhil Marathe, which is available at http://nikhilm.github.io/uvbook/
The recipe for Node.js
The reactor pattern and libuv are the basic building blocks of Node.js, but we need the following three other components to build the full platform:
A set of bindings responsible for wrapping and exposing libuv and other low-level functionality to JavaScript.
V8, the JavaScript engine originally developed by Google for the Chrome browser. This is one of the reasons why Node.js is so fast and efficient. V8 is acclaimed for its revolutionary design, its speed, and for its efficient memory management.
A core JavaScript library (called node-core) that implements the high-level Node.js API.
Finally, this is the recipe of Node.js, and the following image represents its final architecture:
Summary
In this chapter, we have seen how the Node.js platform is based on a few important principles that provide the foundation to build efficient and reusable code. The philosophy and the design choices behind the platform have, in fact, a strong influence on the structure and behavior of every application and module we create. Often, for a developer moving from another technology, these principles might seem unfamiliar, and the usual instinctive reaction is to fight the change by trying to find more familiar patterns inside a world which in reality requires a real shift in mindset.
On one hand, the asynchronous nature of the reactor pattern requires a different programming style made of callbacks and things that happen at a later time, without worrying too much about threads and race conditions. On the other hand, the module pattern and its principles of simplicity and minimalism create interesting new scenarios in terms of reusability, maintenance, and usability.
Finally, besides the obvious technical advantages of being fast, efficient, and based on JavaScript, Node.js is attracting so much interest because of the principles we have just discovered. For many, grasping the essence of this world feels like returning to the origins, to a more humane way of programming in both size and complexity, and that's why developers end up falling in love with Node.js. The introduction of ES2015 makes things even more interesting and opens up new scenarios for embracing all these advantages with an even more expressive syntax.
In the next chapter, we will dive deep into the two most basic asynchronous patterns used in Node.js: the callback pattern and the event emitter. We will also come to understand the difference between synchronous and asynchronous code, and how to avoid writing unpredictable functions.
2
Node.js Essential Patterns
Embracing the asynchronous nature of Node.js is not trivial at all, especially if you are coming from a language such as PHP, where it is unusual to deal with asynchronous code.
With synchronous programming, we are used to imagining code as a series of consecutive computing steps defined to solve a specific problem. Every operation is blocking, which means that it is possible to execute the next operation only when the current one has completed. This approach makes the code easy to understand and debug.
Instead, in asynchronous programming, some operations, such as reading a file or performing a network request, can be executed asynchronously in the background. When an asynchronous operation is started, the next one is executed immediately, even if the previous (asynchronous) operation has not finished yet. The operations pending in the background can complete at any time, and the whole application should be programmed to react in the proper way when an asynchronous operation finishes.
While this non-blocking approach can guarantee superior performance compared to an always-blocking scenario, it provides a paradigm that is hard to picture, and that can get really cumbersome when dealing with more advanced applications that require complex control flows.
Node.js offers a series of tools and design patterns to deal optimally with asynchronous code and it's important to learn how to use them to gain confidence with asynchronous coding and write applications that are both performant and easy to understand and debug.
In this chapter, we will explore two of the most important asynchronous patterns: callbacks and event emitters.
The callback pattern
Another random document with no related content on Scribd:
deux ailes d'infanterie devaient se déployer, et envelopper l'armée de Constantin, déja rompue par la cavalerie. Le prince, qui avait le coup d'œil militaire, comprit le dessein des ennemis à l'ordre de leur bataille. Il place des corps à droite et à gauche pour faire face à l'infanterie et arrêter ses mouvements. Pour lui, il se met au centre en tête de cette redoutable cavalerie. Quand il la voit sur le point de heurter le front de son armée, au lieu de lui résister, il ordonne à ses troupes de s'ouvrir: c'était un torrent qui n'avait de force qu'en ligne droite; le fer dont elle était revêtue ôtait toute souplesse aux hommes et aux chevaux. Mais dès qu'il la voit engagée entre ses escadrons, il la fait enfermer et attaquer de toutes parts, non pas à coups de lances et d'épée, on ne pouvait percer de tels ennemis, mais à grands coups de masses d'armes. On les assommait, on les écrasait sur la selle de leurs chevaux, on les renversait, sans qu'ils pussent ni se mouvoir pour se défendre, ni se relever quand ils étaient abattus. Bientôt ce ne fut plus qu'une horrible confusion d'hommes, de chevaux, d'armes, amoncelés les uns sur les autres. Ceux qui échappèrent à ce massacre voulurent se sauver à Turin avec l'infanterie: mais ils en trouvèrent les portes fermées, et Constantin, qui les poursuivit l'épée dans les reins, acheva de les tailler en pièces au pied des murailles.
Cette victoire, qui ne coûta point de sang au vainqueur, lui ouvrit les portes de Turin. La plupart des autres places entre le Pô et les Alpes lui envoyèrent des députés pour l'assurer de leur soumission: toutes s'empressaient de lui offrir des vivres. Sigonius, sur un passage de saint Jérôme, conjecture que Verceil fit quelque résistance, et que cette ville fut alors presque détruite. Il n'en est point parlé ailleurs. Constantin alla à Milan, et son entrée devint une espèce de triomphe par la joie et les acclamations des habitants, qui ne pouvaient se lasser de le voir et de lui applaudir comme au libérateur de l'Italie.