Writing Good Javascript: Let's Not Forget About Performance

When it comes to programming, I really like the idea of the Ring of Power in The Lord of the Rings: one language to rule them all. It’s true, each programming language has its advantages and disadvantages, and Javascript started as a quick solution for interactivity in browsers. However, Javascript has evolved and matured a lot over the years, so much so that it’s now in a phase where you can achieve a lot with it. Just like other languages it can’t do everything, but in web environments, the birthplace of Javascript, I truly believe it can become the king.

That being said, evolution has its challenges, especially with a language that was not originally conceived to do what it’s currently capable of. This means some bad habits have been carried over a few generations.

Quite some time ago, we as programmers had to be really efficient when it came to how we used our resources since they were pretty limited. We worried a lot about things like processor cycles, memory consumption and leaks, garbage collection, file size and download size.

Just as an example, a fully functioning OS like Windows XP ran on 128MB of RAM. Today we see apps and frameworks that use 128MB of RAM just to do a "hello world." Sometimes we don’t even know why so much memory is needed, and then we start noticing that some libraries or packages are not as performant as we’d hoped. Some even come with jokes included, like babel-core, which shipped with ASCII art of Guy Fieri just for fun: https://github.com/babel/babel/pull/3641/files

Let’s go back even further and think about the fact that when the entire Super Mario Bros. game was released in 1985, it was only 31 KB in size. 

Fast forward 32 years later and we are writing apps that eat up RAM like dogs eat treats, take gigabytes of space and require gigahertz of processing power. What happened?

Short answer: economics. We live in a world where we need to push code faster and deploy more frequently. Taking the time to see where we can save a couple of bytes of space costs money. Plus, resources are now readily available and cheap, so we’re not as meticulous with our code as our colleagues from decades ago used to be. 

Now, let's focus this article on Javascript, which suffers from the same disease as every other language out there, and on some small tips that will help you write more efficient Javascript code. Most of these tips are just common sense, but they make a huge difference when dealing with thousands or hundreds of thousands of records.


Loop Chaining

One of the nicest patterns we use in Javascript is chaining. We build the results of one function in a way that we can use them to perform another action. To many newcomers of the language, jQuery was the teacher of this concept. The problem is, if you learned it from jQuery, you most likely didn’t learn how to use it properly, and more importantly, when to use it.

Since ES5 and ES6 came into the spotlight and more browsers supported them, a plethora of array methods emerged. ES6’s cleaner syntax makes it painless to chain calls in order to produce our desired results. The problem lies in how we are chaining.

As an example, consider this snippet that iterates through an array of users, filters out anyone younger than 18 years old and then returns the names of the remaining users.

const users = [{
  name: 'Constantine',
  age: 23
}, {
  name: 'Adam',
  age: 14
}, {
  name: 'Jennifer',
  age: 22
}];

const valid = users.filter(({ age }) => age >= 18).map(({ name }) => name);

The code seems totally legit. It uses two ES5 methods, filter and map, plus ES6 arrow functions and object destructuring. It’s well written and easy to read. Looks great!

Chaining iterators is a common error I flag in my code reviews. Filter and map are iterators, and each one runs a loop that cycles n times, depending on how many items the array has. With only three items in the array it’s not an issue, but with thousands or hundreds of thousands we have a performance problem.

If there were 100,000 items in that array and you filtered half of them, there would still be 50,000 items to iterate over again to retrieve their names. So your program will perform 150,000 iterations rather than the 100,000 iterations it actually needed in order to perform the action. 

What would be the correct way of doing this? There are actually two: a single loop, or the reduce method.

// the single loop
const names = [];
users.forEach(({ age, name }) => {
  age >= 18 && names.push(name);
});

// the reduce method
const valid = users.reduce((acc, { age, name }) => {
  return (age >= 18) ? [...acc, name] : acc;
}, []);

The reduce syntax might look less readable, but it does the exact same thing as the single loop, the way we used to do things before ES5 array methods. By the way, I could have written it in a single line, but I split it to make it a bit more readable.
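To make the iteration counts concrete, here is a small sketch that instruments both approaches. The makeUsers helper is hypothetical, just a way to fabricate test data:

```javascript
// Hypothetical helper: build n fake users, alternating adults and minors.
const makeUsers = (n) =>
  Array.from({ length: n }, (_, i) => ({ name: `user${i}`, age: i % 2 ? 15 : 30 }));

const bigList = makeUsers(100000); // 50,000 adults, 50,000 minors

// Chained filter + map: count every loop iteration.
let chained = 0;
const names1 = bigList
  .filter(({ age }) => (chained++, age >= 18))
  .map(({ name }) => (chained++, name));

// Single-pass reduce: count again.
let singlePass = 0;
const names2 = bigList.reduce((acc, { age, name }) => {
  singlePass++;
  if (age >= 18) acc.push(name);
  return acc;
}, []);

console.log(chained);    // 150000 iterations
console.log(singlePass); // 100000 iterations
```

Same output, a third fewer iterations in the single pass.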

Trigger-Happy Arrow Functions

Trigger-happy is a term used to describe the irresponsible use of firearms when a subject pulls the trigger and shoots the gun before clearly identifying the target. The programming equivalent of this happens a lot now in Javascript with Arrow Functions.

One of the questions I usually ask candidates is whether they know the difference between an arrow function and a good old-fashioned function in Javascript. So, if you’re ever interviewed by me for a position at Admios, you’re in luck, because you’ve just found the answer to one question most people flunk.

No, arrow functions are not just anonymous functions with a cleaner, nicer syntax. They’re actually very different from regular Javascript functions in that they do not have a scope (a "this" binding) of their own.

And since they don’t have a scope of their own, you obviously can’t bind a different one to them; they always use the scope of their parent.

Let’s check out an example to notice the difference:

// class with arrow function
function myClass() {
  this.value = 1;
  this.myMethod = () => {
    this.anotherValue = 2;
    return this;
  };
}

const test = new myClass();
test.myMethod(); // myClass {value: 1, anotherValue: 2, myMethod: function}

Since the arrow function does not have a scope of its own, it inherits the scope of the parent, in this case the myClass instance. Now let’s watch what happens when we try to bind a different scope to that same arrow method:

// trying to bind a different scope to the arrow method
const anotherTest = new myClass();
anotherTest.myMethod.call({another: 'scope'}); // myClass {value: 1, anotherValue: 2, myMethod: function}

As you can see, it still pulls the scope from the parent, and the supposedly new scope we passed to call is nowhere to be found. Had myMethod been a good old-fashioned anonymous function, call would happily have replaced its scope with {another: 'scope'}.

This is especially problematic when working on a modular app that needs to change or apply scopes to handlers.

Now don’t get me wrong, I love arrow functions, but the trick is knowing when to use them. So, when should we use arrow functions?

  • If you want the function to be accessible via a public or private interface, you should not use arrow functions. These methods usually need to handle changing scopes.
  • Arrow functions work perfectly with iterators and other fire-and-forget actions. If you would write a lambda for it, an arrow function is the perfect fit.
  • If you have to pass the scope as an argument to the function, then you should not use an arrow function.

So what do arrow functions have to do with performance? Programmers sometimes forget that arrow functions don’t possess a scope of their own. Sometimes the arrow function sits inside another arrow function, and the scope ends up being something completely unexpected. The programmer then writes a contraption to make the wanted scope accessible to the arrow function. These contraptions tend to stick around as references to other objects in use, so garbage collection skips them and in the long run they become a memory leak.

Pulling Full Utility Libraries for a Single Function

Thanks to broadband internet connections, developers tend to care less about build sizes. They just jam the entire app into a single-page bundle that weighs several megabytes. A basic Angular 1 build with all its modules loaded takes about 1 MB for a single, unoptimized Hello World app!

Even with Webpack, Gulp or other build tools compressing and running through the code, most of these libraries are not structured in a way that lets the tools pick out just the function you need, so they end up pulling in the full library.

Therefore, we end up with huge libraries in our build that increase the build size just for the one small function we need from them.

For example, pulling the whole jQuery library just to use the Ajax functions. 

What’s the solution then? Simple: always prefer native. Sure, there are utility libraries like Lodash and Underscore that provide great functions, but many of those are now native.

For example, rather than using Lodash’s each, map, filter, etc., you can use Javascript’s native forEach, map, filter and other methods. And if you really need a function that’s in Lodash but not in vanilla Javascript, you can pull just that function with require('lodash/fn_name'). So let’s say you want the intersection function. You can import it with require('lodash/intersection') and once you build, you won’t pull in the whole library, just the code for that function.
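In fact, many utility functions are just a couple of lines of vanilla Javascript. As a sketch, a simplified stand-in for Lodash’s intersection (ignoring its edge cases) could be:

```javascript
// Simplified sketch of an intersection helper using native filter and includes.
// Lodash's real version handles multiple arrays and SameValueZero equality.
const intersection = (a, b) => a.filter((x) => b.includes(x));

console.log(intersection([1, 2, 3], [2, 3, 4])); // [2, 3]
```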

But stick to vanilla Javascript whenever you can. There’s almost no need for promise libraries now that we have native Promises plus async and await, so Q and Bluebird might not be necessary.

jQuery for DOM traversing? Not needed! We have document.querySelector now.

Toastr popups? For what? We have desktop notifications, which are cooler!

Always go with vanilla Javascript. And if you’re looking for compatibility, you can ensure it with polyfills, which add the native functions you like to use to browsers that don’t support them yet.
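A polyfill is just a conditional shim. As a simplified sketch (real polyfills also handle NaN, fromIndex and other edge cases), one for Array.prototype.includes could look like this:

```javascript
// Only patch the method if the environment lacks it;
// otherwise the native implementation is used untouched.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement) {
    return this.indexOf(searchElement) !== -1;
  };
}

console.log([1, 2, 3].includes(2)); // true
console.log([1, 2, 3].includes(5)); // false
```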

Never Forget: Javascript is single threaded

Javascript is awesome. It’s an event-driven language, meaning you can have one function executing asynchronously without it blocking the rest of the code.

However, developers tend to think this means Javascript can have millions of asynchronous functions executing while the main body of the code keeps running. This is not true at all!

All those asynchronous callbacks are queued up on the same single thread, and each one has to wait its turn to interrupt and run on it.
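You can see the single thread at work with a tiny sketch: a zero-delay timer still has to wait for all synchronous code to release the thread before its callback runs.

```javascript
const order = [];

// Scheduled first, with a 0ms delay...
setTimeout(() => order.push('async callback'), 0);

// ...but the only thread is busy running this synchronous code,
// so the callback cannot fire yet.
order.push('synchronous code');

console.log(order); // ['synchronous code'] — the callback still hasn't run
```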

But fear no more, because although Javascript is single threaded, that doesn’t mean you can’t have more "threads" like other programming languages do.

Behold! The Webworkers, Javascript’s solution to threading! With them, you can have multiple threads. How? Well, you put a function in a Webworker. Basically, the browser runs that code in a separate process, often on a different core of the CPU, so it’s technically another thread. Once that code finishes executing, the results are passed back to the main thread.

This is similar in Node, the backend Javascript runtime, but we don’t have Webworkers there. Instead, we have a cluster mode which allows us to spawn child processes where we can put our code. We can even spread our app across "x" threads easily, where "x" is the number of cores in the CPU, and automatically load balance the code among them.

This is a topic for a different article, so let’s get back to it in the future, shall we?



There you go! Those are just a couple of mistakes we developers usually make in our code that can affect performance. With Javascript, we rely so much on the end user’s resources that we can become completely lazy about writing optimized code. And sometimes we don’t care at all and end up adding a picture of Guy Fieri to the build just for fun.

My best piece of advice is this: when you are programming in Javascript, or any modern language, program as if it were 1985. Limit yourself to a finite amount of resources, like the developers of Super Mario Bros. did. Only then, when you think in terms of a finite amount of resources, can you truly write optimal code that will perform well anywhere, without the need for expensive machines or resources.

Author: Fernando De Vega
LinkedIn: https://www.linkedin.com/in/fernandodvj/

Fernando De Vega

Admios, Av. Gallard, Panamá, Panamá, Panama

DevOps, Telecommunications Engineer, Team Lead, and technology enthusiast. Loves NodeJS and everything related to it.