ln Command – Linux

Have you ever thought about shortcuts when navigating the Linux filesystem? I am not referring to creating desktop shortcuts, though. I stumbled upon the link command, also called ln, and the concept amazed me. I decided to write about it.

The ln utility creates a new directory entry (linked file) which has the same modes as the original file.  It is useful for maintaining multiple copies of a file in many places at once without using up storage for the copies; instead, a link points to the original copy.  There are two types of links; hard links and symbolic links.  How a link points to a file is one of the differences between a hard and symbolic link.

Does the above quote make sense to you? To some, perhaps not. Note that the above definition is what shows on the man page for ln.

ln creates links between files

The above is my preferred definition. But what is a link?

A link is an entry in your filesystem that connects a file name to the actual data on disk. This implies that with multiple file names, we can have all of them point to a single piece of data on disk using links. Let's illustrate:

# Create a simple file and name it first.txt
echo "A sample file" > first.txt
# Display the contents of the file
cat first.txt

In creating this file, the filesystem wrote the data to disk (the actual bytes). This is nothing special. But another thing also happened! The filesystem also linked this data (now on disk) to the filename first.txt. Confusing? The trick is to notice that the filename first.txt and the actual data A sample file (the actual bytes) are two separate entries in the filesystem. This implies that renaming a file does not alter the actual data on disk.

At this point, it is safe to ask questions. What happens to the data and the link when we delete something? How can we create manual links?

The popular command for deleting a file/directory is the rm command. Basically, deleting a file means deleting one of the links to its data. No wonder after calling cat on the given file name we get the well-known result:
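Let's try it, continuing with the first.txt we created above:

# Remove the file; this removes the link named first.txt
rm first.txt
# Try to read the data through the removed link
cat first.txt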

cat: first.txt: No such file or directory

Similarly, we can also delete a file by unlinking. Using unlink first.txt is the same as rm first.txt.

I hope it makes sense now. Don’t worry if you still don’t get it. Perhaps, learning how to create links will make things clearer. How then do we manually create links? Let’s get into some commands to examine this:

# Create a simple file and name it first.txt
echo "A sample file" > first.txt  
 
# Display the contents of the file
cat first.txt
# Create another (hard) link to the same data
link first.txt second.txt
# Display the contents of the second file
cat second.txt

# Append data to the first file
echo "Some extra data" >> first.txt
# Display contents of first.txt
cat first.txt
# Confirm changes from second.txt
cat second.txt

Is it not amazing?  Both files reflected the changes. Why? Because they both are linked to the actual data on disk where the change was applied. It is clear now that by using the ln or the link command, a link to the actual data on disk was created successfully. We can both smile.

We can only access data on disk if there is a link to it. When files are removed, we are only removing links; the actual data persists on disk until the last link to it is gone.
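If you want to verify this yourself, compare inode numbers; hard links to the same data share one inode (the stat output below assumes GNU coreutils):

# -i prints the inode number; both names should show the same one
ls -i first.txt second.txt
# stat shows the link count (Links: 2 after our hard link)
stat first.txt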

Hmm, still coming up with questions? What if I am working on a partitioned hard drive? Will it work across different partitions? The answer is no. Hard linking only works within the same filesystem.
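If you try it anyway (the mount point below is hypothetical), ln refuses with a cross-device error; the exact wording varies by system:

# Assuming /mnt/usb is on a different filesystem (hypothetical mount point)
ln /mnt/usb/file.txt hardlink.txt
# ln: failed to create hard link ... Invalid cross-device link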

Now the hard stuff. If you have browsed the man page for ln or link, you have probably come across hard links and maybe assumed there are also soft links. Well, it's kind of similar to hard and soft liquor, and you are right. But do soft links really exist? What are they?

It is very relevant to note that what we have talked about so far are hard links. A soft link, preferably called a symbolic link (symlink for short), links to another link instead of linking to the actual data on disk. Whew!! A mouthful, eh? No worries. Let's illustrate with some commands:

# Create a simple file and name it first.txt
echo "A sample file" > first.txt   
# Display the contents of the file
cat first.txt
# Create a symbolic link to first.txt
ln -s first.txt second.txt
# Display the contents of the second file
cat second.txt

You may have noted that symlinks are created with ln's -s flag. One key difference is that removing the file a symlink points to will break the link. Confused? Let's delve into more commands:

# Remove first.txt
rm first.txt
# Display second.txt
cat second.txt

What happened after removing first.txt? The famous result:

cat: second.txt: No such file or directory

Even though second.txt still exists, we should not be confused by the error response. second.txt is a broken symlink. The specialty of symlinks comes into play when we deal with directories. Hard links cannot be used on directories, but soft links can.

ln -s documents/ docs

The above command creates a symlink to the documents directory with the name docs. So when we work with and manipulate files in the docs directory, it's more like we are in the documents directory.
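You can always spot a symlink in a long listing, and readlink prints the target it points to:

# The long listing marks symlinks with an arrow to the target
ls -l docs
# lrwxrwxrwx ... docs -> documents/
readlink docs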

Now the conclusion. I am sure you had lots of fun trying to understand link and ln. I personally use them for creating shortcuts on my filesystem. You know what? When setting up your server blocks with nginx, links will be invaluable. Want a taste? See the snippet below, then go search online!
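As a teaser, here is the common pattern on Debian-based systems (assuming the sites-available/sites-enabled layout nginx ships with there; the site name is hypothetical):

# Enable a site by symlinking its config into sites-enabled
ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite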

Webpack for Beginners — Part 3

And here we are at the third part in the Webpack series for beginners. If you would like to check out the previous parts, you can get them here (part 1) and there (part 2). In this part, we will simply explore plugins.

According to the documentation on Webpack, plugins are the backbone of Webpack. Yes, backbone! This means that Webpack itself is built on the same plugin system that we use in our Webpack configuration.

But what really is a plugin in Webpack? Well, think of plugins as a means to extend Webpack itself with your great ideas. For example, we already know that Webpack is a module bundler and generates bundled chunks in an output file. So let's say we wanted to do some magic with that output file and make it smaller to improve load time. Well, we can simply achieve this by using a plugin. The hidden truth is that there's a plugin already meant for that: UglifyJsPlugin(). For a list of plugins that are already bundled with the Webpack library, check here.

Before we tap into the world of creating plugins, let’s at least go through how to use one. The snippet below illustrates how to use the UglifyJsPlugin().

const webpack = require('webpack'); //to access built-in plugins
const path = require('path');

const config = {
  entry: './path/to/my/entry/file.js',
  output: {
    filename: 'my-first-webpack.bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        use: 'babel-loader'
      }
    ]
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin() // Using the plugin
  ]
};

module.exports = config;

Plugins can simply be consumed (used) by adding the plugin with any configuration that comes with it as part of the list of plugins in the specified webpack.config.js file. So simple, isn’t it?

Some of the famous Webpack plugins include UglifyJsPlugin, HtmlWebpackPlugin (an external plugin, not part of Webpack itself) and EnvironmentPlugin. There are hordes of plugins available. And the good part is that if you feel there's no plugin available for your use case (which is rare), create one!!

Wait! How do I create a plugin for my use case? Remember that I stated earlier that plugins are the backbone of Webpack? The main reason this is possible is that by writing plugins, you are exposed to the Webpack compiler and the compilation process. Hmm, wondering how?

A Webpack plugin is a JavaScript object with an apply property. This apply property is called by the Webpack compiler itself. It's even possible to run child compilers or work in tandem with other loaders. Still confused? Let's create a plugin.

module.exports = class CustomPlugin {
  apply(compiler) {
    console.log(compiler);  // Seeing all that is exposed 
  }
}

This simple plugin can be specified as part of our Webpack configuration.

const webpack = require('webpack'); //to access built-in plugins
const path = require('path');
const CustomPlugin = require('path-to-custom-plugin');
const config = {
  entry: './path/to/my/entry/file.js',
  output: {
    filename: 'my-first-webpack.bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx)$/,
        use: 'babel-loader'
      }
    ]
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin(), // Using the plugin
    new CustomPlugin()   // Using the plugin in the configuration
  ]
};

module.exports = config;

Whew!! All done now. We have configured a plugin called CustomPlugin and added it to the list of plugins in our config file. How useful is this? For now CustomPlugin is useless. But remember that we have all that’s exposed to us in the compiler. We can do whatever we want with the information presented to us in the compiler.

A good strategy, on the other hand, is to pass some plugin-specific configuration to our plugin via the constructor. We can then use it, along with the information presented to us in the compiler, to do our very own magic.

If we are interested in messing around (or having fun) with the compilation process, that is very much possible as well.

module.exports = class CustomPlugin {
  constructor(configOptions) {
    this.config = configOptions;
  }

  apply(compiler) {
    // "emit" fires just before Webpack writes the bundles to disk (Webpack 3 API)
    compiler.plugin('emit', (compilation, callback) => {
      console.log(compilation);
      callback();
    });
  }
}

This is way cooler!! Yes, lots of stuff. This is because the compilation process presents to us the whole dependency graph that Webpack traverses. And everything related to the compilation process is exposed here. Tap in and mess it up!!
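For a taste of what is possible here, below is a minimal sketch (using the same Webpack 3 emit hook as above) of a plugin that writes the list of emitted assets into an extra file in the bundle:

// A minimal sketch: list every emitted asset in an extra filelist.md asset
module.exports = class FileListPlugin {
  apply(compiler) {
    compiler.plugin('emit', (compilation, callback) => {
      // compilation.assets maps every output filename to its source
      var filelist = 'Assets in this build:\n\n';
      for (var filename in compilation.assets) {
        filelist += '- ' + filename + '\n';
      }
      // A new asset only needs source() and size()
      compilation.assets['filelist.md'] = {
        source: function() { return filelist; },
        size: function() { return filelist.length; }
      };
      callback();
    });
  }
};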

Now we simply update our webpack.config.js plugins.


plugins: [
    new webpack.optimize.UglifyJsPlugin(), // Using the plugin
    new CustomPlugin({   // Passing plugin-specific options via the constructor
      // ...plugin config options go here
    })
  ]

Webpack for Beginners — Part 2

Wow. Now we are here! In the previous article of this series, we introduced ourselves to Webpack and what kind of problems it solves. If you missed that one, just check it here.

In this part, we will be setting up Webpack and all the configuration that comes with it. So sit tight! Let's begin by creating a directory for the project that we will be working with.

mkdir learning_webpack
cd learning_webpack

Good. Then let's bring in Webpack as a dev dependency in our package.json while creating an index.html and an index.js file.

// Execute these in learning_webpack directory
npm init -y 
npm install webpack@3.3.0 --save-dev
touch index.html
touch index.js
// index.html
<!doctype html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Learning Webpack</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="index.js"></script>
  </body>
</html>

We have just set up a basic HTML file and referenced an index.js file in it. That's great work. Well done so far!!

In the previous article, we said webpack introduces what is called a Dependency Graph. It does that by making use of the magic require keyword in pulling in dependencies. This can be explained with a simple example as shown below:

// index.js
function HelloComponent(message) {
  this.message = message;
}
HelloComponent.prototype.appendMessage = function() {
  var p = document.createElement('p');
  p.innerText = this.message;
  var main = document.getElementById('app');
  main.appendChild(p);
}
var component = new HelloComponent('Hello world!');
component.appendMessage();

This is pretty basic and simple. There’s not much of a dependency issue here. But let’s assume that we had all messages in another file and we wanted to create other components that depended on this message file. See code below:

cd [/path/to/learning/webpack]
touch messages.js
// messages.js
const messages = {
  hello: 'Hello world!',
  hi: 'Hi everyone!'
}
module.exports = messages;

We have just created a messages file in the same directory as index.js and exported a messages object using CommonJS syntax (this accounts for the presence of module.exports). Without a module system, we would need two scripts in index.html, i.e. index.js and messages.js. That's pretty simple. Yay!!

Next, let’s change the HelloComponent a bit to now depend on the messages:

// index.js
var messages = require('./messages');
function HelloComponent(message) {
  this.message = message;
}
HelloComponent.prototype.appendMessage = function() {
  var p = document.createElement('p');
  p.innerText = this.message;
  var main = document.getElementById('app');
  main.appendChild(p);
}
var component = new HelloComponent(messages.hello);
component.appendMessage();

Note the use of require:

var messages = require('./messages');

Webpack understands the presence of require and is able to pull in the necessary dependency, i.e. messages from messages.js.

Now how does Webpack come into play here? To see it in action, let's execute the following command.

./node_modules/.bin/webpack index.js dist/index.js

If you followed along, you would see the chunks emitted by Webpack and how the processing went. So webpack as an executable command takes two arguments, i.e. an entry file and an output file (don't worry if the output file doesn't exist yet). Well, if you installed Webpack globally, you could also execute the above command as webpack index.js dist/index.js. From executing the command, a new folder called dist is created with a file index.js. Try to inspect that file. It's fun. As a tip, Webpack inserts some boilerplate code before packing your code below it.

At this point, we need to tweak our script in index.html a little. It now looks like this:

<script src="dist/index.js"></script>

The conclusion at this point is that, no matter the dependent scripts, Webpack will resolve them into a simple, single script with everything managed for you. You only then focus on managing your code and Webpack then handles any dependencies. Isn’t that cool?

Now what about other assets, like images? In today's fast-paced computing, Single Page Applications are really raging out there. Raaaaarrr!! HTML now seems to be placed within JavaScript. And that includes CSS too. So what about the images that become dependencies in these files? Hmm… wondering.

Ok, now let's talk about loaders. But what about the question on images? I know. Understanding loaders will help answer that question. Anything that isn't JavaScriptish but becomes a dependency in a JavaScript file can be processed by Webpack with the presence of a loader. So just find style-loader or css-loader if you are depending on some CSS. With images, perhaps find file-loader. So whatever isn't JavaScript but is a dependency in your module, find its loader and shoot it! But how do we use a loader in Webpack?

Our work is getting messier. Let’s put everything Webpack needs to know into a configuration file — webpack.config.js (Well, that’s the default name for the configuration file).

// webpack.config.js
var path = require('path');
module.exports = {
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index.js'
  }
}

Still using CommonJS modules, we export the config object. Well, does any component look confusing? I guess you are wondering where the path module came from? It's part of the default modules that ship with Node.js. I'm just require(ing) it and making good use of it. Now we can just run one command:

webpack

Just that? Yes. Webpack will read instructions from the webpack.config.js file and build the output file. Cool! So now, let's add the loader for images. But first let's append an img in the index.js file.

// index.js (a portion of it gets updated)
var image = require('./img.png'); // Make sure that file exists
HelloComponent.prototype.appendImage = function() {
  var div = document.createElement('div');
  div.innerHTML = '<img src="' + image + '" />';
  var app = document.getElementById('app');
  app.appendChild(div);
};
component.appendImage(); // Call the new method on the existing component

Then we add a loader in webpack.config.js file as follows:

// Execute the following commands
npm install file-loader -D
// webpack.config.js
var path = require('path');
module.exports = {
  entry: './index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'index.js'
  },
  module: {
    rules: [{
      test: /\.png/,
      use: 'file-loader'
    }]
  }
}

The module section accepts a rules array; each rule can apply one or more loaders. Go here to learn more about loaders.
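For instance, a single rule can chain several loaders. A sketch (assuming style-loader and css-loader have also been installed) for handling CSS alongside our images:

// webpack.config.js (module section only)
module: {
  rules: [
    {
      test: /\.png$/,
      use: 'file-loader'
    },
    {
      // Loaders in 'use' run right-to-left: css-loader resolves the CSS,
      // style-loader injects it into the DOM
      test: /\.css$/,
      use: ['style-loader', 'css-loader']
    }
  ]
}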

Ok, so we end Part 2 of this series. We discussed how to set up Webpack. We also discussed how to handle JavaScript dependencies and non-JavaScript dependencies. There are more amazing things Webpack can do. Stay tuned for more in the coming articles.

 

Webpack for Beginners — Part 1

So what is Webpack? You may have come here because you are interested in exactly what Webpack is. Or maybe you thought it was a delicious meal you wanted to try out. Anyway, we will be learning Webpack from the ground up. Buckle your seat belts!

So this is what Wikipedia says about Webpack:

Webpack is an open-source JavaScript module bundler. Webpack takes modules with dependencies and generates static assets representing those modules.

So what is all this? I'm wondering myself. It's clear that Webpack has something to do with modules and bundling. But what is a module? It gets more difficult to appreciate without anything to show. Let's start with HTML 101:

<script src="module1.js"></script>
<script src="module2.js"></script>
<script src="module3.js"></script>
<script src="module4.js"></script>

This is very common to any web developer. But something is wrong. The arrangement of the scripts matters. Also, it gets slow since the browser has to download all of the scripts. Whew!!

But then we upgraded to using something like Gulp, Grunt or other options like them. What do they offer? We could then do something like this:

// build-config.js
var scripts = [
  'module1.js',
  'module2.js',
  'module3.js',
  'module4.js'
].concat().minify().dest('build.js');
// referencing
<script src="build.js"></script>

That’s better because we only make a single call for the script and that’s faster than the previous implementation. But we still need to do something about the order in which we position the scripts in the build-config.js. Another bad thing is that code can only communicate through global variables. That’s weird since polluting the global namespace is a bad practice that leads to a lot of trouble you should be avoiding.

Webpack presents something cool — Dependency Graph. Today we have CommonJS and even ES6 modules. We only build for the things we actually need. See this:

// module1.js
module.exports = {
 sayHi: () => console.log('saying hi')
}
// module2.js
const { sayHi } = require('./module1.js');

In module1.js, we only said we want sayHi to be exposed to the outside (with the module.exports object) and imported it into module2.js using destructuring. Where did the require() method come from? It is what identifies that we are exporting something in module1.js and knows how to pull it in. The browser doesn't understand require() since it's not a global value. Build tools understand it and know how to use it.

With Webpack, we can do something entirely different. It allows you to use require on any static asset. This can be applied to js files (like shown above), images (png, gif, whatever), stylesheets (css, sass, whatever).

// An example with images
<img src={require('../images/food.png')} />

Hmm. That’s getting more complicated. It is natural to think that Webpack works with JavaScript files but how can it work with png files? Webpack has loaders responsible for that.

module: {
  rules: [{
    test: /\.png/,
    loader: 'file-loader'
  }]
}

So what happens is that Webpack scans for require calls referencing assets with the file extension matching the test under rules. I’ll elaborate further on it in a later post.

From the little shown, it is very clear that at any time, some file depends on another or an asset. Webpack calls this dependency. Webpack is always provided an entry file either in command line or in a configuration file. From there, it builds a dependency graph that includes every module the application needs (whether code or non-code). It then packages them into small bundles (chunks) to be loaded by the browser.

One more thing. You may have heard of Webpack Dev Server. This is a small Express server that builds your assets according to your webpack configuration. It is meant for local development. It basically serves static files (something to keep in mind) on a port (can be changed) on your localhost.
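Trying it out takes two commands (a sketch, assuming the same local-install approach used elsewhere in this series; the default port is 8080):

# Install and run the dev server from the project directory
npm install webpack-dev-server --save-dev
./node_modules/.bin/webpack-dev-server --port 8080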

Ok, we are done with explaining what Webpack is. What are the advantages of using it?

  • You won’t deploy with assets missing. This brings some stability into our deployments.
  • You have control on how your assets are processed.
  • Hot module reloading and hot module replacement. (Will be discussed in an upcoming post)

What about the disadvantages? The bad side:

  • Complex to set up.

So you now know what Webpack is. In the next post in this series, we’ll look into setting up Webpack.


Version Control with Git (Part 1)

In modern programming, it's common for developers to work together. Sometimes, we just want to track the 'history' of our work. Developers have tried many approaches to solving these problems. The most common approach is the use of Version Control Systems.

A Version Control System allows developers to work together and see the history of work done. Generally, there are two types of Version Control Systems:

  1. Centralized Version Control Systems (CVCS)
  2. Distributed Version Control Systems (DVCS)

I will be discussing Git, which falls under Distributed Version Control Systems. But to get a clearer picture, let's discuss a common scenario: you and a few programmers have been tasked with developing a website for a company. Among the team, you are mainly proficient with JavaScript, while another collaborator is great with HTML and CSS. The third friend is great with databases, but he's very helpful in other fields too.

Centralized Version Control Systems

With this approach, there’s a central store of the project files. Everyone is supposed to be a collaborator to this central store. A great implementation of this type of version control system is Microsoft’s Team Foundation Version Control (TFVC).

There is one major flaw to this approach though. If the main store goes down, there is no collaboration until it gets back up. Even worse, if the disk gets corrupted, everything is lost. Another problem lies in the fact that a network connection is needed to publish every change made to the work.

It is worth noting that some current implementations have found ways around these problems, though not all.

Distributed Version Control Systems

Just like CVCS, there is a central store but each collaborator also has a mirrored copy of that repository locally. One can focus on the local copy solely and commit all the changes or only a single change to the central store. This means that if the server goes down, any of the collaborators can copy their version onto the server to restore it. An implementation like Git doesn’t rely on the central store so any operation can be performed locally. A great benefit to this is that, you only need network connection when you want to publish changes to the central store.

I'm sure you have been relating the description of the various version control approaches to the scenario earlier. If you did, then you should have realized why DVCSs are the most used approach in software development.

Git

Git is a free DVCS that is used for software development. It is aimed at speed and distributed, non-linear workflows. Git was created by Linus Torvalds (the father of the Linux operating system) in 2005.

Back to the scenario mentioned earlier. You’ll have to create a central Git store for the project. Every collaborator will push final changes there. This could be created at GitHub, BitBucket, etc.

After that, each collaborator will create a local git repository and connect it to the central git store. This can be achieved like so:


git init
git remote add origin {url_to_central_store}
git pull origin master

Line 1 initializes Git in the folder you are using. Make sure you are already in the project folder of interest before initializing Git. Line 2 adds the URL of your central store to your local Git using the 'remote' command. Line 3 pulls the current state of the central store's master branch into your local repository. It is clear that all Git commands are preceded with 'git'.

Using Git requires a command prompt or a Bash shell to work things around. To use a local Git repository effectively, a clear understanding of its structure is needed.
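To confirm the setup above worked, two handy (and safe) commands:

# List the remotes your local repository knows about
git remote -v
# Show the current state of your working directory
git status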

In part 2, I'll explore the Git commands and how to enhance collaboration with them.

Enjoy coding!!

 

 

The Journey To Web Programming

Web Programming! If you aspire to be a web programmer, that's great. But the journey isn't easy. You'll need to brace yourself to face all the hardships and roadblocks ahead. I intend to help you on the way though.

The first and major step to consider is how to start. Starting well means having a promise of a good finish. To start a career in web programming means at least learning HTML and CSS. Many will insist on learning JavaScript too, and they are absolutely right. JavaScript, as a client-side programming language, gives life to your web application.

Let's break the basics down by functionality. It's good to assign roles to the different moving parts. First, HTML represents the structure. CSS designs or styles your web application. JavaScript injects logic into the app.
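To make those roles concrete, here is a tiny (hypothetical) page that uses all three:

<!doctype html>
<html>
  <head>
    <style>
      /* CSS: style the greeting */
      p { color: teal; }
    </style>
  </head>
  <body>
    <!-- HTML: the structure -->
    <p id="greeting"></p>
    <script>
      // JavaScript: the logic
      document.getElementById('greeting').innerText = 'Hello, web!';
    </script>
  </body>
</html>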

Keeping these moving parts and their roles in mind will help you in your journey to be a web programmer.

Other concerns include financial benefits and the availability of jobs. To address those concerns, imagine the total number of websites and applications that you can think of. There are countless of them. This clearly illustrates the booming nature of the industry. A career in web programming means a career for the future. At this stage, hop on the bus!

Now that the moving parts are clarified and career prospects are established, let's discuss the technologies involved. You may have heard of HTML5, CSS3 among others. Of course, these technologies all help in web application development. But not all are needed. Let's break them down based on the moving parts of a web application.

Structure

The structure of a Web Application necessarily has to be HTML (Hypertext Markup Language). Hence any version of HTML is OK, though a more current version means more (cool) features. The most recent version is HTML5.

Style

Though inline styles can be applied in HTML, external stylesheets written in CSS (Cascading Style Sheets) really enforce separation of concerns. Just like HTML, the latest version means more features. The current version is CSS3.

Aside from that, you also need to know about the existence of CSS preprocessors. Currently, there are three popular ones in the market: Sass, LESS and Stylus. A CSS preprocessor is a scripting language that extends CSS. It compiles (at the most basic level, converts) down to CSS. Basically, it's meant to make writing CSS a lot easier than it already is.
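For a flavor of what a preprocessor adds, here is a minimal Sass sketch (variables and nesting are not part of plain CSS):

// styles.scss (Sass)
$brand: #336699;
nav {
  background: $brand;
  a { color: white; }
}

/* The compiled plain CSS: */
nav { background: #336699; }
nav a { color: white; }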

Logic

The brain of the Web Application lies in its JavaScript. Learning JavaScript opens a wide range of possibilities to be applied to a web page. These can range from showing dynamic carousel (slider) images to timer-related actions.

JavaScript also comes in versions, but you typically won't be concerned about these. JavaScript is based on another language called ECMAScript. Its current version is ECMAScript 2015, released in mid-2015. Though not all browsers fully support the features of ECMAScript 2015, it's worth noting that there are transpilers for converting code between current and previous versions. Some of these transpilers include Babel and TypeScript (my favorite).
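As an example of what a transpiler does (hand-transpiled here for illustration):

// ECMAScript 2015
const greet = name => `Hello, ${name}!`;

// Roughly what a transpiler emits for older (ES5) browsers
var greet = function (name) {
  return 'Hello, ' + name + '!';
};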

Now, I think all that's needed has been laid down. With the above information, one has the strength to hop onto the web programming train. If you have no knowledge whatsoever of any of the mentioned technologies, don't worry. I intend to write articles on all of them. Leave a comment below; I may consider your request.

Please enjoy coding!

 

What to know about JavaScript

A computer program is simply a series of instructions intended to tell the computer how to perform a task. Unlike humans, computers only understand 1s and 0s. Machine code and assembly language are low-level programming languages that are closely related to the machine's hardware and architecture.

Alternatively, high-level programming languages such as C, C++, C# or Java allow abstractions to be used, making code easier for humans to write and read. These are compiled to assembly or machine code to be executed.

High-level languages which are translated to machine code at run time are referred to as scripting languages. An example of such a language is JavaScript.

JavaScript is mostly referred to as the language of the Web. Nearly all browsers can run JavaScript. All that is needed to run it is a text editor and a web browser. It is a very flexible and expressive language that can be used to build very powerful applications.
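True to that claim, the smallest possible setup is a single HTML file opened in any browser:

<!doctype html>
<html>
  <body>
    <script>
      // JavaScript runs as the page loads; check the browser console
      console.log('Hello from JavaScript!');
    </script>
  </body>
</html>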

As a scripting language, it is compiled at run time. The most common JavaScript engines, responsible for interpreting programs and running them, can be found in browsers such as Chrome, Internet Explorer or Firefox. Many modern engines use a just-in-time (JIT) compilation process to considerably speed things up. This makes programs run faster.

The World Wide Web (WWW) started as a bunch of pages linked together by hyperlinks. Borrowing elements from other languages including Java, Perl, Scheme, HyperTalk, AWK and Self, Brendan Eich developed a new language for the Netscape browser. It was originally called LiveScript but was later re-branded as JavaScript. The naming has often been a center of controversy, suggesting that JavaScript is a lighter version of Java. Though the two share some syntax, they are unrelated.

In 1996, Microsoft reverse-engineered JavaScript to create JScript. While JavaScript was shipped with Netscape's Navigator browser, JScript was shipped with Internet Explorer version 3. At that time, Microsoft included another scripting language called VBScript with Internet Explorer.

Due to the poor compatibility of these early implementations, Netscape and Sun Microsystems decided to standardize JavaScript with the help of the European Computer Manufacturers Association (ECMA). This led to ECMAScript, the standardized version. Eventually, ECMAScript came to refer to the specification whereas JavaScript was still (and currently is) used to refer to the language itself.

In 2005, sophisticated sites such as Google Maps and Gmail demonstrated that JavaScript was capable of being used to create very powerful applications. Around that time, the term Asynchronous JavaScript and XML (AJAX) was coined by Jesse James Garrett. This technique is used to obtain data from a server in the background and to update only the relevant parts of the web page without refreshing the full page. This enabled more user interactivity. As a result, JavaScript gained popularity.

In 2008, engineers at Google developed the V8 engine to run within Chrome. It was significantly faster than earlier engines. Many vendors responded by increasing their JavaScript engines’ speed. Today, the pace of improvement with respect to JavaScript is on the rise with many modern browsers running JavaScript significantly faster.

Today, a big growth area for JavaScript is Single Page Applications. These applications run in the browser and rely heavily on JavaScript. An example of such a framework is AngularJS. Also, as the graphical capabilities of browsers improve, the dawn of HTML5 used together with JavaScript is just awakening. JavaScript can also be used to develop browser extensions. It is even used as the scripting language for many non web-related applications.

JavaScript has a bright and nice future. As the web platform is evolving, JavaScript will remain a central part of its history and future.