Official Blog of Azilen

The Techie Explorations

Should You Containerize your Front-end Apps?

We have all heard about containers and the benefits they provide. Until now, I was convinced they belonged only in the backend, not the frontend. The main reason was that frameworks like Svelte, ReactJS, Angular, and VueJS compile your code into plain JavaScript files that the browser can execute on its own, without even needing a server. In this article, we will discuss two scenarios that compelled me to use containers on the frontend as well, and then walk through an example of how to containerize a frontend application.

Two Main Reasons

Developer Working on Multiple Projects

In real life, it is common to see a frontend developer working on multiple projects in a day. So, let's imagine you are working on three different projects and you get a brand new laptop. Here are the steps I presume you would follow:

  1. Clone repositories
  2. Install dependencies (node_modules)

Then you try to run the projects and hit an error: the Node version one project requires is not supported by another. So what do you do? Either upgrade or downgrade the version every time you switch projects, or run two Virtual Machines (VMs) with two different versions of Node. But running a gigantic VM takes a toll on your hardware, unless you are using cloud resources.

One Server with Multiple Projects

This scenario is similar to the previous one, with a small difference: the impact is on deployment rather than development. As a developer, you might not care how your application gets deployed; that is usually a worry for your DevOps team, not yours.

Generally, small organizations use on-premise servers to serve all their projects, and on average we can assume there are at least 15-20 of them. So the chances of conflicts between different versions of dependencies, be it Node.js or anything else, are higher here than in the previous scenario. Suppose you are in a hurry to deploy your application for a client demo and stumble upon this version mismatch. What would you do?

  1. Will you uninstall the existing version? Of course not. You cannot do that, or the rest of the applications will break.
  2. Will you set up a new server so that every app on one machine uses the same version of Node? That is the costlier option.

This is where containers shine. Though containerization has been around for a long time, you might not have paid attention to it because you didn't need it until now. Let me show you how to containerize your frontend application.

Step 1: Install Docker Desktop

We will need Docker Desktop to build images and run containers. You can download it from the Docker website. Just make sure you have enough RAM and disk space on your machine.

If you are using a Windows machine, you can run both Windows containers and Linux containers. The reason is that Windows now ships with a mini Linux called WSL (Windows Subsystem for Linux). This is great if you want to run bash scripts, but in our case we will use it to run Linux containers. When you restart your machine after installing Docker Desktop, you might be prompted to install WSL; the prompt links to the WSL documentation, which takes you to the 4th step of the installation guide. To ensure all went well, open your command prompt and type docker. If the result looks as shown below, you are good to go.

Install Docker Desktop

Step 2: Create a Docker Hub ID

Just as we push our code to GitHub so that anyone can download the source, Docker Hub lets us store our images so that anyone can pull an image and create containers from it. Create an account on Docker Hub, then log in with the same ID in Docker Desktop.

Step 3: Create React App

Create React App
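The exact command is only shown in the screenshot, so assuming the standard Create React App tooling, the project can be scaffolded like this (demo-app is an illustrative name):

```shell
# Scaffold a new React project named demo-app (name is illustrative)
npx create-react-app demo-app
cd demo-app
```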

Step 4: Add a Dockerfile

This is the file where you define what goes into your image. This file is NOT the image itself; Docker builds the image from it using the build command.

Add a Dockerfile

Through this file, we are giving instructions to docker about how to construct the image.

  1. Install Node.js in the container. We use the Alpine variant so the image stays lightweight. Change the Node version as per your needs.
  2. Set up the working directory. This switches to the '/app' directory so that subsequent commands execute from that path.
  3. Copy the package.json and package-lock.json files into the directory. Do you see that small dot? It indicates the current directory, which is /app.
  4. Install the project dependencies in silent mode. This downloads the libraries without printing logs; it is optional.
  5. Finally, execute npm start.
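Putting the five instructions above together, the Dockerfile looks roughly like this (the node:16-alpine tag is an assumption; adjust the version to your needs):

```dockerfile
# 1. Lightweight Alpine build of Node (version is illustrative)
FROM node:16-alpine

# 2. Run all subsequent commands from /app
WORKDIR /app

# 3. Copy the dependency manifests into the current directory (/app)
COPY package.json package-lock.json ./

# 4. Install dependencies without verbose logs (optional)
RUN npm install --silent

# 5. Start the development server
CMD ["npm", "start"]
```

Note that the source code itself is not copied here: for the development workflow, it is bind-mounted into the container at run time in Step 7.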

Step 5: Add .dockerignore file

Though this is not mandatory, I recommend adding this file since it speeds up the image build and keeps the image lean by excluding unnecessary code and dependencies. It works just like .gitignore: in other words, it prevents node_modules from being sent to the Docker daemon.


Add .dockerignore file
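A minimal .dockerignore for this setup could look like the following (only node_modules is strictly needed for the point above; the other entries are common additions):

```
node_modules
build
.git
```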

Step 6: Build Image

Build Image
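The command in the screenshot boils down to the following, run from the project root:

```shell
# Build a dev image from the Dockerfile in the current folder
docker build -t demo-app:dev .
```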

Here demo-app:dev is the &lt;image name&gt;:&lt;tag&gt;; you can choose any name and tag. The command must end with a . (dot), which tells Docker that the build context and the Dockerfile are in the current folder. The process takes 2-3 minutes to complete.

Build Image

Step 7: Create Container

What is the difference between an image and a container? If you come from the OOP world, the following analogy will help you.

Class -> Image

Object -> Container

You can create as many containers as you want from a single image.

Create Container
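The command from the screenshot, written out in full (each flag is explained in the breakdown that follows):

```shell
# Run a dev container from the demo-app:dev image
docker run -it --rm \
  -v ${PWD}:/app \
  -v /app/node_modules \
  -p 3001:3000 \
  -e CHOKIDAR_USEPOLLING=true \
  demo-app:dev
```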

Let’s break the above command:

  1. docker run creates and runs a new container instance from the image called demo-app:dev.
  2. -it starts the container in interactive mode. Without it, react-scripts exits right after start-up, which causes the container to exit.
  3. --rm removes the container and its anonymous volumes after the container exits.
  4. -v ${PWD}:/app mounts your code into the container at /app.
  5. -v /app/node_modules creates an anonymous volume where the container's node_modules are stored. Do not confuse this with your local node_modules. A volume is simply data storage for that container, so even if you stop the container and restart it, it reuses the same node_modules rather than reinstalling them on every restart.
  6. -p 3001:3000 sets up port forwarding. Port 3001 of your machine is forwarded to port 3000 of the container. This is important; without it you cannot access the app running in your container.
  7. -e CHOKIDAR_USEPOLLING=true enables hot reloading in our application.

Now, if you open localhost:3001, you will see your app running in development mode. Any changes you make in your code are reflected live; this is possible thanks to the volume mount (point 4) and the polling flag (point 7) above.

You can also run the above container in detached mode: just add the -d flag to the command. When you do, your command prompt is no longer attached to the container's output, unlike in the previous case.
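For example, a detached run might look like this (keeping -it alongside -d so react-scripts does not exit on start-up):

```shell
# Same container as before, but running in the background
docker run -dit --rm \
  -v ${PWD}:/app \
  -v /app/node_modules \
  -p 3001:3000 \
  -e CHOKIDAR_USEPOLLING=true \
  demo-app:dev
```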

If you want to stop your container then just use docker stop <container id>.

What About Production Build?

In the steps above, we saw how to create a container for the development environment, but that won't help if you want a production build. For that, we need to create a separate file, Dockerfile.prod. You can likewise create different files for UAT, Staging, and other environments.

As it is a production build, we need a server to serve the application. Here we are using nginx.

nginx
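The exact contents of Dockerfile.prod are only visible in the screenshot, but a common multi-stage pattern for serving a React production build with nginx looks like this (image tags are illustrative):

```dockerfile
# Stage 1: build the static production bundle
FROM node:16-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --silent
COPY . .
RUN npm run build

# Stage 2: copy the bundle into a small nginx image and serve it
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Because nginx listens on port 80 inside the container, the run command below maps port 3002 of the host to port 80 of the container.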


Now build the image using the below command:

docker build -f Dockerfile.prod -t demo-app:prod .

Once you have the image ready, spin up the container using the following command and access the application at localhost:3002.

docker run -it --rm -p 3002:80 demo-app:prod

Share Your Image

What if you want to share the images you just created with your teammates? Simple: push them to Docker Hub. Log in to hub.docker.com with the ID you created in Step 2.

Create a repository where you can store all your images.

Share Your Image


To push your local image, you first need to name it using one of the below methods:

  • When you build them, using

docker build -t <hub-user>/<repo-name>[:<tag>] .

  • By re-tagging an existing local image

docker tag <existing-image> <hub-user>/<repo-name>[:<tag>]

We will use the second method since we already have an image. Use the command below:

docker tag demo-app:prod amitvchaudhary/reactapps:demo-app-prod

Now you will see two images as shown below.

Push the newly created image.

Format: docker push <hub-user>/<repo-name>:<tag>

Command: docker push amitvchaudhary/reactapps:demo-app-prod

You will be asked to log in if you haven't already. Your image will upload and appear on Docker Hub.


Anyone who wishes to download the above image just needs to pull it using this command: docker pull amitvchaudhary/reactapps:demo-app-prod

Conclusion

Containers don't just make deployment easier; they also let you take advantage of scalability. In the next article, we will see how to use Kubernetes to manage multiple containers.
