Pitfalls to Avoid When Implementing Node.js and Containers


The use of containers with Node.js is on the rise, as the two technologies are a good match for effectively developing and deploying microservice architectures. In a recent survey from the Node.js Foundation, 45 percent of the developers who responded said they were using Node.js with containers.

As more enterprises and startups alike look to implement these two technologies together, there are key questions that they need to ask before they begin their process and common pitfalls they want to avoid.

In advance of Node.js Interactive, to be held Nov. 29 through Dec. 2 in Austin, we talked with Ross Kukulinski, Product Manager at NodeSource, about common pitfalls when implementing Node.js with containers, how to avoid them, and what the future holds for both of these technologies.

Linux.com: Why is Node.js a good technology to use within next-generation architectures?

Ross Kukulinski: Node.js is an excellent technology to use within next-generation applications because it enables technical innovation through rapid development, microservice architectures, and flexible horizontal scaling.

Cloud computing and cloud-native applications have accelerated through the use of open source software, and Node.js is particularly well suited to thrive in this environment thanks to the extensive open-source npm package ecosystem, which lets developers build complex applications quickly.

From a containerization standpoint, Node.js and containers both excel in three key areas: performance, packaging, and scalability.

Node.js is a low-overhead, highly performant web application development platform that can handle large-scale traffic with ease. Developers enjoy building with it, and they can own the entire application deployment lifecycle when they are supported by DevOps methodologies such as continuous integration and continuous deployment.

For packaging, Node.js has a dependency manifest definition (package.json) that leverages the extensive module ecosystem to snap together functional building blocks. Similarly, containers have a build-once-run-anywhere nature that is defined by an explicit definition file (Dockerfile). Pairing these two together helps to eliminate the “it runs on my machine, so it’s not my fault that it doesn’t work in production” problem.
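To make that pairing concrete, a minimal dependency manifest might look like the following sketch (the service name, module, and versions are purely illustrative):

```
{
  "name": "my-service",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.14.0"
  },
  "scripts": {
    "start": "node server.js"
  }
}
```

A Dockerfile for the same service can then install exactly what the manifest declares, which is what makes the build reproducible anywhere (a sketch of such a Dockerfile appears in the next answer).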

Finally, and perhaps most importantly, Node.js and containers can handle an impressive request load through the use of horizontal scaling. Both scale at the process level and are fast-to-boot, which means that operations teams can automatically scale up/down applications independently to handle today’s dynamic workloads.
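In practice, that kind of scaling is often a one-line operation. With Kubernetes, for example (the deployment name and replica counts here are made up):

```
# Scale a Node.js deployment out manually to absorb a traffic spike
kubectl scale deployment my-node-app --replicas=8

# Or let Kubernetes adjust the replica count automatically based on CPU load
kubectl autoscale deployment my-node-app --min=2 --max=10 --cpu-percent=80
```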

Linux.com: What are some common pitfalls that users experience when getting started with Node.js and Docker and Kubernetes?

Ross Kukulinski: By far, the most common pitfall I see is people abusing containers by treating them like virtual machines. I routinely see teams whose Node.js Dockerfiles have the kitchen sink installed: ubuntu, nginx, pm2, nodejs, monit, supervisord, redis, etc., which causes numerous problems. For starters, it results in huge container image sizes, often over a gigabyte when they should be roughly 50-200MB. Large image sizes translate to slow deploys and frustrated developers.
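For contrast, a single-purpose Node.js image built from a slim official base usually lands in that 50-200MB range. A minimal sketch, assuming an app with a server.js entry point (the base image tag and port are illustrative):

```
# Single-purpose Node.js image: one process, slim base, no extras
FROM node:6-alpine              # small official base instead of a full OS
WORKDIR /usr/src/app
COPY package.json .             # copy the manifest first to cache npm installs
RUN npm install --production    # install only runtime dependencies
COPY . .
EXPOSE 3000                     # port the hypothetical app listens on
CMD ["node", "server.js"]       # run node directly; no process manager
```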

In addition, these kitchen-sink containers facilitate anti-patterns that can cause problems down the road. A prime example is running a process manager (e.g., supervisord, pm2) inside a container.

If your Node.js application crashes, you want it to restart automatically. On traditional Linux systems, this is done with a process manager, and a process manager running inside a container will indeed restart your Node.js application if it crashes. The problem is that the container runtime (e.g., Docker) has no visibility into the internal process manager, so it does not know that your application is crashing or having problems.

When your team inspects the system to see what’s running, by using docker ps or kubectl get pods, for example, the container runtime will report that everything is up and running when in fact your application is crashing.
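The usual fix is to make the Node.js process the container's main process, as in the Dockerfile sketch above, and delegate restarts to the runtime. With plain Docker, for instance (the image name is hypothetical):

```
# Because node runs as the container's main process, the runtime sees
# crashes directly and can restart the container itself
docker run -d --restart=on-failure my-node-app
```

Orchestrators such as Kubernetes offer the same behavior through their own restart policies.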

Finally, shoving everything into one container defeats one of the important features of containers: scaling at the process or application level. In other words, teams should be able to scale any one process type independently of the others. In our example above, we should be able to scale the nginx proxy/cache separately from the Node.js process depending on where our current performance bottleneck is. One of the underlying premises of cloud-native architectures is to enable flexible horizontal scaling.
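With each process type in its own container, that kind of independent scaling is trivial. In Kubernetes, for example (the deployment names are hypothetical):

```
# Scale the proxy/cache tier and the Node.js tier independently
kubectl scale deployment nginx-proxy --replicas=6
kubectl scale deployment node-api --replicas=2
```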

Linux.com: How best can you avoid these pitfalls?

Ross Kukulinski: Before starting down the containerization path, be sure you understand what your business, technology, and process goals are. You should also be thinking about what comes after you containerize your applications.

A Docker image is just the first step — how do you run, manage, secure, and scale your containers? Is your release process automated with continuous integration and/or continuous deployment? These are all questions that you need to be thinking about while you’re working through the containerization process.

From an organizational point of view, I would encourage management and decision makers to look beyond just “containerizing-all-the-things” and take a holistic approach to their software development, QA, release processes, and culture.

For developers and operations teams, remember that containers are NOT virtual machines. If you’re looking for best practices for containerizing Node.js, I highly recommend reviewing these resources:

Linux.com: What do you think is in store for the future of containers and Node.js? Any new interesting tech on the horizon that you think will further help these technologies?

Ross Kukulinski: I think we’ll continue to see healthy competition and increased feature parity among the major container providers. While they certainly are competing for market share, each of the major container technology ecosystems (Docker, Kubernetes, Nomad, and Mesos) has a core focus. For example, Docker has focused heavily on the developer story, while Kubernetes has nailed the production-grade deployment and scaling aspects. To that end, I think it’s important for businesses looking to adopt these technologies to find the right tool for them.

In terms of Node.js, I think we’ll continue to see increased adoption of containerized Node.js, especially as more and more companies embrace release patterns that let them deliver software more quickly and efficiently at scale. Node.js as a development platform enables rapid, iterative development and great scalability while also leveraging the most popular programming language in the world: JavaScript. I do think we will see an increasing number of polyglot architectures, so expect to see Node.js paired with languages like Go to deliver a comprehensive tool set.

While I’m always experimenting with new technologies and tracking industry trends, I think the one I’m most intrigued by is the so-called “Serverless” paradigm. I’ve certainly heard plenty of horror-stories, especially relating to poor developer workflows, debugging tools, and monitoring systems. As this tooling ecosystem improves, however, I expect we’ll see Node.js used increasingly often in Serverless deployments for certain technological needs.

Where companies will get into trouble, however, is if they go all-in on Serverless. As with most things, Serverless will not be a silver bullet that solves all of our problems.

View the full schedule to learn more about this marquee event for Node.js developers, companies that rely on Node.js, and vendors. Or register now for Node.js Interactive.