Deploying Microservices to a Cluster with gRPC and Kubernetes


Although it is true that microservices follow the UNIX philosophy of writing short, compact programs that do one thing and do it well, and that they bring many advantages to an architecture (e.g., continuous deployment, decentralization, scalability, polyglot development, maintainability, robustness, and security), getting thousands of microservices up and running on a cluster, and correctly communicating with each other and the outside world, is challenging. In this talk from Node.js Interactive, Sandeep Dinesh, a Developer Advocate at Google Cloud, describes how you can successfully deploy microservices to a cluster using two technologies that Google developed: Kubernetes and gRPC.

To address these issues, Google first developed Borg and Stubby. Borg was Google's internal cluster scheduler. When Google decided to use containers 10 years ago, containers were a new field, so the company wrote its own tooling. Borg ended up scheduling every single application at Google, from small side projects to Google Search. Stubby, Google's RPC framework, was used for communication between the different services.

However, instead of putting Borg and Stubby on GitHub as open source projects, Google chose to write new frameworks from scratch in the open, with the open source community. The reason is that both Borg and Stubby were terribly written, according to Dinesh, and they were so tied to Google's internal infrastructure as to be unusable by the outside world.

That is how Kubernetes, the successor of Borg, and gRPC, a saner incarnation of Stubby, came to be.

Kubernetes

A common scenario while developing a microservice is to have a Docker container running your code on your local machine. Everything is fine until it is time to put the service into production on a cluster. That's when complications arise: you have to SSH into a machine, run Docker, keep the process alive with nohup, and so on, all of which is fiddly and error-prone. The only thing you have gained, according to Dinesh, is that development has become a little bit easier.

Kubernetes offers a solution: it manages and orchestrates the containers on the cluster for you. You do not have to deal with individual machines anymore. Instead, you interact with the cluster through the Kubernetes API.

It works like this: you dockerize your app and hand it to Kubernetes in what's called a Replication Controller. You tell Kubernetes that you need, say, four instances of your dockerized app running at the same time, and Kubernetes manages everything automatically. You don't have to worry about which machines your apps run on. If one instance of your microservice crashes, Kubernetes spins it back up. If a node in the cluster goes offline, Kubernetes automatically redistributes the work to other nodes.
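As a minimal sketch, a Replication Controller manifest for this setup might look like the following; the name, labels, and image are hypothetical:

```yaml
# Hypothetical ReplicationController: keep four replicas of the app running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-microservice
spec:
  replicas: 4                 # Kubernetes keeps exactly four instances alive
  selector:
    app: my-microservice      # pods carrying this label are managed
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: example/my-microservice:1.0   # hypothetical Docker image
        ports:
        - containerPort: 8080
```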

With pods and containers spinning up on random machines, you need a layer on top that can route traffic to the correct Docker container on the correct machine. That is where Kubernetes services come into play. A Kubernetes service has a static IP address and a DNS host name that route to a dynamic number of containers running on the system. It doesn't matter whether you are sending traffic to one app or a thousand: everything goes through your one service, which distributes it to the containers in your cluster transparently.
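Continuing the sketch above, a service fronting those four pods could look like this (again, the names are made up):

```yaml
# Hypothetical Service: a stable IP and DNS name in front of the pods above.
apiVersion: v1
kind: Service
metadata:
  name: my-microservice       # reachable in-cluster by this DNS name
spec:
  selector:
    app: my-microservice      # route to any pod carrying this label
  ports:
  - port: 80                  # stable port clients connect to
    targetPort: 8080          # container port traffic is forwarded to
```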

Taken together, the dockerized app embedded in its replication controller, along with its service, is what makes up one microservice in Kubernetes. You can run multiple microservices on one cluster, scale individual microservices up or down independently, or roll a microservice over to a new version, and, again, none of it affects the other microservices.
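Assuming the replication controller sketched above, scaling and rolling out a new version are one-liners with kubectl (the rolling-update subcommand operates on replication controllers; the image tag is hypothetical):

```sh
# Scale the microservice from 4 to 10 instances.
kubectl scale rc my-microservice --replicas=10

# Roll the microservice over to a new image, one pod at a time.
kubectl rolling-update my-microservice --image=example/my-microservice:2.0
```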

gRPC

When you have multiple microservices running, communication between them becomes the most important part of your architecture. According to Martin Fowler, the biggest issue in changing a monolith into microservices lies in changing the communication pattern.

Communication between microservices is done with Remote Procedure Calls (RPCs), and Google handles on the order of 10^10 (10 billion) RPCs per second. To help developers manage their RPCs, Google created gRPC.

gRPC supports multiple languages, including Python, C/C++, PHP, Java, Ruby, and, of course, Node.js; it uses Protocol Buffers v3 to encapsulate the data sent from microservice to microservice. Protocol Buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. In many ways it is similar to XML but, according to Google, smaller, faster, and simpler. Technically, Protocol Buffers is an interface definition language (IDL): it lets you define your data once and generate interfaces for any language. It provides a data model for structured requests and responses, and your data is compressed into a wire format, a binary format for quick network transmission.
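For illustration, a proto3 definition for a hypothetical greeting service might look like this (the service and message names are made up):

```proto
// greeter.proto: a hypothetical proto3 service definition.
syntax = "proto3";

package demo;

// Defined once; gRPC generates client and server interfaces
// for Node.js, Python, Java, and every other supported language.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;     // field numbers identify fields in the binary wire format
}

message HelloReply {
  string message = 1;
}
```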

gRPC also uses HTTP/2, which is much faster than HTTP/1.1. HTTP/2 supports multiplexing: it opens a single TCP connection and sends all the requests over it, whereas HTTP/1.1 typically opens a new connection every time it has to send a request, which adds a lot of overhead. HTTP/2 also supports bidirectional streaming, which means you do not have to resort to polling, WebSockets, or Server-Sent Events; you can stream in both directions over the same single TCP connection. Finally, HTTP/2 supports flow control, allowing you to deal with congestion on your network should it occur.
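To make the streaming point concrete, here is a minimal Node.js sketch of a bidirectional-streaming RPC using the @grpc/grpc-js and @grpc/proto-loader packages. It assumes the hypothetical greeter.proto above is extended with a streaming method, rpc Chat (stream HelloRequest) returns (stream HelloReply):

```javascript
// Minimal sketch: bidirectional streaming over one HTTP/2 connection.
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const definition = protoLoader.loadSync('greeter.proto');
const demo = grpc.loadPackageDefinition(definition).demo;

// Server: reply to every message that arrives on the stream.
const server = new grpc.Server();
server.addService(demo.Greeter.service, {
  chat(call) {
    call.on('data', (req) => call.write({ message: `Hello, ${req.name}` }));
    call.on('end', () => call.end());
  },
});

server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), () => {
  // Client: read and write on the same call object, no polling needed.
  const client = new demo.Greeter('localhost:50051', grpc.credentials.createInsecure());
  const call = client.chat();
  call.on('data', (reply) => console.log(reply.message)); // "Hello, world"
  call.write({ name: 'world' });
  call.end();
});
```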

Together, Kubernetes and gRPC provide a comprehensive solution to the complexities involved in deploying a massive number of microservices to a cluster.

Watch the complete presentation below:

https://www.youtube.com/watch?v=xsIwYL-N4vI&list=PLfMzBWSH11xYaaHMalNKqcEurBH8LstB8

If you’re interested in speaking at or attending Node.js Interactive North America 2017 – happening October 4-6 in Vancouver, Canada – please subscribe to the Node.js community newsletter to keep abreast of dates and deadlines.