As more developers turn to a microservices architecture for building web-scale applications, data centers must evolve to meet new networking requirements.
Servers.com’s hosting and cloud service reflects this new approach to software development by offering a uniquely secure and performant combination of public and private networking to its customers, according to Nick Dvas, Servers.com Project Manager.
“Once an application goes beyond a single server, private networking is required,” Dvas said. “E-commerce web sites tend to scale out, and require private networking for this. The same is true for online game developers — they can have enormous data flows in the private network.”
Servers.com’s 1,500 Gbps wide-area network includes a private multi-homed backbone, with connections to major Internet Exchanges (IXs) and Internet Service Providers (ISPs), designed for low latency, a minimal number of hops, and stable round-trip times. Inside its data centers, each server has a 20-Gbps connection to the company’s private network and a 20-Gbps connection to its public network.
With two locations in the United States and Europe, housing more than 2,500 servers at launch and continuously adding stock, Servers.com provides responsive, scalable hosting and cloud services for high-load websites, e-commerce, developers, and data-intensive customers of any kind.
Servers.com is an “all under one roof” service from XBT Holding S.A., combining XBT’s global hosting and network solutions so customers can browse, mix, compare, and choose through a single source. Its services are built from the ground up for next-generation software architectures.
“Early systems for web applications were ‘tightly built’ — monolithic,” says Dvas. “Today, by contrast, software systems tend to be more ‘loosely’ built. A large application consists of a collection of ‘microservices,’ implemented at the infrastructure level as either containers or virtual machines, interacting with each other.”
“Instead of one server scaling up, a customer’s application has dozens of small servers scaling out. And many of these servers exchange data very intensively.”
At the same time, says Dvas, “We are seeing more and more powerful server hardware, capable of running hundreds of virtual machines and containers — and able to use more and more network throughput.” Microservices may run on separate physical servers, in different racks, or even be physically distributed, with applications, databases, storage, and other pieces running at two or more of the company’s international data center locations.
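That pattern can be sketched in a few lines of Python. The tiny service below and its caller are purely illustrative, not Servers.com code; the service name, port, and the private 10.0.0.x address mentioned in the comments are assumptions, and 127.0.0.1 stands in for a private-network IP so the example runs on a single machine:

```python
# Minimal sketch, not Servers.com code: a small "inventory" service and a
# caller exchanging data over a private address. The names, port, and the
# 10.0.0.x address mentioned below are illustrative assumptions;
# 127.0.0.1 is used so the example runs on one machine.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

PRIVATE_ADDR = "127.0.0.1"   # stand-in for a private-network IP such as 10.0.0.5
PORT = 8080

class InventoryService(BaseHTTPRequestHandler):
    """One microservice: answers stock queries for a storefront service."""
    def do_GET(self):
        body = json.dumps({"sku": self.path.strip("/"), "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo output quiet
        pass

if __name__ == "__main__":
    server = HTTPServer((PRIVATE_ADDR, PORT), InventoryService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    time.sleep(0.2)   # give the demo server a moment to start listening

    # A second service (say, the storefront) queries it over the private link.
    conn = HTTPConnection(PRIVATE_ADDR, PORT, timeout=2)
    conn.request("GET", "/widget-123")
    print(conn.getresponse().read().decode())
    server.shutdown()
```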
The network requirements to support these web applications include, according to Dvas (a rough way to spot-check the latency and stability items is sketched after the list):
- “A stable network — we don’t want the internal interactions between different components of the system to be broken.”
- “Low latency, because microservices require fast interactions between the components.”
- “We expect the network to have a high throughput, because we have a high volume of data exchange.”
- “Security is a must for the network, and users should be confident that the information being exchanged, say, between an application and a database, does not leave the customer’s private network and will not be shared with public systems.”
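That spot-check can be as simple as timing round trips between two components. The Python sketch below is illustrative only: HOST and PORT are placeholders for a private-network endpoint, and a local echo server is started so the example is self-contained and runnable anywhere.

```python
# Illustrative only: a crude round-trip-time check between two components.
# HOST and PORT are placeholders for a private-network endpoint; a local
# echo server is started here so the sketch runs on a single machine.
import socket
import statistics
import threading
import time

HOST, PORT = "127.0.0.1", 9000   # stand-ins for a private-network endpoint

def echo_server():
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(64))

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)   # let the demo server bind before measuring

samples = []
for _ in range(20):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2) as sock:
        sock.sendall(b"ping")
        sock.recv(64)
    samples.append((time.perf_counter() - start) * 1000.0)

# Stability matters as much as the median: a low median with large spikes
# still breaks tight service-to-service interactions.
print(f"median RTT: {statistics.median(samples):.2f} ms   "
      f"worst: {max(samples):.2f} ms")
```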
Part of Servers.com’s solution is private networking, between and inside its data centers, in addition to its public network. “A private network for a customer is essential for almost any application,” says Dvas. “For example, connecting the data store to the cache server, for databases replicating with each other, and for internal monitoring, orchestration, and management of various sub-systems.”
Even for a basic website that includes a database and an application, “If you put them both on the same server, not only will they fight over the resources of a single server, but your database will also be vulnerable to attacks because it is on a web-facing computer,” Dvas points out. “And when you decide to put them on different servers, you don’t want data between them going over the public network. First, because your data will be insecure. And second, because data flows between the web app and the database can be much greater than data flows to the outside world — and you don’t want to pay for this usage.”
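That guard can live in the application itself. The following is a minimal, hypothetical sketch, not Servers.com tooling; the addresses and database parameters are assumptions, and the check simply refuses to build a connection string for anything but a private (RFC 1918) address:

```python
# Hypothetical sketch of the idea (not Servers.com's tooling): the application
# builds its database connection string only for private addresses, so query
# and replication traffic never crosses the public network or shows up on a
# bandwidth bill.
import ipaddress

DB_HOST = "10.0.0.12"   # illustrative private-network address of the database server
DB_PORT = 5432          # e.g. PostgreSQL's default port

def private_dsn(host: str, port: int, dbname: str, user: str) -> str:
    """Build a connection string, refusing any host outside a private network."""
    if not ipaddress.ip_address(host).is_private:
        raise ValueError(f"{host} is not a private address; "
                         "database traffic must stay off the public network")
    return f"host={host} port={port} dbname={dbname} user={user}"

print(private_dsn(DB_HOST, DB_PORT, "shop", "app"))   # accepted
# private_dsn("8.8.8.8", DB_PORT, "shop", "app")      # rejected: public address
```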
Servers.com’s private network is isolated at the hardware level from its public network connections, Dvas notes. “For its private network, Servers.com has dedicated switches and equipment. This means that no one from the public networks can access our customers’ private networks. And each of our private networks is isolated on a logical level as well, so they are secure from each other.”
To keep the latency added by physical cable runs to a minimum, “we try to have each customer’s servers as physically close to each other as possible,” notes Dvas.
To ensure network stability and availability, “Both our private and public networks are redundant on every layer, with no single point of failure,” says Dvas.
One way the company is looking to improve its private networking even further, says Dvas, is by providing access at OSI Layer 2 (the data link layer), rather than the current Layer 3 (network layer) access. “We are studying ways to do this,” says Dvas. “It’s a requirement for clustering tools like Xen Cloud Platform, Jelastic Platform-as-Infrastructure, and VMware.”
“Throughput and latency in our private network are definitely better than the market average,” claims Dvas. “Network stability is more important than anything else. You can survive higher latency, but if the network is not stable, it cannot be used.”