KubeCon, November 17, 2020 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Servo web engine. Servo is an open source, high-performance browser engine designed for both application and embedded use and is written in the Rust programming language, bringing lightning-fast performance and memory safety to browser internals. Industry support for this move is coming from Futurewei, Let’s Encrypt, Mozilla, Samsung, and Three.js, among others.
“The Linux Foundation’s track record for hosting and supporting the world’s most ubiquitous open source technologies makes it the natural home for growing the Servo community and increasing its platform support,” said Alan Jeffrey, Technical Chair of the Servo project. “There’s a lot of development work and opportunities for our Servo Technical Steering Committee to consider, and we know this cross-industry open source collaboration model will enable us to accelerate the highest priorities for web developers.”
The Linux Foundation is home to many of the world’s most important open source projects, and also to many of the top open source experts. Our instructor-led training courses are taught by hands-on practitioners who have used, built, and contributed to these projects for years. Instructor-led courses differ from eLearning courses in that they take place at a specific time and are led by a teacher in real time. The courses typically involve 3-4 full, consecutive days of instructional and lab time, meaning you can complete the training quickly and in a highly structured format. Having a live instructor also means you have the opportunity to ask questions and interact in real time.
To increase access to this training, through November 24 all instructor-led training courses are discounted by 30-50%!
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud-native software, today announced the Certified Kubernetes Security Specialist (CKS), previously announced to be in development in July, is now generally available.
CKS is a two-hour, performance-based certification exam that provides assurance that a certificant has the skills, knowledge, and competence on a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment, and runtime. The exam is taken remotely with a live proctor monitoring via webcam and screen sharing. Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS. The certification remains valid for two years from the date it is awarded.
“The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”
James Lewis and Martin Fowler (2014) [6]
Introduction
It is expected that in 2020 the global cloud microservices market will grow at a rate of 22.5%, with the US market projected to maintain a growth rate of 27.4% [5]. The trend is for developers to move away from locally hosted applications and shift to the cloud, which will help businesses minimize downtime, optimize resources, and reduce infrastructure costs. Experts also predict that by 2022, 90% of all applications will be developed using a microservices architecture [5]. This article will help you learn what microservices are and how companies are using them today.
What are microservices?
Microservices have been widely adopted around the world. But what are they? Microservices are an architectural pattern in which an application is composed of many small, interconnected services. They are based on the single responsibility principle, which according to Robert C. Martin is “gathering things that change for the same reason, and separate those things that change for different reasons” [2]. The microservices architecture also emphasizes loosely coupled services that can be developed, deployed, and maintained independently [2]. A minimal sketch of such a service is shown below.
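To make the idea concrete, here is a hedged sketch of a single-responsibility microservice. It is written in Go purely for illustration; the service name (“orders”), port, and data shape are assumptions, not details taken from the cited sources. The point is that the service owns one business capability and exposes it over a lightweight HTTP interface, so it can be developed and deployed independently.

```go
// Hypothetical "orders" microservice: it owns a single business capability
// (listing orders) and exposes it over plain HTTP, so it can be developed,
// deployed, and scaled independently of other services.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		// In a real service this data would come from the service's own store.
		orders := []Order{{ID: "o-1", Amount: 42.50}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(orders)
	})
	log.Println("orders service listening on :8081")
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```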
Moving away from monolithic architectures
Microservices are often compared to traditional monolithic software architecture. In a monolithic architecture, the software is designed to be self-contained, i.e., the program’s components are interconnected and interdependent rather than loosely coupled. In such a tightly coupled (monolithic) architecture, each component and its associated components must be present for the code to be executed or compiled [7]. Additionally, if any component needs to be updated, the whole application has to be rebuilt and redeployed.
That’s not the case for applications using the microservices architecture. Since each module is independent, it can be changed without affecting other parts of the program, reducing the risk that a change made to one component will create unanticipated changes in other components.
Companies can run into trouble when a monolithic architecture cannot scale, is difficult to upgrade, or is too complex and costly to maintain [4]. Breaking a complex task down into smaller components that work independently of each other is the solution to this problem.
How developers around the world build their microservices
Microservices are well known for improving scalability and performance. However, are those the main reasons developers around the world build their microservices? The State of Microservices 2020 research project [1] examined how developers worldwide build their microservices and what they think about them. The report was created with the help of 660 microservices experts from Europe, North America, Central and South America, the Middle East, South-East Asia, Australia, and New Zealand. The table below presents their average ratings on questions related to the maturity of microservices [1].
Category | Average rating (out of 5)
Setting up a new project | 3.8
Maintenance and debugging | 3.4
Efficiency of work | 3.9
Solving scalability issues | 4.3
Solving performance issues | 3.9
Teamwork | 3.9
As the table shows, most experts are happy with microservices for solving scalability issues. In contrast, maintenance and debugging seem to be a challenge for them.
Regarding the leading technologies in their architectures, most experts reported using JavaScript/TypeScript (almost two-thirds of microservices are built on those languages), followed by Java in second place.
Although there are plenty of options for deploying microservices, most experts use Amazon Web Services (49%), followed by their own servers. Additionally, 62% prefer AWS Lambda as a serverless solution.
Most of the experts’ microservices communicate over HTTP, followed by events and gRPC. For message brokers, RabbitMQ is the most used, followed by Kafka and Redis. A sketch of a synchronous HTTP call between two services follows below.
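To illustrate the most common of these patterns, here is a hedged sketch, again in Go and with invented service names, URLs, and payload shapes, of a “billing” service calling an “orders” service synchronously over HTTP. In an event-driven design this request would instead be replaced by publishing a message to a broker such as RabbitMQ or Kafka, or by a gRPC call for typed RPC.

```go
// Hypothetical "billing" service calling the "orders" service over HTTP.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

// fetchOrders performs a synchronous HTTP call to the orders service.
func fetchOrders(baseURL string) ([]Order, error) {
	client := &http.Client{Timeout: 2 * time.Second} // fail fast if the peer is down
	resp, err := client.Get(baseURL + "/orders")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("orders service returned %d", resp.StatusCode)
	}
	var orders []Order
	if err := json.NewDecoder(resp.Body).Decode(&orders); err != nil {
		return nil, err
	}
	return orders, nil
}

func main() {
	// Assumed address of the orders service from the earlier sketch.
	orders, err := fetchOrders("http://localhost:8081")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("billing received %d orders", len(orders))
}
```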
Most respondents also use continuous integration (CI) for their microservices: 87% use CI solutions such as GitLab CI, Jenkins, or GitHub Actions.
Logging was the most popular debugging solution, used by 86% of the respondents, and 27% of the respondents use only logs. A sketch of request-scoped logging is shown below.
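Because logs are the debugging tool most respondents rely on, a common practice is to attach a correlation ID to each request so that a single request can be traced across several services’ logs. The following Go sketch illustrates that practice; it is not something prescribed by the report, and the header name and log format are assumptions.

```go
// Hypothetical middleware that attaches a correlation ID to each request and
// writes structured log lines, so one request can be followed across the logs
// of several microservices.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

// withRequestLogging reuses an incoming X-Request-ID header (an assumed
// convention) or generates a new ID, then logs method, path, and ID.
func withRequestLogging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-ID")
		if id == "" {
			buf := make([]byte, 8)
			rand.Read(buf)
			id = hex.EncodeToString(buf)
		}
		w.Header().Set("X-Request-ID", id)
		log.Printf(`{"request_id":%q,"method":%q,"path":%q}`, id, r.Method, r.URL.Path)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withRequestLogging(mux)))
}
```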
Finally, most respondents think that the microservice architecture will become either the standard for more complex systems or the standard for backend development.
Successful use cases of Microservices
Many companies have changed from a monolithic architecture to microservices.
Amazon
In 2001, development delays, coding challenges, and service interdependencies kept Amazon from meeting the scalability requirements of its growing user base. Faced with the need to refactor its monolithic architecture from scratch, Amazon broke its monolithic applications into small, independent, service-specific applications [3][9].
This move to microservices came years before the term came into fashion, and it led Amazon to develop several solutions to support microservices architectures, such as Amazon Web Services (AWS). With its rapid growth and adaptation to microservices, Amazon became the most valuable company globally, valued by market cap at $1.433 trillion on July 1st, 2020 [8].
Netflix
Netflix started its movie-streaming service in 2007, and by 2008 it was struggling to scale. A major database corruption meant that for three days it could not ship DVDs to its members [10]. That was the point at which the company realized it needed to move away from single points of failure (e.g., relational databases) toward a more scalable and reliable distributed system in the cloud. In 2009, Netflix started to refactor its monolithic architecture into microservices, beginning by migrating its non-customer-facing, movie-coding platform to run on the cloud as independent microservices [11]. Moving to microservices allowed Netflix to overcome its scaling challenges and service outages, and it also reduced costs: the company pays cloud costs per stream instead of the fixed costs of a data center [10]. Today, Netflix streams approximately 250 million hours of content daily to over 139 million subscribers in 190 countries [11].
Uber
After its launch, Uber struggled to develop and release new features, fix bugs, and rapidly integrate new changes. The company therefore decided to break its application into cloud-based microservices, creating one microservice for each function, such as passenger management and trip management. Moving to microservices brought Uber many benefits, such as clear ownership of each service. This boosted speed and quality, facilitated fast scaling by letting teams focus only on the services they needed to scale, allowed individual services to be updated without disrupting others, and achieved more reliable fault tolerance [11].
It’s all about scalability!
A good example of how to provide scalability is China. With its vast number of inhabitants, China has had to create and test new solutions to solve challenges at scale. Statistics show that China now serves roughly 900 million Internet users [14]. During the 2019 Singles’ Day (the Chinese equivalent of Black Friday), Alibaba’s various shopping platforms peaked at 544,000 transactions per second, and the total amount of data processed on Alibaba Cloud was around 970 petabytes [15]. So what do these user numbers imply for technology?
Many technologies have emerged from the need to address scalability. For example, Tars was created in 2008 by Tencent and contributed to the Linux Foundation in 2018. It has been used at scale and enhanced for more than ten years [12]. Tars is open source, and many organizations contribute significantly to it and extend the framework’s features and value [12]. Tars supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python; it can quickly build systems and automatically generate code, allowing developers to focus on business logic and improving operational efficiency. Tars has been widely used in Tencent’s QQ and WeChat social networks, financial services, edge computing, automotive, video, online games, maps, application market, security, and many other core businesses. In March 2020, the Tars project transitioned into the TARS Foundation, an open source microservices foundation created to support the rapid growth of contributions and membership for a community focused on building an open microservices platform [12].
We at The Linux Foundation (LF) work to develop secure software in our foundations and projects, and we also work to secure the infrastructure we use. But we’re all human, and mistakes can happen.
So if you discover a security vulnerability in something we do, please tell us!
If you find a security vulnerability in the software developed by one of our foundations or projects, please report the vulnerability directly to that foundation or project. For example, Linux kernel security vulnerabilities should be reported to <security@kernel.org> as described in security bugs. If the foundation/project doesn’t state how to report vulnerabilities, please ask them to do so. In many cases, one way to report vulnerabilities is to send an email to <security@DOMAIN>.
If you find a security vulnerability in the Linux Foundation’s infrastructure as a whole, please report it to <security@linuxfoundation.org>, as noted on our contact page.
For example, security researcher Hanno Böck recently alerted us that some of the retired linuxfoundation.org service subdomains were left delegated to some cloud services, making them potentially vulnerable to a subdomain takeover. Once we were alerted to that, the LF IT Ops Team quickly worked to eliminate the problem and will also be working on a way to monitor and alert about such problems in the future. We thank Hanno for alerting us!
We’re also working to make open source software (OSS) more secure in general. The Open Source Security Foundation (OpenSSF) is a broad initiative to secure the OSS that we all depend on. Please check out the OpenSSF if you’re interested in learning more.
David A. Wheeler
Director, Open Source Supply Chain Security, The Linux Foundation