
In the trenches with Thomas Gleixner, real-time Linux kernel patch set

Jason Perlow, Editorial Director at the Linux Foundation, interviews Thomas Gleixner, Linux Foundation Fellow, CTO of Linutronix GmbH, and project leader of the PREEMPT_RT real-time kernel patch set.

JP: Greetings, Thomas! It’s great to have you here this morning — although for you, it’s getting late in the afternoon in Germany. So PREEMPT_RT, the real-time patch set for the kernel is a fascinating project because it has some very important use-cases that most people who use Linux-based systems may not be aware of. First of all, can you tell me what “Real-Time” truly means? 

TG: Real-Time in the context of operating systems means that the operating system provides mechanisms to guarantee that the associated real-time task processes an event within a specified period of time. Real-Time is often confused with “really fast.” The late Prof. Doug Niehaus explained it this way: “Real-Time is not as fast as possible; it is as fast as specified.”

The specified time constraint is application-dependent. A control loop for a water treatment plant can have comparatively large time constraints measured in seconds or even minutes, while a robotics control loop has time constraints in the range of microseconds. But for both scenarios missing the deadline at which the computation has to be finished can result in malfunction. For some application scenarios, missing the deadline can have fatal consequences.

In the strict sense of Real-Time, the guarantee which is provided by the operating system must be verifiable, e.g., by mathematical proof of the worst-case execution time. In some application areas, especially those related to functional safety (aerospace, medical, automation, automotive, just to name a few), this is a mandatory requirement. But for other scenarios, or scenarios where there is a separate mechanism for providing the safety requirements, the proof of correctness can be more relaxed. Even in the more relaxed case, though, the malfunction of a real-time system can cause substantial damage, which obviously needs to be avoided.

JP: What is the history behind the project? How did it get started?

TG: Real-Time Linux has a history that goes way beyond the actual PREEMPT_RT project.

Linux became a research vehicle very early on. Real-Time researchers set out to transform Linux into a Real-Time Operating system and followed different approaches with more or less success. Still, none of them seriously attempted a fully integrated and perhaps upstream-able variant. In 2004 various parties started an uncoordinated effort to get some key technologies into the Linux kernel on which they wanted to build proper Real-Time support. None of them was complete, and there was a lack of an overall concept. 

Ingo Molnar, working for Red Hat, started to pick up pieces, reshape them, and collect them in a patch series to form the basis of the real-time preemption patch set PREEMPT_RT. At that time, I worked with the late Dr. Doug Niehaus to port a solution we had working, based on the 2.4 Linux kernel, forward to the 2.6 kernel. Our work was both conflicting and complementary, so I teamed up with Ingo quickly to get this into a usable shape. Others, like Steven Rostedt, brought in ideas and experience from other Linux Real-Time research efforts. With a quickly forming loose team of interested developers, we were able to develop a halfway usable Real-Time solution that was fully integrated into the Linux kernel in a short period of time. That was far from a maintainable and production-ready solution. Still, we had laid the groundwork and proven that the concept of making the Linux kernel real-time capable was feasible. The idea and intent of fully integrating this into the mainline Linux kernel over time were there from the very beginning.

JP: Why is it still a separate project from the Mainline kernel today?

TG: To integrate the real-time patches into the Linux kernel, a lot of preparatory work, restructuring, and consolidation of the mainline codebase had to be done first. While many pieces that emerged from the real-time work found their way into the mainline kernel rather quickly due to their isolation, the more intrusive changes that change the Linux kernel’s fundamental behavior needed (and still need) a lot of polishing and careful integration work. 

Naturally, this has to be coordinated with all the other ongoing efforts to adapt the Linux kernel to different use cases, ranging from tiny embedded systems to supercomputers.

This also requires carefully designing the integration so it does not get in the way of other interests or impose roadblocks for the further development of the Linux kernel, which is something the community, and especially Linus Torvalds, cares about deeply.

As long as these remaining patches stay out of the mainline kernel, this is not a problem, because it puts no burden or restriction on the mainline kernel. The responsibility rests with the real-time project, but on the other hand, in this out-of-tree context, there is no restriction against taking shortcuts that would never be acceptable in the upstream kernel.

The real-time patches are fundamentally different from something like a device driver that sits in some corner of the source tree. A device driver does not cause any larger damage when it goes unmaintained and can be easily removed when it reaches its final, bit-rotted state. Conversely, the PREEMPT_RT core technology sits at the heart of the Linux kernel. Long-term maintainability is key, as any problem in that area will affect the Linux user universe as a whole. In contrast, a bit-rotted driver only affects the few people who have a device depending on it.

JP: Traditionally, when I think about RTOS, I think of legacy solutions based on closed systems. Why is it essential we have an open-source alternative to them? 

TG: The RTOS landscape is broad and, in many cases, very specialized. As I mentioned when answering the question of “what is real-time,” certain application scenarios require a fully validated RTOS, usually according to an application-space-specific standard and often mandated by regulatory law. Aside from that, many RTOSes are limited to a specific class of CPU devices that fit into the targeted application space. Many of them come with specialized application programming interfaces which require special tooling and expertise.

The Real-Time Linux project never aimed at these narrow and specialized application spaces. It always was meant to be the solution for 99% of the use cases and to be able to fully leverage the flexibility and scalability of the Linux kernel and the broader FOSS ecosystem so that integrated solutions with mixed-criticality workloads can be handled consistently. 

Developing real-time applications on a real-time enabled Linux kernel is not much different from developing non-real-time applications on Linux, except for the careful selection of system interfaces that can be utilized and programming patterns that should be avoided, but that is true for real-time application programming in general independent of the RTOS. 

The important difference is that the tools and concepts are all the same, and integration into and utilizing the larger FOSS ecosystem comes for free.

The downside of PREEMPT_RT is that it can’t be fully validated, which excludes it from specific application spaces, but there are efforts underway, e.g., the LF ELISA project, to fill that gap. The motivation behind these efforts is that large multiprocessor systems have become a commodity, and the need for more complex real-time systems in various application spaces, e.g., assisted/autonomous driving or robotics, requires a more flexible and scalable RTOS approach than most of the specialized and validated RTOSes can provide.

That’s a long way down the road. Still, there are solutions out there today which utilize external mechanisms to achieve the safety requirements in some of the application spaces while leveraging the full potential of a real-time enabled Linux kernel along with the broad offerings of the wider FOSS ecosystem.

JP: What are examples of products and systems that use the real-time patch set that people depend on regularly?

TG: It’s all over the place now. Industrial automation, control systems, robotics, medical devices, professional audio, automotive, rockets, and telecommunication, just to name a few prominent areas.

JP: Who are the major participants currently developing systems and toolsets with the real-time Linux kernel patch set?  

TG: Listing them all would be equivalent to reciting the “who’s who” of the industry. On the distribution side, there are offerings from, e.g., Red Hat, SUSE, Mentor, and Wind River, which deliver RT to a broad range of customers in different application areas. On the product side, there are firms like Concurrent, National Instruments, Boston Dynamics, SpaceX, and Tesla, just to name a few.

Red Hat and National Instruments are also members of the LF collaborative Real-Time project.

JP: What are the challenges in developing a real-time subsystem or specialized kernel for Linux? Is it any different than how other projects are run for the kernel?

TG: Not really different; the same rules apply. Patches have to be posted, are reviewed, and discussed. The feedback is then incorporated. The loop starts over until everyone agrees on the solution, and the patches get merged into the relevant subsystem tree and finally end up in the mainline kernel.

But as I explained before, it needs a lot of care and effort and, often enough, a large amount of extra work to restructure existing code first to get a particular piece of the patches integrated. The result provides the desired functionality while staying out of the way of other interests or, ideally, providing a benefit for everyone.

The technology’s complexity, which reaches into a broad range of the core kernel code, is obviously challenging, especially combined with the mainline kernel’s rapid rate of change. In areas like drivers or file systems, even larger changes at the related core infrastructure level do not impact ongoing development and integration work too much. But any change to the core infrastructure can break a carefully thought-out integration of the real-time parts into that infrastructure and send us back to the drawing board for a while.

JP:  Which companies have been supporting the effort to get the PREEMPT_RT Linux kernel patches upstream? 

TG: For the past five years, it has been supported by the members of the LF real-time Linux project, currently ARM, BMW, CIP, ELISA, Intel, National Instruments, OSADL, Red Hat, and Texas Instruments. CIP, ELISA, and OSADL are projects or organizations in their own right, with member companies from all over the industry. Former supporters include Google, IBM, and NXP.

My team, the broader Linux real-time community, and I personally are extremely grateful for the support provided by these members.

However, as with other key open source projects heavily used in critical infrastructure, funding always was and still is a difficult challenge. Even though the amount of money required to sustain such low-level but essential plumbing is comparatively small, these projects struggle to find enough sponsors and often lack long-term commitment.

The approach to funding these kinds of projects reminds me of the Mikado Game, which is popular in Europe, where the first player who picks up the stick and disturbs the pile often is the one who loses.

That’s puzzling to me, especially as many companies build key products depending on these technologies and seem to take the availability and sustainability for granted up to the point where such a project fails, or people stop working on it due to lack of funding. Such companies should seriously consider supporting the funding of the Real-Time project.

It’s a lot like the Jenga game, where everyone pulls out as many pieces as they can up until the point where it collapses. We cannot keep taking; we have to give back to these communities putting in the hard work for technologies that companies heavily rely on.

I gave up long ago trying to make sense of that, especially when looking at the insane amounts of money thrown at the over-hyped technology of the day. Even if critical for a large part of the industry, low-level infrastructure lacks the buzzword charm that attracts attention and makes headlines — but it still needs support.

JP:  One of the historical concerns was that Real-Time didn’t have a community associated with it; what has changed in the last five years?  

TG: There is a lively user community, and quite a bit of the activity comes from the LF project members. On the development side itself, we are slowly gaining more people who understand the intricacies of PREEMPT_RT, and also people who look at it from other angles, e.g., analysis and instrumentation. Some areas, like documentation, could still be improved, but that is true of any project.

JP:  What will the Real-Time Stable team be doing once the patches are accepted upstream?

TG: The stable team is currently overseeing the RT variants of the supported mainline stable versions. Once everything is integrated, this will wind down to some extent as the older versions reach EOL. But their expertise will still be required to keep real-time in shape in mainline and in the supported mainline stable kernels.

JP: So once the upstreaming activity is complete, what happens afterward?

TG: Once upstreaming is done, efforts have to be made to enable RT support for specific Linux features currently disabled on real-time enabled kernels. Also, for quite some time, there will be fallout when other things change in the kernel, and there has to be support for kernel developers who run into the constraints of RT, which they did not have to think about before. 

The latter is a crucial point for this effort, because there needs to be a clear longer-term commitment that the people who are deeply familiar with the matter and the concepts are not going to vanish once the mainlining is done. We can’t leave everybody else with the task of wrapping their brains around it in desperation; there cannot be institutional knowledge loss with a system as critical as this.

The lack of such a commitment would be a showstopper for the final step, because we are now at the point where the notable changes are focused on the real-time-only aspects rather than on cleanups, improvements, and features of general value. This, in turn, circles back to the earlier question of funding and industry support, because this final step requires several years of commitment by the companies using the real-time kernel.

There’s not going to be a shortage of things to work on. It’s not going to be as much as the current upstreaming effort, but as the kernel never stops changing, this will be interesting for a long time.

JP: Thank you, Thomas, for your time this morning. It’s been an illuminating discussion.

To get involved with the real-time kernel patch set for Linux, please visit the PREEMPT_RT wiki at The Linux Foundation or email real-time-membership@linuxfoundation.org.

ELISA Project Welcomes Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat to its Global Ecosystem

SAN FRANCISCO – April 19, 2021 – Today, the ELISA (Enabling Linux in Safety Applications) Project, an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems, announced that Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat have joined its global ecosystem.

Linux is used in safety-critical applications across all major industries because it can enable faster time to market for new features and benefits from quality code development processes, which reduces the chance of issues that could result in loss of human life, significant property damage, or environmental damage. Launched in February 2019 by the Linux Foundation, ELISA will work with certification authorities and standardization bodies across industries to document how Linux can be used in safety-critical systems.

“Open source software has become a significant part of the technology strategy to accelerate innovation for companies worldwide,” said Kate Stewart, Vice President of Dependable Embedded Systems at The Linux Foundation. “We want to reduce the barriers to be able to use Linux in safety-critical applications and welcome the collaboration of new members to help build specific use cases for automotive, medical and industrial sectors.”

Milestones

After a little more than two years, ELISA has continued to see momentum in project and technical milestones. Examples include:

  • Successful Workshops: In February, ELISA hosted its 6th workshop with more than 120 registered participants. During the workshop, members and external speakers discussed cybersecurity expectations in the automotive world, code coverage of glibc and Intel’s Linux test robot. Learn more in this blog. The next workshop is scheduled for May 18-20 and is free to attend. Register here.
  • New Ambassador Program: In October 2020, ELISA launched a program of thought leaders with expertise in functional safety and Linux kernel development. These ambassadors are willing to speak at events, write articles and work directly with the community on mentorships or onboarding new contributors. Meet the ambassadors here.
  • Mentorship Opportunities: The Linux Foundation offers a Mentorship Program with projects designed to help developers gain the skills necessary to contribute effectively to open source communities. Most recently, ELISA participated in the Fall 2020 session with a project on code coverage metrics for glibc and a Linux kernel mentorship focused on CodeChecker. These projects support ELISA’s goal of gaining experience with the various static analysis methods and tools available for the Linux kernel. Learn more here.
  • Working Groups: Since launch, the project has created several working groups that collaborate on resources that system integrators can apply to analyze their systems qualitatively and quantitatively. Current groups include an Automotive Working Group, Medical Devices Working Group, Safety Architecture Working Group, Kernel Development Process Working Group and Tool Investigation and Code Improvement Sub-Working Group, each focused on specific activities and goals. Learn more or join a working group here.

“The primary challenge is selecting Linux components and features that can be evaluated for safety and identifying gaps where more work is needed to evaluate safety sufficiently,” said Shuah Khan, Chair of the ELISA Project Technical Steering Committee and Linux Fellow at the Linux Foundation. “We’ve taken on this challenge to make it easier for companies to build and certify Linux-based safety-critical applications by exploring potential methods to enable engineers to answer that question for their specific system.”

Learn more about the goals and technical strategy in this white paper

Growing Ecosystem

After a little more than two years, the ELISA Project has grown by 300%. With new members Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat, the project currently has 20 members that collaborate to define and maintain a standardized set of processes and tools that can be integrated into Linux-based, safety-critical systems seeking safety certification. These new members join BMW Car IT GmbH, Intel, Toyota, ADIT, AISIN AW CO., arm, Elektrobit, Kuka, Linutronix, Mentor, Suzuki, Wind River, Automotive Grade Linux and OTH Regensburg.

“Codethink has been working with ELISA for a few years and we are excited to continue our engagement as a member,” said Shaun Mooney, Division Manager at Codethink. “Open Source Software, particularly Linux, is being used more and more in safety applications and Codethink has been looking at how we can make software trustable for a long time. We’ve been working to understand how we can use complex software and guarantee it will function as we want it to. This problem needs to be tackled collectively and ELISA is a great place to collaborate with experts in both safety and software. We’ve been working with most of the working groups since the start of ELISA and will continue to be active participants, using our expert knowledge of Linux and Open Source to help advance the state of the art for safety.”

“Safety is the most important feature of a self-driving car,” said Huang Chang, co-founder and CTO of Horizon Robotics. “Horizon’s investment into functional safety is one of the most important ones we’ve ever made, and it provides a critical ingredient for automakers to bring self-driving cars to market. The creative safety construction the ELISA project is undertaking complements Horizon’s functional safety endeavor and continued commitment to certifying Linux-based safety-critical systems.”

“Huawei is one of the most important Linux kernel contributors and recently joined the automotive industry as a strategic partner in Asia and Europe,” said Alessandro Biasci, Technical Expert at Huawei. “We are pleased to further advance our mission and participate in ELISA, which will allow us to combine our experience in Linux kernel development and our knowledge in safety and security to bring Linux to safety-critical applications.”

“Edge computing extends enterprise software from the datacenter and cloud to a myriad of operational and embedded technology footprints that interact with the physical world, such as connected vehicles and manufacturing equipment,” said Chris Wright, Chief Technical Officer at Red Hat. “A common open source software platform across these locations simplifies and accelerates solution development, while supporting functional safety’s end goal of reducing the risk of physical injury. Red Hat recognizes the importance of establishing functional safety evidence and certifications for Linux, backed by a rich platform and vibrant ecosystem for safety-related applications. We are excited to bring our twenty-seven years of Linux expertise to the ELISA community’s work.”

For more information about ELISA, visit https://elisa.tech/.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

The post ELISA Project Welcomes Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat to its Global Ecosystem appeared first on Linux Foundation.

File transfer protocols: FTP vs SFTP  

You have both secure and non-secure choices for file transfer, and each can have different advantages in different situations.
Read More at Enable Sysadmin

Building containers by hand: The PID namespace

The PID namespace is an important one when it comes to building isolated environments. Find out why and how to use it.
Read More at Enable Sysadmin

WASI, Bringing WebAssembly Way Beyond Browsers

By Marco Fioretti

WebAssembly (Wasm) is a binary software format that all browsers can run directly, safely and at near-native speeds, on any operating system (OS). Its biggest promise, however, is to eventually work in the same way everywhere, from IoT devices and edge servers, to mobile devices and traditional desktops. This post introduces the main interface that should make this happen. The next post in this series will describe some of the already available, real-world implementations and applications of the same interface.

What is portability, again?

To be safe and portable, software code needs, as a minimum: 

  1. guarantees that users and programs can do only what they actually have the right to do, and can do it without creating problems for other programs or users
  2. standard, platform-independent methods to declare and apply those guarantees

Traditionally, these services are provided by libraries of “system calls” for each language, that is, functions with which a software program can ask its host OS to perform some low-level or sensitive task. When those libraries follow standards like POSIX, any compiler can automatically combine them with the source code to produce a binary file that can run on some combination of OSes and processors.

The next level: BINARY compatibility

System calls only make source code portable across platforms. As useful as they are, they still force developers to generate platform-specific executable files, all too often from more or less different combinations of source code.

WebAssembly instead aims to get to the next level: use any language you want, then compile it once, to produce one binary file that will just run, securely, in any environment that recognizes WebAssembly. 

What Wasm does not need to work outside browsers

Since WebAssembly already “compiles once” for all major browsers, the easiest way to expand its reach may seem to be to create, for every target environment, a full virtual machine (runtime) that provides everything a Wasm module expects from Firefox or Chrome.

Work like that, however, would be really complex and, above all, simply unnecessary, if not impossible, in many cases (e.g., on IoT devices). Besides, there are better ways to secure Wasm modules than dumping them in one-size-fits-all sandboxes as browsers do today.

The solution? A virtual operating system and runtime

Fully portable Wasm modules cannot happen as long as, to give one practical example, accesses to webcams or websites can be written only with system calls that generate platform-dependent machine code.

Consequently, the most practical way to get such modules, from any programming language, seems to be that of the WebAssembly System Interface (WASI) project: write and compile code for only one, obviously virtual, but complete operating system.

On one hand, WASI gives all the developers of Wasm runtimes one single OS to emulate. On the other, WASI gives all programming languages one set of system calls to talk to that same OS.

In this way, even if you loaded it on ten different platforms, a binary Wasm module calling a certain WASI function would still get a different binary object, from the runtime that launched it, every time. But since all those objects would interact with that single Wasm module in exactly the same way, it would not matter!

This approach would also work in the first use case of WebAssembly, that is, with the JavaScript virtual machines inside web browsers. To run Wasm modules that use WASI calls, those machines need only load the JavaScript versions of the corresponding libraries.

This OS-level emulation is also more secure than simple sandboxing. With WASI, any runtime can implement different versions of each system call – with different security privileges – as long as they all follow the specification. Then that runtime could place every instance of every Wasm module it launches into a separate sandbox, containing only the smallest, and least privileged combination of functions that that specific instance really needs.

This “principle of least privilege”, or “capability-based security model”, is everywhere in WASI. A WASI runtime can pass into a sandbox an instance of the “open” system call that is only capable of opening the specific files, or folders, that were pre-selected by the runtime itself. This is a more robust, much more granular control over what programs can do than would be possible with traditional file permissions, or even with chroot systems.

Coding-wise, functions for things like basic management of files, folders, network connections or time are needed by almost any program. Therefore, the corresponding WASI interfaces are designed to be as similar as possible to their POSIX equivalents, and they are all packaged into one “wasi-core” module that every WASI-compliant runtime must contain.

A version of the libc standard C library, rewritten using wasi-core functions, is already available and, according to its developers, already “sufficiently stable and usable for many purposes”.

All the other virtual interfaces that WASI includes, or will include over time, are standardized and packaged as separate modules, without forcing any runtime to support all of them. In the next article, we will see how some of these WASI components are already used today.

The post WASI, Bringing WebAssembly Way Beyond Browsers appeared first on Linux Foundation – Training.

What we learned from our survey about returning to in-person events

Recently, the Linux Foundation Events team sent out a survey to past attendees of all events from 2018 through 2021 to get their feedback on how they feel about virtual events and gauge their thoughts on returning to in-person events. We sent the survey to 69,000 people and received 972 responses. 

The enclosed PDF document, LF-Events-surveyApril2021, summarizes the results of that survey.

Ultimately, the good news here is that a healthy number of people feel comfortable traveling this year for events, especially domestically in the US. The results also show that about a quarter of respondents like virtual events, and that the vast majority of them had attended in-person events before, which is another reason to keep a hybrid format moving forward.

The post What we learned from our survey about returning to in-person events appeared first on Linux Foundation.

How to resize a logical volume with 5 simple LVM commands

It’s easy to add capacity to logical volumes with a few simple commands.
Read More at Enable Sysadmin

Static and dynamic IP address configurations: DHCP deployment

Configure a DHCP server and scope to provide dynamic IP address configurations to your network subnet.
Read More at Enable Sysadmin

Static and dynamic IP address configurations for DHCP

IP address configurations are critical, but what is the difference between static and dynamic addressing, and how does DHCP come into play?
Read More at Enable Sysadmin

Charting the Path to a Successful IT Career

So, you’ve chosen to pursue a career in computer science and information technology – congratulations! Technology careers not only continue to be some of the fastest growing today, but also some of the most lucrative. Unlike many traditional careers, there are multiple paths to becoming a successful IT professional. 

What credentials do I need to start an IT career?

While certain technology careers, such as research and academia, require a computer science degree, most do not. Employers in the tech industry are typically more concerned with ensuring you have the required skills to carry out the responsibilities of a given role. 

What you need is a credential that demonstrates that you possess the practical skills to be successful; independently verifiable certifications are the best way to accomplish this. This is especially true when you are just starting out and do not have prior work experience. 

We recommend the Linux Foundation Certified IT Associate (LFCA) as a starting point. This respected certification demonstrates expertise and skills in fundamental information technology functions, especially in cloud computing. Cloud computing has not traditionally been included in entry-level certifications, but it has become an essential skill regardless of what further specialization you may pursue.

How do I prepare for the LFCA?

The LFCA tests basic knowledge of fundamental IT concepts. It’s good to keep in mind which topics will be covered on the exam so you know how to prepare. The domains tested on the LFCA, and their scoring weight on the exam, are:

  • Linux Fundamentals – 20%
  • System Administration Fundamentals – 20%
  • Cloud Computing Fundamentals – 20%
  • Security Fundamentals – 16%
  • DevOps Fundamentals – 16%
  • Supporting Applications and Developers – 8%

Of course, if you are completely new to the industry, no one expects you to be able to pass this exam without spending some time preparing. Linux Foundation Training & Certification offers a range of free resources that can help. These include free online courses covering the topics on the exam, guides, the exam handbook and more. We recommend taking advantage of these, along with the countless tutorials, video lessons, how-to guides, forums and more available across the internet, to build your entry-level IT knowledge.

I’ve passed the LFCA exam, now what?

Generally, LFCA alone should be sufficient to qualify for many entry-level jobs in the technology industry, such as a junior system administrator, IT support engineer, junior DevOps engineer, and more. It’s not a bad idea to try to jump into the industry at this point and get some experience.

If you’ve already been working in IT for a while, or you want to aim for a higher level position right off the bat, you will want to consider more advanced certifications to help you move up the ladder. Our 2020 Open Source Jobs Report found the majority of hiring managers prioritize candidates with relevant certifications, and 74% are even paying for their own employees to take certification exams, up from 55% only two years earlier, showing how essential these credentials are. 

We’ve developed a roadmap that shows how coupling an LFCA with more advanced certifications can lead to some of the hottest jobs in technology today. Once you have determined your career goal (if you aren’t sure, take our career quiz for inspiration!), this roadmap shows which certifications from across various providers can help you achieve it. 

Download full size version

How many certifications do I really need?

This is a difficult question to answer and really varies depending on the specific job and its roles and responsibilities. No one needs every certification on this roadmap, but you may benefit from holding two or three depending on your goals. Look at job listings, talk to colleagues and others in the industry with more experience, read forums, etc. to learn as much as you can about what has worked for others and what specific jobs or companies may require. 

The most important thing is to set a goal, learn, gain experience, and find ways to demonstrate your abilities. Certifications are one piece of the puzzle and can have a positive impact on your career success when viewed as a component of overall learning and upskilling. 

Want to learn more? See our full certification catalog to dig into what is involved in each Linux Foundation certification, and suggested learning paths to get started!

The post Charting the Path to a Successful IT Career appeared first on Linux Foundation – Training.