Video: New Online Courses for RISC-V

RISC-V is a free and open instruction set architecture (ISA) enabling a new era of processor innovation through open standard collaboration. To help individuals get started with RISC-V, the Linux Foundation and RISC-V International have announced two new free online training courses through edX.org, the online learning platform founded by Harvard and MIT. Stephano Cetola, Technical Program Manager at RISC-V International, sat down with Swapnil Bhartiya, CEO of TFiR and host of video interviews at Linux.com, to talk about the new courses and who can benefit from them.

Have you ever racked a server? 

There are sysadmins who have to rack servers as part of their jobs, while others have never set foot inside a chilly data center.
Read More at Enable Sysadmin

Video: A New Online Course For Node.js

The Linux Foundation and OpenJS Foundation recently released a new online training course targeted at the Node.js community. The course was developed by David Mark Clements, a long-time member of the Node.js community who wears many hats: Principal Architect, technical author, public speaker, and OSS creator specializing in Node.js and browser JavaScript. Clements joined Swapnil Bhartiya, CEO of TFiR and host of video interviews at Linux.com, to talk about the new course and who can benefit from it.

The Linux Foundation Hosts Forum to Share Linux Stories for 30th Anniversary

Linux community to share personal stories of how Linux has impacted their lives, thirty submissions to be highlighted for anniversary.

SAN FRANCISCO, April 22, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, is recognizing World Penguin Day, April 25, by kicking off a global campaign to find out how Linux has most impacted people’s lives. Everyone is invited to share. Thirty submissions will be randomly selected and highlighted in celebration of the 30th Anniversary of Linux, occurring this year.

In addition, The Linux Foundation will adopt thirty penguins, the animal synonymous with Linux, from the Southern African Foundation for the Conservation of Coastal Birds. Each of the thirty randomly chosen submitters will be able to name one of the adopted penguins and will have a certificate and picture of the penguin sent to them to mark this momentous time in Linux’s history as well as Linux’s impact on their own lives.

“Linux has changed the world and created innovation in incredibly diverse ways,” said Angela Brown, SVP & GM Events, The Linux Foundation.  “It has also had a huge impact on individuals’ lives. Open source is fundamentally about community, and we want to hear directly from the community about how Linux has impacted them personally. We can’t wait to hear stories from around the world, and more importantly, we look forward to sharing these stories and hope they inspire more people to join the community for the next 30 years of innovation and beyond.”

Those who would like to submit can do so here. Submissions are being accepted through May 9. The highlighted submissions will be selected in June and showcased in a blog post on events.linuxfoundation.org and on The Linux Foundation’s social media channels. Submitters chosen will also be notified before then by email.

About The Linux Foundation
Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

Follow The Linux Foundation on Twitter, Facebook, and LinkedIn for all the latest news, event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

####

Media Contact:

Kristin O’Connell
The Linux Foundation
koconnell@linuxfoundation.org

Interview with Jory Burson, Community Director, OpenJS Foundation on Open Source Standards

Jason Perlow, Editorial Director of the Linux Foundation, chats with Jory Burson, Community Director at the OpenJS Foundation about open standardization efforts and why it is important for open source projects.

JP: Jory, first of all, thanks for doing this interview. Many of us know you from your work at the OpenJS Foundation, the C2PA, and on open standards, and you’re also involved in many other open community collaborations. Can you tell us a bit about yourself and how you got into working on Open Standards at the LF?

JB: While I’m a relatively new addition to the Linux Foundation, I have been working with the OpenJS Foundation, which is hosted by the Linux Foundation, for probably three years now. As some of your readers may know, OpenJS is home to several very active JavaScript open source projects, and many of those maintainers are really passionate about web standards. Inside that community, we’ve got a core group of about 20 people participating actively at Ecma International on the JavaScript TCs, the W3C, the Unicode Consortium, the IETF, and some other spaces, too. What we wanted to do was create this space where those experts can get together, discuss things in a cross-project sort of way, and then also help onboard new people into this world of web standards — because it can be a very intimidating thing to try and get involved in from the outside.

The Joint Development Foundation is something I’m new to, but as part of that, I’m very excited to get to support the C2PA, which stands for Coalition for Content Provenance and Authenticity; it’s a new effort as well. They’re going to be working on standards related to media provenance and authenticity — to battle fakes and establish trustworthiness in media formats, so I’m very excited to get to support that project as it grows.

JP: When you were at Bocoup, which was a web engineering firm, you worked a lot with international standards organizations such as Ecma and W3C, and you were in a leadership role at the TC53 group, which is JavaScript for embedded systems. What are the challenges that you faced when working with organizations like that?

JB: There are the usual challenges that I think face any international or global team, such as coordination of meeting times and balancing the tension between asynchronously conducting business via email lists, GitHub, and that kind of thing. And then more synchronous forms of communication or work, like Slack and actual in-person meetings. Today, we don’t really worry as much about the in-person meetings, but still, there’s like, this considerable overhead of, you know, “human herding” problems that you have to overcome.

Another challenge is understanding the pace at which the organization you’re operating in really moves. This is a complaint we hear from many people who are new to standardization and are used to developing projects within their product team at a company. Even within an open source project, people are used to things moving perhaps a bit faster and don’t necessarily understand that there are actually built-in checks in the process — in some cases, to ensure that everybody has a chance to review, everybody has an opportunity to comment fairly, and that kind of thing.

Sometimes, because that process is something that’s institutional knowledge, it can be surprising to newcomers in the committees — so they have to learn that there’s this other system that operates at an intentionally different pace. And how does that intersect with your work product? What does that mean for the back timing of your deliverables? That’s another category of things that is “fun” to learn. It makes sense once you’ve experienced it, but maybe running into it for the first time isn’t quite as enjoyable.

JP: Why is it difficult to turn something like a programming language into an internationally accepted standard? In the past, we’ve seen countless flavors of C and Pascal and things like that.

JB: That’s a really good question. I would posit that programming languages are some of the easier types of standards to move forward today because the landscape of what that is and the use cases are fairly clear. Everybody is generally aware of the concept that languages are ideally standardized, and we all agree that this is how this language should work. We’re all going to benefit, and none of us are necessarily, outside of a few cases, trying to build a market in which we’re the dominant player based solely on a language. In my estimation, that tends to be an easier case to bring lots of different stakeholders to the table and get them to agree on how a language should proceed.

In some of the cases you mentioned, as with C and Pascal, those are older languages. And I think that there’s been a shift in how we think about some of those things, where in the past it was much more challenging to put a new language out there and encourage adoption of that language, as well as a much higher bar and a much more difficult task in getting information out to people about how that language worked.

Today with the internet, we have a very easy distribution system for how people can read, participate, and weigh in on a language. So I don’t think we’re going to see quite as many variations in standardized languages, except in some cases where, for example, with JavaScript, TC53 is carving out a subset library of JavaScript, which is optimized for sensors and lower-powered devices. So long story short, it’s a bit easier, in my estimation, to do the language work. Where I think it gets more interesting and difficult is actually in some of the W3C communities where we have standardization activities around specific web API’s you have to make a case for, like, why this feature should actually become part of the platform versus something experimental…

JP: … such as for Augmented Reality APIs or some highly specialized 3D rendering thing. So what are the open standardization efforts you are actively working on at the LF now, at this moment?

JB: At this exact moment, I am working with the OpenJS Foundation standards working group, and we’ve got a couple of fun projects that we’re trying to get off the ground. One is creating a Learning Resource Center for people who want to learn more about what standardization activities really look like, what they mean, some of the terminologies, etc.

For example, many people say that getting involved in open source is overwhelming — it’s daunting because there’s a whole glossary of things you might not understand. Well, it’s the same for standardization work, which has its own entire new glossary of things. So we want to create a learning space for people who think they want to get involved. We’re also building out a feedback system for users, open source maintainers, and content authors. This will help them say, “here’s a piece of feedback I have about this specific proposal that may be in front of a committee right now.”

So those are two things. But as I mentioned earlier, I’m still very new to the Linux Foundation. And I’m excited to see what other awesome standardization activities come into the LF.

JP: Why do you feel that the Linux Foundation now needs to double down its open standards efforts?

JB: One of the things that I’ve learned over the last several years working with different international standards organizations is that they have a very firm command of their process. They understand the benefits of why and how a standard is made, why it should get made, those sorts of things. However, they don’t often have as strong a grasp as they ought to around how the software sausage is really made. And I think the Linux Foundation, with all of its amazing open source projects, is way closer to the average developer and the average software engineer and what their reality is like than some of these international standards developing boards because the SDOs are serving different purposes in this grander vision of ICT interoperability.

On the ground, we have, you know, the person who’s got to build the product to make sure it’s fit for purpose, make sure it’s conformant, and they’ve got to make it work for their customers. In the policy realm, we have these standardization folks who are really good at making sure that the policy fits within a regulatory framework, is fair and equitable and that everybody’s had a chance to bring concerns to the table — which the average developer may not have time to be thinking about privacy or security or whatever it might be. So the Linux Foundation and other open source organizations need to fit more of the role of a bridge-builder between these populations because they need to work together to make useful and interoperable technologies for the long term.

That’s not something that one group can do by themselves. Both groups want to make that happen. And I think it’s really important that the LF demonstrate some leadership here.

JP: Is it not enough to make open software projects and get organizations to use them? Or are open standards something distinctly different and separate from open source software?

JB: I think I’ll start by saying there are some pretty big philosophical differences in how we approach a standard versus an open source project. And I think the average developer is pretty comfortable with the idea that version 1.0 of an open source project may not look anything like version 2.0. There are often going to be cases and examples where there are breaking changes; there’s stuff that they shouldn’t necessarily rely on in perpetuity, and that there’s some sort of flex that they should plan for in that kind of thing.

The average developer has a much stronger sense with a standardization activity that those things should not change, and should not change dramatically in a short period. JavaScript is a good example of a language that changes every year; new features are added. But there aren’t breaking changes; it’s backward compatible. There are some guarantees in terms of a standard platform’s stability versus an open source platform, for example. And further, we’re developing more of a sense of what’s a higher bar, if you will, for open standards activities, including the inclusion of things like test suites, documentation, and the required number of reference implementations.

Those are all concepts that are kind of getting baked into the idea of what makes a good standard. There’s plenty of standards out there that nobody has ever even implemented — people got together and agreed how something should work and then never did anything with it. And that’s not the kind of standard we want to make or the kind of thing we want to promote.

But if we point to examples like JavaScript — here’s this community we have created, here’s the standard, it’s got this great big group of people who all worked on it together openly and equitably. It’s got great documentation, it’s got a test suite that accompanies it — so you can run your implementation against that test suite and see where the dragons lie. And it’s got some references and open source reference implementations that you can view.

Those sorts of things really foster a sense of trustworthiness in a standard — it gives you a sense that it’s something that’s going to stick around for a while, perhaps longer than an open source project, which may be sort of the beginnings of a standardization activity. It may be a reference to implementing a standard, or some folks just sort of throwing spaghetti at a wall and trying to solve a problem together. And I think these are activities that are very complementary with each other. It’s another great reason why other open source projects and organizations should be getting involved and supporting standardization activities.

JP: Do open standardization efforts make a case for open source software even stronger?

JB: I think so — I just see them as so mutually beneficial, right? Because in the case of an open standards activity, you may be working with some folks and trying to express what something would look like in prose — and most of the time, the standard is written in prose and a pseudocode sort of style. It’s not something you can feed into the machine and have it work. So the open source projects, and polyfills, and things of that sort can really help a community of folks working on a problem say, “Aha, I understand what you mean!” “This is how we interpreted this, but it’s producing some unintended behaviors,” or “we see that this will be hard to test, or we see that this creates a security issue.”

It’s a way of putting your ideas down on paper, understanding them together, and having a tool through which everybody can pull and say, “Okay, let’s play with it and see if this is really working for what we need it for.”

Yes, I think they’re very compatible.

JP: Like peanut butter and jelly.

JB: Peanut butter and jelly. Yeah.

JP: I get why large organizations might want things like programming languages, APIs, and communications protocols to be open standards, but what are the practical benefits that average citizens get from establishing open standards?

JB: Open standards really help promote innovation and market activity for all players regardless of size. Now, granted, for the most part, a lot of the activities we’ve been talking about are funded by some bigger players. You know, when you look at the member lists of some of the standards bodies, it’s larger companies like the IBMs, Googles, and Microsofts of the world, the companies that provide a good deal more of the funding. Still, hundreds of small and midsize businesses are also benefiting from standards development.

You mentioned my work at Bocoup earlier — that’s another great example. We were a consulting firm, who heavily benefited from participating in and leveraging open standards to help build tools and software for our customers. So it is a system that I think helps create an equitable market playing field for all the parties. It’s one of those actual examples of rising tides, which lift all boats if we’re doing it in a genuinely open and pro-competitive way. Now, sometimes, that’s not always the case. In other types of standardization areas, that’s not always true. But certainly, in our web platform standards, that’s been the case. And it means that other companies and other content authors can build web applications, websites, services, digital products, that kind of thing. Everybody benefits — whether those people are also Microsoft customers, Google customers, and all that. So it’s an ecosystem.

JP: I think it’s great that we’ve seen companies like Microsoft that used to have much more closed systems embrace open standards over the last ten years or so. If you look at the first Internet Explorer they ever had out — there once were websites that only worked on that browser. Today, the very idea of a website that only works correctly on one company’s web browser is ridiculous, right? We now have open source engines that these browsers use, which embrace open standards and have become much more standardized. So I think that open standards have helped some of these big companies that were more closed become more open. We even see it happen at companies like Apple. They use the Bluetooth protocol to connect to their audio hardware and have adopted technologies such as the USB-C connector when previously they were using weird proprietary connectors. So they, too, understand that open standards are a good thing. So that helps the consumer, right? I can go out and buy a wireless headset, and I know it’ll work because it uses the Bluetooth protocol. Could you imagine if we had nine different types of wireless networking instead of WiFi? You wouldn’t be able to walk into a store and buy something and know that it would work on your network. It would be nuts. Right?

JB: Absolutely. You’re pointing to hardware and the standards for physical products and goods versus digital products and goods in your example. So in using that example, do you want to have seven different adapters for something? No, it causes confusion and frustration in the marketplace. And the market winner is the one who’s going to be able to provide a solution that simplifies things.

That’s kind of the same thing with the web. We want to simplify the solutions for web developers so they’re not having to say, “Okay, what am I going to target? Am I going to target Edge? Am I going to target Safari?”

JP: Or is my web app going to work correctly in six years or even six months from now?

JB: Right!

JP: Besides web standards, are there other types of standardization you are passionate about, either inside the LF or in your spare time?

JB: It’s interesting because I think in my career, I’ve followed this journey of first getting involved because it was intellectually interesting to me. Then it was about getting involved because it made my job easier. Like, how does this help me do business more effectively? How does this help me make my immediate life, my life as a developer, and my life as an internet consumer a little bit nicer?

Beyond that, you start to think about the next order of magnitude: the social impact of our standardization activities. I often think about the role that standards have played in improving the lives of everyday people. For the last 100 years, we have had building standards, fire standards, and safety standards, all of these things. And because they were developed, adopted, and implemented in global policy, they have saved people’s lives.

Apply that to tech — of course, it makes sense that you would have safety standards to prevent the building from burning down — so what is the version of that for technology? What’s the fire safety standard for the web? And how do we actually think about the standards that we make, impacting people and protecting them the way that those other standards did?

One of the things that has changed in the last few years is that the Technical Advisory Group, or “TAG,” at the W3C is considering more of the social impact questions in its work. The TAG is a group of architects elected by the W3C membership to take a horizontal, global view of the technologies that the W3C standardizes. These folks say, “okay, great; you’re proposing that we standardize this API, have you considered it from an accessibility standpoint? Have you considered it from, you know, ease of use, security?” and that sort of thing.

In the last few years, they started looking at it from an ethical standpoint, such as, “what are the questions of privacy?” How might this technology be used for the benefit of the average person? And also, perhaps, how could it potentially be used for evil? And can we prevent that reality?

So one of the things I think is most exciting is the types of technologies that are advancing today that are less about whether we can make X and Y interoperable, and more about whether we can make X and Y interoperable in a safe, ethical, economical, and ecological fashion — the space around NFTs right now is a case in point. And can we make technology beneficial in a way that goes above and beyond “okay, great, we made the website, quick, click here.”

So C2PA, I think, is an excellent example of a standardization activity supported by the LF that could benefit people. One of the big issues of the last several years is the authenticity of the media we consume — whether it was altered or synthesized in some fashion, such as what we see with deepfakes. Now, the C2PA is not going to be able to and would not say if a media file is fake. Rather, it would allow an organization to ensure that the media they capture or publish can be analyzed for tampering between steps in the edit process or before the time an end user consumes it. This would allow organizations and people to have more trust in the media they consume.

JP: If there was one thing you could change about open source and open standards communities, what would it be?

JB: So my M.O. is to try and make these spaces more human-interoperable. With an open source project or open standards project, we’re talking about some kind of technical interoperability problem that we want to solve. But it’s not usually the technical issues that cause delays or serious problems — nine times out of ten, it comes down to some human interoperability problem. Maybe it’s language differences, cultural differences, or expectations — maybe it’s process-oriented. There’s some other thing that may cause that activity to fail to launch.

So if there were something that I could do to change communities, I would love to make sure that everybody has resources for running great and effective meetings. One big problem with some of these activities is that their meetings could be run more effectively and more humanely. I would want humane meetings for everyone.

JP: Humane meetings for everyone! I’m pretty sure you could be elected to public office on that platform. <laughs>. What else do you like to do with your spare time, if you have any?

JB: I love to read; we’ve got a book club at OpenJS that we’re doing, and that’s fun. So, in my spare time, I like to take time to read or do a crossword puzzle or something on paper! I’m so sorry, but I still prefer paper books, paper magazines, and paper newspapers.

JP: Somebody just told me recently that they liked the smell of paper when reading a real book.

JB: I think they’re right; it feels better. I think it has a distinctive smell, but there’s also something very therapeutic and analog about it because I like to disconnect from my digital devices. So you know, doing something soothing like that. I also enjoy painting outdoors and going outside, spending time with my four-year-old, and that kind of thing.

JP: I think we all need to disconnect from the tech sometimes. Jory, thanks for the talk; it’s been great having you here.

Magma Project Accelerates with Establishment of Magma Core Foundation and New Members Under Open Governance

  • Project embraces open governance model, creates new neutral, cross-community Technical Steering Committee open for collaboration
  • Community welcomes 11 new member organizations fostering innovation for 5G mobile packet core
  • Magma Core Foundation’s project roadmap integral to cross-community collaboration enabling end-to-end solutions and blueprints 

SAN FRANCISCO, April 21, 2021 – Today, the Magma project, an open-source software platform that gives network operators an open, flexible and extendable mobile core network solution, announced project and community growth since its recent move to the Linux Foundation to establish a neutral governance framework.

Since moving to the Linux Foundation, Magma has made strides as a community, in partnership with the Open Infrastructure Foundation and OpenAirInterface Software Alliance. The collaboration has formally become the Magma Core Foundation, and project and community growth includes new members, the adoption of a master architecture roadmap, and formation of a neutral governance structure. In addition, the community will host its first Linux Foundation-managed event, Magma Day, co-located with KubeCon + CloudNativeCon Europe 2021. 

“We are pleased to see the Magma Core Foundation continue to evolve as a leader in network innovation,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “Additional collaboration efforts are underway via initiatives like the 5G Super Blueprint which enables communities to build and augment modern networks at scale across 5G, carrier Wi-Fi, private LTE, and more.” 

“The OpenAirInterface Software Alliance continues to participate in the Magma Core Foundation as a major contributor to the developments of the core network,” said Irfan Ghauri,  Director of Operations of the OpenAirInterface Software Alliance. “The seed code for one of the main components of the Magma core (MME) is in fact OAI. The fact that early implementations are making it into production improving users’ lives is in itself a great source of satisfaction for the OSA. The Alliance continues to contribute through its engineers in the entrails of the Magma core and looks forward to increased adoption of the latter, as greater stability and completeness is achieved over time. This is very hard work but the OSA remains committed to delivering the next features including non-stand alone support and others.”

“Since the early days of the Magma project, the OpenInfra Foundation and our global community have aligned with the community’s goals to connect the next billion people,” said Mark Collier,  COO of the Open Infrastructure Foundation and member of the Magma Core Foundation governing board. “We support the development of Magma to form a next-generation mobile networking stack that’s aligned with our mission to create open infrastructure code that runs in production. We’re excited to see more organizations coming on board to collaborate with us as we support that goal.”

The Magma Core Foundation welcomes 11 new member organizations across CSPs, processing, storage, edge, and more. New members 0chain, Aarna Networks, Connect5G, FreedomFi, GenXComm, Helium, Highway9Networks, MotoJeannie, Shoelace Wireless, Vapor IO, and Whitestack  join existing members, including Arm, Deutsche Telekom, and Facebook. The community will work collaboratively on the future of mobile network core solutions, via a new architecture roadmap that’s 3GPP generation and access network (cellular or WiFi) agnostic. It can flexibly support a radio access network with minimal development and deployment effort, and includes three major components: Access Gateway, Orchestrator, and Federation Gateway. 

To help shepherd this work, a new neutral governance structure, including a Technical Steering Committee (TSC), has been formed. Newly-elected TSC members  include Marie Bremner, Raphael Defosseus, Hunter Gatewood, Scott Moeller, and Pravin Shelar. 

Magma Day

Join the Magma Core Project community on May 3 from 2:30 – 6:00 pm CEST for a virtual Magma Day event. Co-located with KubeCon + CloudNativeCon Europe 2021, Magma Day is designed to bring together the CNCF/Kubernetes, LF Networking, and LF Edge communities working across 4G, 5G, and global connectivity. Magma Day will include a comprehensive review of Magma (use cases, roadmap, vision, architecture) and how to build end-to-end telecom solutions using Magma across open source projects. Access the event schedule and register to add Magma Day to your KubeCon + CloudNativeCon Europe 2021 registration today.

Member support for the Magma Core Foundation 

0chain

“0Chain powered Magma enables siloed WiFi connectivity within businesses to form a seamless augmented network to enhance mobile user experience and reduce operator costs”, said Saswata Basu, CEO & Founder of 0Chain, world leader in blockchain and decentralized storage. “In addition, 0Chain dramatically cuts contract negotiation time from years to seconds, and provides dynamic pricing for augmented network providers.” 

Aarna Networks

“We are delighted to join the Magma Core project,” said Amar Kapadia, co-founder and CEO, Aarna Networks. “By integrating Magma Core with ONAP and Kubernetes, we plan to provide communication service providers, government organizations, and enterprises with a fully open source solution that could democratize and accelerate 5G deployments worldwide.”

Connect5G

“Magma is the one and truly pioneering project – providing open, unified and access convergent networking. We at Connect 5G believe that the future of the global communication lies in the open technology stacks. Our mission is to bring the rural and remote areas to the global network,” said Patrik Melander, chairman and CEO, Connect5G, Inc. “For that purpose we selected Magma as the one and truly pioneering project that provides open, unified and access convergent networking layer.”

FreedomFi

“The most common customer objection about any open source project is that it’s not enterprise ready. We’ve heard those objections about Linux and Kubernetes for years prior to those becoming a standard, and we’ve heard a lot of the same about Magma last year,” said Boris Renski, Co-Founder and CEO at FreedomFi. “This year we start seeing customers like Access Parks choosing Magma over a variety of open source and proprietary alternatives to power hundreds of cell sites across the national and state park system. We are quickly approaching the end of the Magma-is-not-enterprise-ready cycle and are excited to collaborate with the Linux Foundation to grow the project ecosystem.”

Helium

“Helium started with a vision to enable wireless networks for IoT powered by the people with a new blockchain-based incentive model,” said Frank Mong, the COO of Helium Inc. “We’re excited to join the Linux Foundation and the Magma ecosystem to continue to make building all wireless networks possible by combining cryptocurrency, open source, and bringing access to more people globally.”

Highway9 Networks

“Magma significantly opens, modernizes and steers the mobility core stack. Highway9 Networks is excited to partner with the Magma community as we deliver innovative 5G-ready edge cloud solutions to the enterprise,” said Allwyn Sequeira, Founder/CEO of Highway9 Networks.

MotoJeannie

“Magma Core provides the necessary toolset that’s needed for the industry to innovate. At MotoJeannie, we use a curated form of Magma core, enabling us to focus on delivering the desired value to our end customers. The Linux Foundation knows how to develop value for the ecosystem using open source, and we are very excited to be part of this community,” said Auyush Sharma, founder and CEO, MotoJeannie.

Shoelace Wireless

“Magma converged core provides cost effective cloud native orchestration of WiFi and LTE networks which is critical for Shoelace Wireless’ intelligent-edge multipath traffic steering, switching, and aggregation technology to enable use cases such as: network augmentation, smart contract roaming, predictive traffic steering, and HetNet optimization,” said Jim Mains, CEO, Shoelace Wireless.  “The fact that Magma is open-sourced also allows us to work with innovative partners to accelerate market deployment which otherwise would take many years.”

Vapor.io

“Open technologies like Magma will help revolutionize both US and global communications infrastructure,” said Cole Crawford founder & CEO of edge and grid infrastructure company Vapor IO. “We have always believed that neutral host multi-tenancy and shared infrastructure unlock the economics that enable the worldwide rollout of advanced networks like 5G. Vapor IO’s Kinetic services are ideal for Magma, and we look forward to working with our partners to implement and deploy it on our network.”

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

The Linux Foundation Hosts Open19 to Accelerate Data Center and Edge Hardware Innovation

Open19 framework enables data center hardware design that powers edge, 5G and custom cloud deployments worldwide; brings both hardware and software under the Linux Foundation with Fellow Yuval Bachar

SAN FRANCISCO, April 21, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Open19 Foundation, an open hardware platform for data center and edge hardware innovation. It is also announcing that one of the original founders of the Open19 project, Yuval Bachar, is joining the Linux Foundation to lead this effort. Project leadership includes premier members Equinix and Cisco.

Open19 focuses on hardware standards that enable compute, storage and network manufacturers and end users to develop differentiated hardware solutions while protecting their competitive intellectual property. With the addition of Open19, The Linux Foundation is hosting data center hardware and software under one virtual roof.

“As the open hardware project of The Linux Foundation, the Open19 Project is dedicated to creating solutions that help digital businesses take advantage of specialized infrastructure,” said Zachary Smith, Open19 Foundation chairperson and Managing Director of Equinix Metal. “We are excited to join The Linux Foundation to solve the challenges facing modern data centers with collaborative, open, community-led innovation.”

Open19 provides a framework for accessing and deploying hardware innovation at any scale, from edge environments to large-scale custom clouds. With its unique intellectual property model and market-leading specifications with proven adoption, Open19 enables technology providers, supply chain partners, cloud service providers, telecoms and tech forward enterprises to leverage shared investments to address the exploding needs of modern compute and network deployments while minimizing risk. This reduces time to market for new solutions while substantially lowering the cost of operations.

“Open19 is revolutionizing the way we approach hardware,” said Yuval Bachar, Open19 Foundation Fellow. “The time to invest in open hardware has never been more pressing. With the transformation happening as a result of AI, 5G and edge networking, in particular, the opportunity for innovation is ripe, and Open19 will accelerate it.”

Yuval Bachar founded the Open19 project and is returning to support the project and its community under the Linux Foundation. His career includes technical leadership roles at Microsoft, LinkedIn, Facebook and Cisco. Bachar has been at the forefront of some of the industry’s most important technology developments, from data center networking to data center self-healing with Machine Learning, AI and predictive maintenance. Most recently, he was Principal Hardware Architect of the Azure Platform at Microsoft. Previously, he was Principal Engineer in the global infrastructure and strategy team at LinkedIn, the leader and architect for Facebook’s data center networking hardware and Senior Director of Engineering in the CTO office at Cisco.

The Linux Foundation provides an open governance model and a vendor neutral home to a variety of projects working to advance open hardware and data center innovation. This framework nurtures cross-project collaboration among Open19, DPDK, OpenBMC, and RISC-V projects; the LF Edge, OpenPower and Cloud Native Computing Foundations; and incubating projects such as bare metal provisioning engine Tinkerbell, among others. Formal collaborations are expected to be announced in the coming months.

“The Open19 Community has been doing crucial work to accelerate open source hardware design to meet the needs of modern data centers and the edge,” said Arpit Joshipura, General Manager, Networking, Edge & IoT at The Linux Foundation. “We are excited to welcome Open19 as our growing community defines the next generation of digital infrastructure.”

Originally founded in 2016 by a community of cloud infrastructure innovators looking to solve the cost, efficiency and operational challenges of modern data center deployments, Open19 now sees solutions based on its technology deployed at leading global providers. Open19 provides specifications for servers, storage and networking components designed to fit in any 19-inch data center rack environment. The project features common elements to enable platform innovation: flexible server “bricks” (server nodes with standard power supply and network delivery, plus cooling); a mechanical cage to house bricks; a standardized power shelf; and blind mate power and data connectors.

Driven by strong industry adoption, members are now working on the next generation of the Open19 specification and invite others to get involved. It is expected to be available in mid-2021. For more information, please visit: www.open19.org

About The Open19 Project

The Open19 project, as part of The Linux Foundation, designs and promotes a form factor specification that includes a brick cage, server brick form factor, power shelf and unique blind mate power and data connectors. These components allow service providers and enterprises to leverage the first data center form factor design for a cloud and edge-native world.

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contacts

Jennifer Cloer
for the Open19 Foundation and Linux Foundation
503-867-2304
jennifer@storychangesculture.com

Jennifer Lankford
for Equinix
503-308-2553
jennifer@lankfordpr.co

5 tips for deciding which Linux tasks and workloads to automate

It’s tough to know how to get started with automation, but here are five ideas to get you rolling.
Read More at Enable Sysadmin

Ping command basics for testing and troubleshooting

Have you ever stopped to look at how much more ping can do for you beyond just a quick network connectivity test?
Read More at Enable Sysadmin

In the trenches with Thomas Gleixner, real-time Linux kernel patch set

Jason Perlow, Editorial Director at the Linux Foundation, interviews Thomas Gleixner, Linux Foundation Fellow, CTO of Linutronix GmbH, and project leader of the PREEMPT_RT real-time kernel patch set.

JP: Greetings, Thomas! It’s great to have you here this morning — although for you, it’s getting late in the afternoon in Germany. So PREEMPT_RT, the real-time patch set for the kernel, is a fascinating project because it has some very important use cases that most people who use Linux-based systems may not be aware of. First of all, can you tell me what “Real-Time” truly means?

TG: Real-Time in the context of operating systems means that the operating system provides mechanisms to guarantee that the associated real-time task processes an event within a specified period of time. Real-Time is often confused with “really fast.” The late Prof. Doug Niehaus explained it this way: “Real-Time is not as fast as possible; it is as fast as specified.”

The specified time constraint is application-dependent. A control loop for a water treatment plant can have comparatively large time constraints measured in seconds or even minutes, while a robotics control loop has time constraints in the range of microseconds. But for both scenarios, missing the deadline at which the computation has to be finished can result in malfunction. For some application scenarios, missing the deadline can have fatal consequences.

In the strict sense of Real-Time, the guarantee which is provided by the operating system must be verifiable, e.g., by mathematical proof of the worst-case execution time. In some application areas, especially those related to functional safety (aerospace, medical, automation, automotive, just to name a few), this is a mandatory requirement. But for other scenarios, or scenarios where there is a separate mechanism for providing the safety requirements, the proof of correctness can be more relaxed. But even in the more relaxed case, the malfunction of a real-time system can cause substantial damage, which obviously needs to be avoided.
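
To make “as fast as specified” concrete, here is a minimal, hypothetical sketch (not taken from the interview) of a cyclic task on Linux: it requests a real-time scheduling class, wakes at a fixed period using an absolute timer, and reports any cycle that finishes after its deadline. The 1 ms period, FIFO priority 80, and the assumption that the deadline equals the period are placeholder choices for illustration only.

/*
 * Hypothetical sketch: periodic task with a deadline check on Linux.
 * The period, priority, and cycle count are illustrative placeholders.
 */
#include <sched.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L
#define PERIOD_NS    1000000L   /* 1 ms cycle time: "as fast as specified" */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 }; /* placeholder priority */
    struct timespec next, now;

    /* Request a real-time scheduling class; needs privileges or rtprio limits. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        perror("sched_setscheduler (continuing without RT priority)");

    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 1000; cycle++) {
        /* do_control_step();  <- hypothetical application work for this cycle */

        timespec_add_ns(&next, PERIOD_NS);      /* start of the next period */
        clock_gettime(CLOCK_MONOTONIC, &now);

        /* Deadline check: did this cycle finish before its next activation? */
        if (now.tv_sec > next.tv_sec ||
            (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec))
            fprintf(stderr, "deadline miss in cycle %d\n", cycle);

        /* Sleep until the absolute start of the next period. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}

On a stock kernel, the wakeup and scheduling latency of such a loop can vary widely under load; bounding that worst case is what the real-time work described here is about.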

JP: What is the history behind the project? How did it get started?

TG: Real-Time Linux has a history that goes way beyond the actual PREEMPT_RT project.

Linux became a research vehicle very early on. Real-Time researchers set out to transform Linux into a Real-Time Operating system and followed different approaches with more or less success. Still, none of them seriously attempted a fully integrated and perhaps upstream-able variant. In 2004 various parties started an uncoordinated effort to get some key technologies into the Linux kernel on which they wanted to build proper Real-Time support. None of them was complete, and there was a lack of an overall concept. 

Ingo Molnar, working for RedHat, started to pick up pieces, reshape them and collect them in a patch series to build the grounds for the real-time preemption patch set PREEMPT_RT. At that time, I worked with the late Dr. Doug Niehaus to port a solution we had working based on the 2.4 Linux kernel forward to the 2.6 kernel. Our work was both conflicting and complementary, so I teamed up with Ingo quickly to get this into a usable shape. Others like Steven Rostedt brought in ideas and experience from other Linux Real-Time research efforts. With a quickly forming loose team of interested developers, we were able to develop a halfway usable Real-Time solution that was fully integrated into the Linux kernel in a short period of time. That was far from a maintainable and production-ready solution. Still, we had laid the groundwork and proven that the concept of making the Linux Kernel real-time capable was feasible. The idea and intent of fully integrating this into the mainline Linux kernel over time were there from the very beginning.

JP: Why is it still a separate project from the Mainline kernel today?

TG: To integrate the real-time patches into the Linux kernel, a lot of preparatory work, restructuring, and consolidation of the mainline codebase had to be done first. While many pieces that emerged from the real-time work found their way into the mainline kernel rather quickly due to their isolation, the more intrusive changes that change the Linux kernel’s fundamental behavior needed (and still need) a lot of polishing and careful integration work. 

Naturally, this has to be coordinated with all the other ongoing efforts to adapt the Linux kernel to the different use cases ranging from tiny embedded systems to supercomputers.

This also requires carefully designing the integration so it does not get in the way of other interests or impose roadblocks for further developing the Linux kernel, which is something the community, and especially Linus Torvalds, cares about deeply.

As long as these remaining patches are out of the mainline kernel, this is not a problem because it does not put any burden or restriction on the mainline kernel. The responsibility is on the real-time project, but on the other side, in this context, there is no restriction to take shortcuts that would never be acceptable in the upstream kernel.

The real-time patches are fundamentally different from something like a device driver that sits at some corner of the source tree. A device driver does not cause any larger damage when it goes unmaintained and can be easily removed when it reaches the final state of bit-rot. Conversely, the PREEMPT_RT core technology sits at the heart of the Linux kernel. Long-term maintainability is key, as any problem in that area will affect the Linux user universe as a whole. In contrast, a bit-rotted driver only affects the few people who have a device depending on it.

JP: Traditionally, when I think about RTOS, I think of legacy solutions based on closed systems. Why is it essential we have an open-source alternative to them? 

TG: The RTOS landscape is broad and, in many cases, very specialized. As I mentioned on the question of “what is real-time,” certain application scenarios require a fully validated RTOS, usually according to an application space-specific standard and often regulatory law. Aside from that, many RTOSes are limited to a specific class of CPU devices that fit into the targeted application space. Many of them come with specialized application programming interfaces which require special tooling and expertise.

The Real-Time Linux project never aimed at these narrow and specialized application spaces. It always was meant to be the solution for 99% of the use cases and to be able to fully leverage the flexibility and scalability of the Linux kernel and the broader FOSS ecosystem so that integrated solutions with mixed-criticality workloads can be handled consistently. 

Developing real-time applications on a real-time enabled Linux kernel is not much different from developing non-real-time applications on Linux, except for the careful selection of system interfaces that can be utilized and programming patterns that should be avoided, but that is true for real-time application programming in general independent of the RTOS. 

The important difference is that the tools and concepts are all the same, and integration into and utilizing the larger FOSS ecosystem comes for free.
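
As an illustration of those shared tools and the patterns mentioned above, here is a hypothetical setup sketch for a real-time thread on Linux: lock memory so pages stay resident, pre-fault the thread stack, and start the thread with an explicit SCHED_FIFO priority, keeping allocations and blocking calls out of the time-critical path. The priority value and stack-touch size are illustrative assumptions, not recommendations from the interview.

/*
 * Hypothetical sketch of common real-time thread setup on Linux.
 * Priority 80 and the 8 KB stack pre-fault are placeholder values.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PREFAULT_STACK (8 * 1024)

static void *rt_thread(void *arg)
{
    unsigned char stack_touch[PREFAULT_STACK];

    (void)arg;
    /* Touch the stack once so it is resident before the time-critical work. */
    memset(stack_touch, 0, sizeof(stack_touch));

    /* The time-critical loop would go here; avoid malloc(), blocking I/O, and
       anything else that may page-fault or take unbounded locks. */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 }; /* placeholder priority */

    /* Keep current and future pages resident to avoid page faults later. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
        perror("mlockall");

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    if (pthread_create(&tid, &attr, rt_thread, NULL) != 0)
        fprintf(stderr, "pthread_create failed (check rtprio limits)\n");
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}

Everything used here is the ordinary POSIX API available to any Linux application, which is the point made above: the difference lies in which interfaces you select and which patterns you avoid, not in a specialized programming model.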

The downside of PREEMPT_RT is that it can’t be fully validated, which excludes it from specific application spaces, but there are efforts underway, e.g., the LF ELISA project, to fill that gap. The reason behind this is that large multiprocessor systems have become a commodity, and the need for more complex real-time systems in various application spaces, e.g., assisted/autonomous driving or robotics, requires a more flexible and scalable RTOS approach than what most of the specialized and validated RTOSes can provide.

That’s a long way down the road. Still, there are solutions out there today which utilize external mechanisms to achieve the safety requirements in some of the application spaces while leveraging the full potential of a real-time enabled Linux kernel along with the broad offerings of the wider FOSS ecosystem.

JP: What are examples of products and systems that use the real-time patch set that people depend on regularly?

TG: It’s all over the place now. Industrial automation, control systems, robotics, medical devices, professional audio, automotive, rockets, and telecommunication, just to name a few prominent areas.

JP: Who are the major participants currently developing systems and toolsets with the real-time Linux kernel patch set?  

TG: Listing them all would be equivalent to reciting the “who’s who” in the industry. On the distribution side, there are offerings from, e.g., RedHat, SUSE, Mentor, and Wind River, which deliver RT to a broad range of customers in different application areas. There are firms like Concurrent, National Instruments, Boston Dynamics, SpaceX, and Tesla, just to name a few on the products side.

RedHat and National Instruments are also members of the LF collaborative Real-Time project.

JP: What are the challenges in developing a real-time subsystem or specialized kernel for Linux? Is it any different than how other projects are run for the kernel?

TG: Not really different; the same rules apply. Patches have to be posted, are reviewed, and discussed. The feedback is then incorporated. The loop starts over until everyone agrees on the solution, and the patches get merged into the relevant subsystem tree and finally end up in the mainline kernel.

But as I explained before, it needs a lot of care and effort and, often enough, a large amount of extra work to restructure existing code first to get a particular piece of the patches integrated. The result provides the desired functionality but at the same time is not in the way of other interests and, ideally, provides a benefit for everyone.

The complexity of a technology that reaches into a broad range of the core kernel code is obviously challenging, especially combined with the mainline kernel’s rapid rate of change. In areas like drivers or file systems, even larger changes happening at the related core infrastructure level do not impact ongoing development and integration work too much. But any change to the core infrastructure can break a carefully thought-out integration of the real-time parts into that infrastructure and send us back to the drawing board for a while.

JP:  Which companies have been supporting the effort to get the PREEMPT_RT Linux kernel patches upstream? 

TG: For the past five years, it has been supported by the members of the LF real-time Linux project, currently ARM, BMW, CIP, ELISA, Intel, National Instruments, OSADL, RedHat, and Texas Instruments. CIP, ELISA, and OSADL are projects or organizations on their own which have member companies all over the industry. Former supporters include Google, IBM, and NXP.

My team, the broader Linux real-time community, and I personally are extremely grateful for the support provided by these members.

However, as with other key open source projects heavily used in critical infrastructure, funding always was and still is a difficult challenge. Even if the amount of money required to keep such low-level plumbing but essential functionality sustained is comparatively small, these projects struggle with finding enough sponsors and often lack long-term commitment.

The approach to funding these kinds of projects reminds me of the Mikado Game, which is popular in Europe, where the first player who picks up the stick and disturbs the pile often is the one who loses.

That’s puzzling to me, especially as many companies build key products depending on these technologies and seem to take the availability and sustainability for granted up to the point where such a project fails, or people stop working on it due to lack of funding. Such companies should seriously consider supporting the funding of the Real-Time project.

It’s a lot like the Jenga game, where everyone pulls out as many pieces as they can up until the point where it collapses. We cannot keep taking; we have to give back to these communities putting in the hard work for technologies that companies heavily rely on.

I gave up long ago trying to make sense of that, especially when looking at the insane amounts of money thrown at the over-hyped technology of the day. Even if critical for a large part of the industry, low-level infrastructure lacks the buzzword charm that attracts attention and makes headlines — but it still needs support.

JP:  One of the historical concerns was that Real-Time didn’t have a community associated with it; what has changed in the last five years?  

TG: There is a lively user community, and quite a bit of the activity comes from the LF project members. On the development side itself, we are slowly gaining more people who understand the intricacies of PREEMPT_RT and also people who look at it from other angles, e.g., analysis and instrumentation. Some fields could be improved, like documentation, but there is always something that can be improved.

JP:  What will the Real-Time Stable team be doing once the patches are accepted upstream?

TG: The stable team is currently overseeing the RT variants of the supported mainline stable versions. Once everything is integrated, this will dry out to some extent once the older versions reach EOL. But their expertise will still be required to keep real-time in shape in mainline and in the supported mainline stable kernels.

JP: So once the upstreaming activity is complete, what happens afterward?

TG: Once upstreaming is done, efforts have to be made to enable RT support for specific Linux features currently disabled on real-time enabled kernels. Also, for quite some time, there will be fallout when other things change in the kernel, and there has to be support for kernel developers who run into the constraints of RT, which they did not have to think about before. 

The latter is a crucial point for this effort. Because there needs to be a clear longer-term commitment that the people who are deeply familiar with the matter and the concepts are not going to vanish once the mainlining is done. We can’t leave everybody else with the task of wrapping their brains around it in desperation; there cannot be institutional knowledge loss with a system as critical as this. 

The lack of such a commitment would be a showstopper on the final step because we are now at the point where the notable changes are focused on the real-time only aspects rather than welcoming cleanups, improvements, and features of general value. This, in turn, circles back to the earlier question of funding and industry support — for this final step requires several years of commitment by companies using the real-time kernel.

There’s not going to be a shortage of things to work on. It’s not going to be as much as the current upstreaming effort, but as the kernel never stops changing, this will be interesting for a long time.

JP: Thank you, Thomas, for your time this morning. It’s been an illuminating discussion.

To get involved with the real-time kernel patch set for Linux, please visit the PREEMPT_RT wiki at The Linux Foundation or email real-time-membership@linuxfoundation.org.