Capture packets in Kubernetes with this open source tool

Troubleshoot complex network and application issues with ksniff, a kubectl plugin that captures packets in Kubernetes pods.

Read More at Enable Sysadmin
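
For a quick sense of how it is used, here is a minimal sketch of a ksniff session; the plugin is typically installed through the krew plugin manager, and the pod and namespace names below are placeholders.

$ kubectl krew install sniff
$ kubectl sniff my-pod -n my-namespace     # by default the capture is streamed to a local Wireshark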

The Linux Foundation and Google Cloud Launch Nephio to Enable and Simplify Cloud Native Automation of Telecom Network Functions

Nephio logo

New Open Source Project at the Linux Foundation brings Cloud, Telecom and Network functions providers together in a Kubernetes world 

San Francisco, April 12, 2022 – Today, the Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced the formation of project Nephio in partnership with Google Cloud and leaders across the telecommunications industry. The Linux Foundation provides a venue for continued ecosystem and developer growth and diversity, as well as collaboration across open source ecosystems.

Building, managing and deploying scalable 5G networks across multiple edge locations is complex. The telco industry needs true cloud-native automation to be faster, simpler and easier, while achieving agility and optimization in cloud-based deployments. To address these challenges, Google Cloud and the Linux Foundation have founded “Nephio.” The project has support from several founding organizations, including service providers Airtel, Bell Canada, Elisa, Equinix, Jio, Orange, Rakuten Mobile, TIM, TELUS, Vapor IO, Virgin Media O2 and WINDTRE, as well as network function, service and infrastructure vendors Aarna Networks, Arm, Casa Systems, DZS, Ericsson, F5, Intel, Juniper, Mavenir, Nokia, Parallel Wireless and VMware.

Cloud Native Principles have come a long way and as we see Cloud Service Providers collaborating with Telecom Service Providers and Enterprises, a new way of simplifying automation of network functions is emerging. 

Nephio aims to deliver carrier-grade, simple, open, Kubernetes-based cloud native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments. 

Additionally, Nephio will enable faster onboarding of network functions to production including provisioning of underlying cloud infrastructure with a true cloud native approach, and reduce costs of adoption of cloud and network infrastructure.

Google Cloud

“Telecommunication companies are looking for new solutions for managing their cloud ready and cloud native infrastructures as well as their 5G networks to achieve the scale, efficiency, and high reliability needed to operate more cost effectively,” said Amol Phadke, managing director, Telecom Industry Products & Solutions, Google Cloud. “We look forward to working alongside The Linux Foundation, and our partners, in the creation of Nephio to set an industry open standard for Kubernetes-based intent automation that will result in faster and better connected cloud-native networks of the future.” 

Linux Foundation 

“Collaboration across Telecom and Cloud Service Providers is accelerating and we are excited to bring Nephio to the open source community,” said Arpit Joshipura, GM Networking, Edge & IoT, the Linux Foundation. “As end users demand end-to-end open source solutions, projects like Nephio complement the innovation across LFN, CNCF, and LF Edge for faster deployment of telecom network functions in a cloud-native world.”

More information about Nephio is available at www.nephio.org

Service Providers

Airtel

“Zero touch deployment, configuration and operations of network functions predominantly on the edge of the network and in multi-cloud and multi-vendor scenarios is a significant challenge for all operators across the globe. A cloud-native orchestration and automation approach is the absolute need of the hour. Airtel is looking forward to being part of the LF and Google initiative to develop innovative solutions to simplify network operations,” said Manish Gangey, SVP and Head – R&D, Bharti Airtel.

Bell

“Similar to our early participation in the Linux Foundation ONAP initiative, Bell Canada is thrilled to collaborate in this next chapter of Telco softwarization,” said Petri Lyytikainen, VP Network, Bell Canada. “With innovations like 5G, ORAN and a new era of distributed cloud computing, Nephio and its community will be key in accelerating network and infrastructure automation towards a true cloud-native and intent-driven approach. This important work will help drive the evolution of network technology that will benefit Bell customers and the telecoms industry in Canada for years to come.”

Elisa   

“Elisa has a long history of network automation and cloud services. That has been utilized by the leading network analytics and automation solution provider Elisa Polystar,” said Anssi Okkonen, CEO of Elisa Polystar. “We are looking forward to working together with Linux Foundation, Google Cloud and Nephio community to enable new cloud-native automation solutions for building the tools for self-driving networks.” 

Equinix

“We believe in innovation through collaboration and are pleased to join the Nephio project to help build advanced digital infrastructure orchestration capabilities for telco (5G) cloud native network functions,” said Justin Dustzadeh, CTO at Equinix. “We look forward to collaborating with the developer community and members of the Nephio project to make it easier for developers to manage distributed infrastructure and help businesses drive digital transformation.”

Jio

“Jio is excited to be part of the Nephio initiative. At a time when 5G Standalone deployments are rapidly coming on-stream globally, Nephio will play a pivotal role in the journey of telcos towards adopting a cloud native 5G Network,” said Aayush Bhatnagar, SVP, Jio. 

Orange

“For telecom operators, Cloud Native technologies will unleash many new opportunities. By providing a cloud native intent automation framework, Nephio should play a key role in the telecommunications ecosystem by enabling on-demand connectivity and zero touch operator capabilities, thus benefiting the entire industry, developers, vendors, integrators, operators,” said Laurent Leboucher, group CTO and SVP, Orange Innovation Networks.

Rakuten Mobile

“The telecommunications industry is undergoing transformative change, with cloud native technologies bringing the industry into the modern era. When building Rakuten Mobile’s cloud native network in Japan, we understood the challenges of an open ecosystem and also realized the many benefits of cloud architecture, including automation, zero-touch provisioning and unprecedented agility. We’re excited to join Nephio in working to reimagine what telecommunications can be in the cloud era,” commented Sharad Sriwastawa, CTO, Rakuten Mobile.

TIM

“We believe that the adoption of Cloud Native technology and philosophy will represent a cornerstone for the future of telecommunications, merging the world of cloud services and the world of telco services into one single digital platform. The automation framework is probably the most sensitive and strategic part of this platform that will be able to stimulate innovation during coming years,” said Crescenzo Micheli, VP Technology & Innovation at Telecom Italia (TIM). “We believe the Nephio project could play a fundamental role to speed up this process.” 

TELUS

“TELUS is excited to be contributing to this Linux Foundation project. Innovation and collaboration have been a life-long journey for us; accelerating the adoption of Cloud Native technologies is a must to meet our customers’ ever-changing expectations,” said Ibrahim Gedeon, CTO at TELUS. “We are excited to build on our 10-year strategic partnership with Google Cloud and collaborate with the Linux Foundation. Together we will maximize the scalability and agility of our global-leading network, simplifying and rethinking the operating digital models of our customers while building a better future for all Canadians and globally. This cannot be more true than with 5G and fiberizing the world as we enter a new era of hyper-connectivity. Combining high speeds, bandwidth and reliability with cloud computing and automation will transform the way we operate, enabling solutions like smart cities and connected cars and transforming key verticals across agriculture, healthcare and manufacturing.”

Vapor IO

“Nephio depends on critical underlying infrastructure like Vapor IO’s Kinetic Grid to automate the deployment of carrier-grade network functions,” said Cole Crawford, founder & CEO of Vapor IO. “Automating at-scale operations across multiple clouds is a complicated task. We applaud Google for selecting the Linux Foundation for bringing these capabilities to market via an open source platform. This could be a watershed moment in the telecom industry, transforming historically complicated network deployments and operations into cloud-native workflows with high degrees of automation. This will lower the cost of 5G deployments and increase the overall competitiveness of the telecom industry.”

Virgin Media O2

“We are continually looking at improving and evolving our automation strategies, especially around Kubernetes.  We are incredibly motivated to work closely with the Linux Foundation and Nephio toward network automation and the process of using software to automate network and security provisioning and management to maximize network efficiency and functionality continuously,” said Paul Greaves, head of Automation and Orchestration Virgin Media O2.

WINDTRE

“Cloudnative platforms are an essential offering for accelerating the enterprises’ digitization journey plans over the next few years. Nephio, the new automation model based on Kubernetes, is the step to support the evolution of 5G networks and the edge infrastructures for dynamic services. We are pleased to be part of the Nephio community,” said Massimo Motta, Architecture and governance director of WINDTRE.

Network Function, Service and Infrastructure Vendors

Aarna Networks

“We actively utilize and contribute back to Linux Foundation Networking projects to help customers simplify the orchestration, lifecycle management, and automated service assurance of 5G networks and edge computing applications,” said Amar Kapadia, co-founder and CEO, Aarna Networks. “Similarly, we look forward to collaborating on the Nephio project to simplify numerous platform, infrastructure, and network pain points of 5G and edge deployments.” 

Arm

​​“5G is expected to be the fastest-deployed mobile technology in history, but only if we can remove the barriers to efficient large-scale deployment. The founding of Nephio brings the benefits of cloud native technology to 5G networks, improving operational agility and reducing deployment costs so that we can economically meet the surge in connectivity demand,” said Eddie Ramirez, VP, Infrastructure Line of Business, Arm.

Casa Systems 

“Next-generation networks require the flexibility and agility of the cloud at the network edge. We are pleased to be working with the Linux Foundation, Google and the broader community of partners on the Nephio initiative to develop industry standards for cloud-native, Kubernetes-based automation and orchestration solutions that will enable tomorrow’s all-connected world,” said Gibson Ang, vice president of Technology and Product Management, Casa Systems

DZS

“As an advocate of open standards-based solutions for the network edge, DZS enthusiastically supports this joint initiative with the Linux Foundation and Google. We look forward to collaborating with global converged carrier customers of DZS and other ecosystem partners on the Nephio project as we usher in a new era of connectivity by addressing the industry demand for multi-domain, software-driven automation and orchestration across distributed cloud-native networks for 5G and beyond,” said Andrew Bender, CTO, DZS. 

Ericsson

“The openness and flexibility of the 5G cloud native architecture brings significant opportunities for CSPs to expand existing business as well as building new business for enterprise customers. For CSPs to scale the business, simplification and automation of lifecycle and workload management across hybrid and multi cloud environments is key,” said Anders Vestergren, head of strategy portfolio and technology, Business Area Digital Services, Ericsson. “We look forward to collaborating with other industry leaders as part of the Nephio project to enhance Kubernetes with an industry-standard automation framework for cloud native deployments.”

F5 

“F5 has been partnering with many service providers in their transformation journey building and operating cloud-native infrastructure for 5G, with special focus on scaling and securing telco protocols and workloads. We are excited to join the Linux Foundation and the Nephio project to help accelerate our customers’ digital initiatives,” said Ankur Singla, SVP, GM, Distributed Cloud Services, F5.

Intel 

“Innovation at the edge is the next frontier of business opportunity. Nephio is a ground-breaking step to provide Cloud Service Providers with a carrier-grade, open, and extensible Kubernetes-based cloud-native automation framework, and common automation templates that simplify large scale edge deployment. We are pleased to be working in collaboration with the Linux Foundation and broader Nephio community to help simplify edge automation,” said  Rajesh Gadiyar, VP and CTO, Network Platforms Group at Intel.

Juniper

“Kubernetes-centric automation, leveraging cloud native principles, is an integral part of Juniper Networks’ experience-first networking strategy. We are therefore excited to join the Nephio project at the Linux Foundation as a founding partner, continuing Juniper’s long-standing tradition as a major supporter of and active contributor to the open source community. We look forward to working with other leading technology companies and mobile operators, as well as the broader Kubernetes open source community, to ensure that Nephio helps to advance cloud native automation at scale, for the benefit of all.” Constantine Polychronopoulos, VP of 5G & Telco Cloud at Juniper Networks.

Mavenir

“Network automation is a key driver for Telco network cloudification. A Kubernetes native automation framework with proven success in other vertical applications automation is promising for the Telco space. We are pleased to be part of the Google/Linux  Foundation initiative to accelerate this move on the public cloud and look forward to collaborating with the Nephio community,” said Bejoy Pankajakshan, CTSO of Mavenir.

Nokia           

“Nokia has always led in the drive to deliver open cloud-based networks and services that usher new value and possibilities of customer experience that fuel revenue growth for everyone. Automation of deployment, configuration and operations of network functions, that work seamlessly in a complex multi-cloud and multi-vendor network environment, are key to achieving the above goals. Nokia is pleased to join its customers and partners in a collaboration to co-innovate on the ‘democratic’ building blocks for the right tools of tomorrow’s networks.” Jitin Bhandari, CTO, Cloud and Network Services, Nokia

Parallel Wireless     

Steve Papa, CEO, Parallel Wireless, said, “Parallel Wireless is cloudifying 2G 3G 4G and 5G Open RAN and the Google/Linux Foundation initiative cloud-native architecture will allow fast deployment of RAN services on site, fast and fault-proofed upgrades and scalability — where resources can be scaled in an instant based on the end-user needs. Parallel Wireless is proud to join this initiative to help mobile operators modernize their networks via cloudification and bring innovation and cost savings.”

VMware

Lakshmi Mandyam, vice president of product management and partner ecosystems, Service Provider & Edge, VMware, said, “CSPs are embracing multi-cloud to create revenue-accelerating services, reduce operational costs and simplify network operations.  VMware’s vision for CSPs enables a cloud-first approach to management and orchestration across the core, RAN and edge, aligning with the goals of the Linux Foundation and Nephio project. We look forward to contributing to this initiative that will foster a multi-vendor ecosystem and support faster on-boarding, automation and life-cycle management for cloud-native networks.”

About Nephio

Nephio’s goal is to deliver carrier-grade, simple, open, Kubernetes-based cloud-native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments. Nephio enables faster onboarding of network functions to production including provisioning of underlying cloud infrastructure with a true cloud native approach, and reduces costs of adoption of cloud and network infrastructure. More information can be found at www.nephio.org.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

#####

The post The Linux Foundation and Google Cloud Launch Nephio to Enable and Simplify Cloud Native Automation of Telecom Network Functions appeared first on Linux Foundation.

Secure Open Source 5G Gains Momentum as Community Focuses on Re-aggregation, with 5G Super-Blueprints and New Members 

LFN Community publishes white paper highlighting cybersecurity efforts
Telecom, Cloud and Enterprise align with the 5G Super Blueprint across ONAP, Anuket, EMCO, Magma, ORAN-SC and more projects, as enterprise eBPF project L3AF is inducted into LF Networking
ATOS, GenXComm, Keysight Technologies and Telaverge Communications join LFN as Silver members

SAN FRANCISCO, April 12, 2022 – LF Networking, which facilitates collaboration and operational excellence across open source networking projects, today announced continued momentum focused on re-aggregation, with updates to security and 5G blueprints, and the addition of four new Silver members: ATOS, GenXComm, Keysight Technologies, and Telaverge Communications.

“As the LF Networking community rolls into its fourth year as an umbrella project organization, we are pleased to see robust efforts focused on securing 5G across multiple projects and foundations as we welcome even more industry-leading organizations to the project,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “It’s the robust and diverse set of member companies that enable LFN’s collaborative innovation into the future of 5G and networking.”

5G Super Blueprint Ecosystem Expands

The community is making progress with the 5G Super Blueprint, a community-driven integration and illustration of multiple open source initiatives, projects, and vendors coming together to show use cases demonstrating implementation architectures for end users. The 5G Super Blueprint is now integrated across even more projects––including Magma (1.6), EMCO, and Anuket––building open source components applicable to a variety of industry use cases. Preliminary scoping for future integrations with the O-RAN Software Community has begun, setting the stage for end-to-end open source interoperability from the core through the RAN, and for future compliance activities.

Meanwhile, the L3AF project has been inducted into the LF Networking umbrella, as membership expands further across the ecosystem with new Silver members. 

L3AF is an open source project, developed by Walmart, housing cutting-edge solutions in the realm of eBPF (a revolutionary technology that allows sandboxed programs to run in an operating system kernel). It provides complete life-cycle management of eBPF programs with the help of an advanced control plane written in Golang. The control plane orchestrates and composes independent eBPF programs across the network infrastructure to solve crucial business problems. L3AF’s eBPF programs include load balancing, rate limiting, traffic mirroring, flow exporting, packet manipulation, performance tuning, and many more. L3AF joined the Linux Foundation in fall of 2021 and has now been inducted into the LF Networking project umbrella.

New LFN Silver members include:

ATOS is a multi-vendor, end-to-end system integrator in both the IT and telecom network space, specializing in multi-cloud solutions, edge and MEC, 5G-enabled applications with an AI/ML focus, cybersecurity, and decarbonization.
GenXComm Inc.’s mission is to deliver limitless computing power, fast connectivity, and on-demand intelligence to every location on Earth.
Keysight Technologies, Inc. is a leading technology company that delivers advanced design and validation solutions to help accelerate innovation to connect and secure the world.
Telaverge Communications is the leader in complete Network Test Automation Orchestration and Digital Transformation products (Regal for Containers and Cloud) designed for enterprises, operators and OEMs. Telaverge’s open source based private LTE and 5G cores are pre-integrated with Regal for zero touch testing and deployment.

A full list of LFN member organizations can be found here: https://www.lfnetworking.org/membership/members/ 

LFN Security White Paper

Highlighting its security efforts to help secure open source networking against cybersecurity attacks, the community published a white paper titled “Securing Open Source 5G from End to End” that is now available for download. 

“A unique advantage of developing software in the open is more eyes on the code;  when it comes to security, that translates to large groups of experts who can propose improvements and enhancements in a faster, more scalable fashion– and that is true for LFN,” said Amy Zwarico, vice chair of the ONAP Security subcommittee. “Community collaboration via security working groups and sub-committees to address secure software development practices, SBOMs, DDoS mitigation and other threats are just some of the steps LFN is taking to create code that can be trusted to run our networks.”

At a time when the United States White House has issued multiple Executive Orders to address cybersecurity and supply chain attacks, the LFN community continues to take steps to ensure open source networking is secure. The white paper outlines the group’s security strategies, including the formation of security-focused committees and subcommittees; the development and adoption of a security Software Bill of Materials (SBOM); OpenSSF badging; use of the LFX Platform’s Security Dashboard to enable developers to identify and resolve vulnerabilities quickly and easily; and more. Download the white paper for more information.

Upcoming Events

The LF Networking developer community will host the LFN Developer & Testing Forum this Spring, taking place June 13-16, in Porto, Portugal. Registration for that event is open, with more details to come. 

Open Networking & Edge (ONE) Summit North America will take place November 15-16 in Seattle, Wash. The event will be followed by a two-day LFN Developer & Testing Forum (Nov 17-18) in the same venue. The Open Networking & Edge Summit is the industry’s premier open networking and edge computing event focused on end to end solutions powered by open source in the Telco, Cloud, and Enterprise verticals. Attendees will learn how to leverage open source ecosystems and gain new insights for digital transformation. More information will be available soon. 

Support from new members

“The mission of Atos is to support our customers throughout a multitude of industry sectors on their edge-to-cloud journey. We help telecom customers leverage cloud synergies between their IT and their network, and introduce new edge computing and 5G MEC services. We are excited about ONAP and other programs of the LFN, as they facilitate exactly these synergies in a growing market.”

“Keysight is pleased to join LF Networking as a silver member and contribute to an ecosystem with the common goal of advancing technology and innovation built on open source software and standards,” said Kalyan Sundhar, vice president of Edge-to-Core Networks at Keysight Technologies. “Keysight leverages open source standards for end-to-end network harmonization produced by the LF Networking community to enable this ecosystem to cost-effectively accelerate protocol and performance design validation.”

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit linuxfoundation.org.

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

The post Secure Open Source 5G Gains Momentum as Community Focuses on Re-aggregation, with 5G Super-Blueprints and New Members  appeared first on Linux Foundation.

5 underused Podman features to try now

Simplify how you interact with containers by incorporating pods, init containers, additional image stores, system reset, and play kube into your work.

Read More at Enable Sysadmin
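
As a quick taste of a couple of those features, the sketch below uses standard Podman commands; the pod name, image, and YAML file are placeholders.

$ podman pod create --name demo-pod                       # group containers into a shared pod
$ podman run -d --pod demo-pod docker.io/library/nginx    # run a container inside that pod
$ podman play kube demo.yaml                              # recreate pods from a Kubernetes YAML file
$ podman system reset                                     # wipe all containers, images, pods, and storage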

Ksplice Known Exploit Detection for VM_IO Use After Free, BPF bounds checking, and more…

First in a regular series of blogs highl

Click to Read More at Oracle Linux Kernel Development

Classic SysAdmin: Linux 101: Updating Your System

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Many years ago, when I first began with Linux, installing applications and keeping a system up to date was not an easy feat. In fact, if you wanted to tackle either task you were bound for the command line. For some new users this left their machines outdated or without applications they needed. Of course, at the time, most everyone trying their hand at Linux knew they were getting into something that would require some work. That was simply the way it was. Fortunately times and Linux have changed. Now Linux is exponentially more user friendly – to the point where so much is automatic and point and click – that today’s Linux hardly resembles yesterday’s Linux.

But even though Linux has evolved into the user-friendly operating system it is, there are still some systems that are fundamentally different from their Windows counterparts. So it is always best to understand those systems in order to be able to properly use them. Within the confines of this article you will learn how to keep your Linux system up to date. In the process you might also learn how to install an application or two.

There is one thing to understand about updating Linux: not every distribution handles this process in the same fashion. In fact, some distributions are distinctly different, down to the types of files they use for package management.

Ubuntu and Debian use .deb
Fedora, SuSE, and Mandriva use .rpm
Slackware uses .tgz archives which contain pre-built binaries
And of course there is also installing from source or pre-compiled .bin or .package files.

As you can see there are a number of possible systems (and the above list is not even close to being all-inclusive). So to make the task of covering this topic less epic, I will cover the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command line tools for handling system updates.

Ubuntu Linux

Ubuntu Linux has become one of the most popular of all the Linux distributions. And through the process of updating a system, you should be able to tell exactly why this is the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system update:

apt-get: Command line tool.
Update Manager: GUI tool.

The Update Manager is a nearly 100% automatic tool. With this tool you will not have to routinely check to see if there are updates available. Instead, you will know updates are available because the Update Manager will open on your desktop (see Figure 1) as soon as updates are available, depending upon their type:

Security updates: Daily
Non-security updates: Weekly

If you want to manually check for updates, you can do this by clicking the Administration sub-menu of the System menu and then selecting the Update Manager entry. When the Update Manager opens click the Check button to see if there are updates available.

Figure 1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see there are both Important Security Updates as well as Recommended Updates. If you want to get information about a particular update you can select the update and then click on the Description of update dropdown.
In order to update the packages follow these steps:

Check the updates you want to install. By default all updates are selected.
Click the Install Updates button.
Enter your user (sudo) password.
Click OK.

The updates will proceed and you can continue on with your work. Some updates may require you either to log out of your desktop and log back in, or to reboot the machine. There is a new tool in development (Ksplice) that allows even a kernel update without requiring a reboot.
Once all of the updates are complete, the Update Manager main window will return, reporting that your system is up to date.

Now let’s take a look at the command line tools for updating your system. The Ubuntu package management system is called apt. Apt is a very powerful tool that can completely manage your system’s packages via the command line. Using the command line tool has one drawback – in order to check to see if you have updates, you have to run it manually. Let’s take a look at how to update your system with the help of apt. Follow these steps:

Open up a terminal window.
Issue the command sudo apt-get upgrade.
Enter your user’s password.
Look over the list of available updates (see Figure 2) and decide if you want to go through with the entire upgrade.
To accept all updates press the ‘y’ key (no quotes) and hit Enter.
Watch as the update happens.
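
Put together, a typical command-line session looks something like the sketch below; running sudo apt-get update first simply refreshes the package lists so apt knows about the newest package versions (the Update Manager performs an equivalent check for you automatically).

$ sudo apt-get update      # refresh the package lists
$ sudo apt-get upgrade     # review the proposed updates, then answer 'y' to proceed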

That’s it. Your system is now up to date. Let’s take a look at how the same process happens on Fedora (Fedora 12 to be exact).

Fedora Linux

Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat Package Manager (rpm). Like Ubuntu, Fedora can be upgraded by:

yum: Command line tool.
GNOME (or KDE) PackageKit: GUI tool.

Depending upon your desktop, you will either use the GNOME or the KDE front-end for PackageKit. In order to open up this tool you simply go to the Administration sub-menu of the System menu and select the Software Update entry.  When the tool opens (see Figure 3) you will see the list of updates. To get information about a particular update all you need to do is to select a specific package and the information will be displayed in the bottom pane.

To go ahead with the update click the Install Updates button. As the process happens a progress bar will indicate where GNOME (or KDE) PackageKit is in the steps. The steps are:

Resolving dependencies.
Downloading packages.
Testing changes.
Installing updates.

When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.

Now let’s take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:

Open up a terminal window (Do this by going to the System Tools sub-menu of the Applications menu and select Terminal).
Enter the su command to change to the super user.
Type your super user password and hit Enter.
Issue the command yum update and yum will check to see what packages are available for update.
Look through the listing of updates (see Figure 4).
If you want to go through with the update enter ‘y’ (no quotes) and hit Enter.
Sit back and watch the updates happen.
Exit out of the root user command prompt by typing “exit” (no quotes) and hitting Enter.
Close the terminal when complete.
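
Condensed into a single terminal session, the steps above look roughly like this (the leading # is the root prompt you get after su; answer ‘y’ when yum asks to apply the updates):

$ su
Password:
# yum update
# exit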

Your Fedora system is now up to date.

Final Thoughts

Granted only two distributions were touched on here, but this should illustrate how easily a Linux installation is updated. Although the tools might not be universal, the concepts are. Whether you are using Ubuntu, OpenSuSE, Slackware, Fedora, Mandriva, or anything in-between, the above illustrations should help you through updating just about any Linux distribution. And hopefully this tutorial helps to show you just how user-friendly the Linux operating system has become.

Ready to continue your Linux journey? Check out our free intro to Linux course!

The post Classic SysAdmin: Linux 101: Updating Your System appeared first on Linux Foundation.

Looking Ahead: The CNF Certification Program

Here at The Linux Foundation’s blog, we share content from our projects, such as this article by Joel Hans from the Cloud Native Computing Foundation’s blog

The telecommunications industry is the backbone of today’s increasingly-digital economies, but it faces a difficult new challenge in evolving to meet modern infrastructure practices. How did telecommunications get itself into this situation? Because the risks of incidents or downtime are so severe, the industry has focused almost exclusively on system designs that minimize risk and maximize reliability. That’s fantastic for mission-critical services, whether public air traffic control or private high-speed banking, but it emphasizes stability over productivity and the adoption of new technologies that might make their operations more resilient and performant.

Telecommunications is playing catch-up on cloud native technology, and the downstream effects are starting to show. These organizations are now behind the times on the de facto choices for enterprise and IT, which means they’re less likely to recruit the top-tier engineering talent they need. In increasingly competitive landscapes, they need to escalate productivity and deploy new telephony platforms to market faster, not get quagmired in old custom solutions built in-house.

To make that leap from internally-trusted to industry-trusted tooling, telecommunications organizations need confidence that they’re on track to properly evolve their virtual network function (VNF) infrastructure to enable cloud native functions using Kubernetes. That’s where CNCF aims to help.

Enter the CNF Test Suite for telecommunications

A cloud native network function (CNF) is an application that implements or facilitates network functionality in a cloud native way, developed using standardized principles and consisting of at least one microservice.

And the CNF Test Suite (cncf/cnf-testsuite) is an open source test suite for telcos to know exactly how cloud native their CNFs are. It’s designed for telecommunications developers and network operators, building with Kubernetes and other cloud native technology, to validate how well they’re following cloud native principles and best practices, like immutable infrastructure, declarative APIs, and a “repeatable deployment process.”

The CNCF is bringing together the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG) to implement the CNF Test Suite, which helps telco developers and ops teams build faster feedback loops thanks to the suite’s flexible testing and optimized execution time. Because it can be integrated into any CI/CD pipeline, whether in development or pre-production checks, or run as a standalone test for a single CNF, telecommunications development teams get at-a-glance understanding of how their new deployments align with the cloud native ecosystem, including CNCF-hosted projects, technologies, and concepts.

It’s a powerful answer to a difficult question: How cloud native are we?

The CNF Test Suite leverages 10 CNCF-hosted projects and several open source tools. A modified version of CoreDNS is used as an example CNF for end users to get familiar with the test suite in five steps, and Prometheus is utilized in an observability test to check the best practice for CNFs to actively expose metrics. And it packages other upstream tools, like OPA Gatekeeper, Helm linter, and Promtool, to make installation, configuration, and versioning repeatable. The CNF Test Suite team is also grateful for contributions from Kyverno on security tests, LitmusChaos for resilience tests, and Kubescape for security policies.

The minimal install for the CNF Test Suite requires only a running Kubernetes cluster, kubectl, curl, and helm, and even supports running CNF tests on air-gapped machines or those who might need to self-host the image repositories. Once installed, you can use an example CNF or bring your own—all you need is to supply the .yml file and run `cnf-testsuite all` to run all the available tests. There’s even a quick five-step process for deploying the suite and getting recommendations in less than 15 minutes.
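
As a rough sketch of what that looks like from a terminal, assuming the cnf-testsuite binary is already installed and a CNF definition .yml file sits in the working directory (the first two commands are ordinary kubectl and helm version checks; the exact install and setup steps live in the project’s documentation):

$ kubectl version          # confirm the client and cluster are reachable
$ helm version
$ cnf-testsuite all        # run every available workload test against the CNF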

What the CNF Test Suite covers and why

At the start of 2022, the CNF Test Suite can run approximately 60 workload tests, which are segmented into 7 different categories.

Compatibility, Installability & Upgradability: CNFs should work with any Certified Kubernetes product and any CNI-compatible network that meet their functionality requirements while using standard, in-band deployment tools such as Helm (version 3) charts. The CNF Test Suite checks whether the CNF can be horizontally and vertically scaled using `kubectl` to ensure it can leverage Kubernetes’ built-in functionality.

Microservice: The CNF should be developed and delivered as a microservice for improved agility, that is, reduced development time between deployments. Agile organizations can deploy new features more frequently or allow multiple teams to safely deploy patches based on their functional area, like fixing security vulnerabilities, without having to sync with other teams first.

State: A cloud native infrastructure should be immutable, environmentally-agnostic, and resilient to node failure, which means properly managing configuration, persistent data, and state. A CNF’s configuration should be stateless, stored in a custom resource definition or a separate database over local storage, with any persistent data managed by StatefulSets. Separate stateful and stateless information makes for infrastructure that’s easily reproduced, consistent, disposable, and always deployed in a repeatable way.

Reliability, Resilience & Availability: Reliability in telco infrastructure is the same as standard IT—it needs to be highly secure and reliable and support ultra-low latencies. Cloud native best practices try to reduce mean time between failure (MTBF) by relying on redundant subcomponents with higher serviceability (mean time to recover (MTTR)), and then testing those assumptions through chaos engineering and self-healing configurations. The Test Suite uses a type of chaos testing to ensure CNFs are resilient to the inevitable failures of public cloud environments or issues on an orchestrator level, such as what happens when pods are unexpectedly deleted or run out of computing resources. These tests ensure CNFs meet the telco industry’s standards for reliability on non-carrier-grade shared cloud hardware/software platforms.

Observability & Diagnostics: Each piece of production cloud native infrastructure must make its internal states observable through metrics, tracing, and logging. The CNF Test Suite looks for compatibility with Fluentd, Jaeger, Promtool, Prometheus, and OpenMetrics, which help DevOps or SRE teams maintain, debug, and gather insights about the health of their production environments, which must be versioned, maintained in source control, and altered only through deployment pipelines.

Security: Cloud native security requires attention from experts at the operating system, container runtime, orchestration, application, and cloud platform levels. While many of these fall outside the scope of the CNF Test Suite, it still validates whether containers are isolated from one another and the host, do not allow privilege escalation, have defined resource limits, and are verified against common CVEs.

Configuration: Teams should manage a CNF’s configuration in a declarative manner—using ConfigMaps, Operators, or other declarative interfaces—to design the desired outcome, not how to achieve said outcome. Declarative configuration doesn’t have to be executed to be understood, making it far less prone to error than imperative configuration or even the most well-maintained sequences of `kubectl` commands.

After deploying numerous tests in each category, the CNF Test Suite outputs flexible scoring and suggestions for remediation for each category (or one category if you choose that in the CLI), giving you practical next steps on improving your CNF to better follow cloud native best practices. It’s a powerful—and still growing—solution for the telecommunications industry to embrace the cloud native ecosystem in a way that’s controllable, observable, and validated by all the expertise under the CNCF umbrella.

What’s next for the CNF Test Suite?

The Test Suite initiative will continue to work closely with the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG), collecting feedback based on real-world use cases and evolving the project. As the CNF WG publishes more recommended practices for cloud native telcos, the CNF Test Suite team will add more tests to validate each.

In fact, v0.26.0, released on February 25, 2022, includes six new workload tests, bug fixes, and improved documentation around platform tests. If you’d like to get involved and shape the future of the CNF Test Suite, there are already several ways to provide feedback or contribute code, documentation, or example CNFs:

Visit the CNF Test Suite on GitHub
Continue the conversation on Slack (#cnf-testsuite-dev)
Attend CNF Test Suite Contributor calls on Thursdays at 15:15 UTC
Join the CNF Working Group meetings on Mondays at 16:00 UTC

Looking ahead: The CNF Certification Program

The CNF Test Suite is just the first exciting step in the upcoming Cloud Native Network Function (CNF) Certification Program. We’re looking forward to making the CNF Test Suite the de facto tool for network equipment providers and CNF development teams to prove—and then certify—that they’re adopting cloud native best practices in new products and services.

The wins for the telecommunications industry are clear:

Providers get verification that their cloud native applications and architectures adhere to cloud native best practices.
Their customers get verification that the cloud native services or networks they’re procuring are actually cloud native.

And they both get even better reliability, reduced risk, and lowered capital/operating costs.

We’re planning on supporting any product that runs in a certified Kubernetes environment to make sure organizations build CNFs that are compatible with any major public cloud providers or on-premises environments. We haven’t yet published the certification requirements, but they will be similar to the k8s-conformance process, where you can submit results via pull request and receive updates on your certification process over email.

As the CNF Certification Program develops, both the TUG and CNF-WG will engage with organizations that use the Test Suite heavily to make improvements and stay up to date on the latest cloud native best practices. We’re excited to see how the telecommunications industry evolves by adopting more cloud native principles, like loosely-coupled systems and immutability, and gathering proof of their hard work via the CNF Test Suite. That’s how we ensure a complex and essential industry takes the right next steps toward the best technology infrastructure has to offer—without sacrificing an inch on reliability.

To take the next steps with the CNF Test Suite and prepare your organization for the upcoming CNF Certification Program, schedule a personalized CNF Test Suite demo or attend Cloud Native Telco Day, a co-located Event at KubeCon + CloudNativeCon Europe 2022 on May 16, 2022.

The post Looking Ahead: The CNF Certification Program appeared first on Linux Foundation.

Hacking the Linux Kernel in Ada – Part 3

For this three-part series, we implemented a ‘pedal to the metal’, GPIO-driven flashing of an LED in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64), in my favorite programming language … Ada!

You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problems.

Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.

Binding 101

The binding thickness

Our code boundary to the Linux kernel C functions lies in kernel.ads. For an optional “adaptation” opportunity, kernel.adb exists before breaking into the concrete C binding. Take printk (the printf equivalent in kernel space) for example. In C, you would call printk(“hello\n”). Ada strings are not null-terminated; they are an array of characters. To make sure the passed Ada string stays valid on the C side, you expose specification signatures (.ads) that make sense when programming from an Ada point of view and “adapt” in the body implementation (.adb) before calling directly into the binding. Strictly speaking, our exposed Ada Printk would qualify as a “thick” binding even though the adaptation layer is minimal. This is in opposition to a “thin” binding, which is really a one-to-one mapping of the C signature, as implemented by Printk_C.

-- kernel.ads
procedure Printk (S : String); -- only this is visible for clients of kernel

-- kernel.adb
procedure Printk_C (S : String) with -- considered a thin binding
    Import        => true,
    Convention    => C,
    External_Name => "printk";

procedure Printk (S : String) is -- considered a thick binding
begin
   Printk_C (S & Ascii.Lf & Ascii.Nul); -- because we ‘mangle’ for Ada comfort
end;

The wrapper function

Binding to a wrapped C macro or static inline is often convenient: it potentially lets you inherit fixes and upgrades happening inside the macro implementation and is, depending on the context, potentially more portable. create_singlethread_workqueue, used in printk_wq.c as found in Part 1, makes a perfect example. Our driver has a C home in main.c. You create a C wrapping function calling the macro.

/* main.c */
extern struct workqueue_struct * wrap_create_singlethread_wq (const char* name)
{
   return create_singlethread_workqueue(name); /* calling the macro */
}

You then bind to this wrapper on the Ada side and use it. Done.

-- kernel.ads
function Create_Singlethread_Wq (Name : String) return Workqueue_Struct_Access with
   Import        => True,
   Convention    => C,
   External_Name => "wrap_create_singlethread_wq";

-- flash_led.adb
...
Wq := K.Create_Singlethread_Wq ("flash_led_work");

The reconstruction

Sometimes a macro called on the C side creates stuff, in place, which you end up needing on the Ada side. You can probably always bind to this resource but I find it often impedes code story. Take DECLARE_DELAYED_WORK(dw, delayed_work_cb) for example. From an outside point of view, it implicitly creates struct delayed_work dw in place.

/* https://elixir.bootlin.com/linux/v4.9.294/source/include/linux/workqueue.h */
#define DECLARE_DELAYED_WORK(n, f)					\
	struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, 0)

Using this macro, the only way I found to get hold of dw from Ada without crashing (returning dw from a wrapper never worked) was to globally call DECLARE_DELAYED_WORK(n, f) in main.c and then bind only to dw. Having to maintain this from C, making it magically appear in Ada, felt like “breadboard wiring” to me. In the code repository, you will find that we fully reconstructed this macro under the procedure of the same name, Declare_Delayed_Work.

The pointer shortcut

Most published Ada to C bindings implement full definition parity. This is an ideal situation in most cases, but it also comes with complexity: it may generate many 3rd-party files, sometimes buried deep, out-of-sync definitions, etc. What can you do when complete bindings are missing or you just want to move lean and fast? Maybe you are making a prototype, you want minimal dependencies, or the binding part is peripheral, e.g. you may only need a quick native window API. You get the point.

Depending on the context you do not always need the full type definitions to get going. Anytime you are strictly dealing with a handle pointer (not owning the memory), you can take a shortcut. Let’s bind to gpio_get_value to illustrate. Again, I follow and lay out all the C signatures found in the kernel sources leading to concrete stuff, where we can bind.




/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)include/linux/gpio.h */
static inline int gpio_get_value(unsigned int gpio)
{
	return __gpio_get_value(gpio);
}

/* (+)include/asm-generic/gpio.h */
static inline int __gpio_get_value(unsigned gpio)
{
	return gpiod_get_raw_value(gpio_to_desc(gpio));
}
/* (+)include/linux/gpio/consumer.h */
struct gpio_desc *gpio_to_desc(unsigned gpio);            /* bindable */

int gpiod_get_raw_value(const struct gpio_desc *desc);    /* bindable */

/* (+)drivers/gpio/gpiolib.h */
struct gpio_desc {
	struct gpio_device	*gdev;
	unsigned long		flags;
...
	const char		*name;
};

Inspecting the C definitions we find that gpiod_get_raw_value and gpio_to_desc are our available functions for binding. We note gpio_to_desc uses a transient pointer of type gpio_desc *. Because we do not touch or own a full gpio_desc instance, we can happily skip defining it in full (and any dependent types, e.g. gpio_device).

By declaring type Gpio_Desc_Acc is new System.Address; we create an equivalent to gpio_desc *. After all, a C pointer is a named system address. We now have everything we need to build our Ada version of gpio_get_value.

-- kernel.ads
package Ic renames Interfaces.C;

function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int; -- only this is visible for clients of kernel

-- kernel.adb
type Gpio_Desc_Acc is new System.Address; -- shortcut

function Gpio_To_Desc_C (Gpio : Ic.Unsigned) return Gpio_Desc_Acc with
   Import        => True,
   Convention    => C,
   External_Name => "gpio_to_desc";
 
function Gpiod_Get_Raw_Value_C (Desc : Gpio_Desc_Acc) return Ic.Int with
   Import        => True,
   Convention    => C,
   External_Name => "gpiod_get_raw_value";

function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int is
   Desc : Gpio_Desc_Acc := Gpio_To_Desc_C (Gpio);
begin
   return Gpiod_Get_Raw_Value_C (Desc);
end;

The Raw bindings, “100% Ada”

In most production contexts we cannot recommend reconstructing unbindable kernel API calls in Ada. Wrapping the C macro or static inline is definitely easier, safer, more portable and more maintainable. The following goes full-blown Ada for the sake of illustrating some interesting nuts and bolts and to show that it is always possible.

Flags, first take

Given the willpower, you can always reconstruct the targeted macro or static inline in Ada. Let’s come back to create_singlethread_workqueue. If you take the time to expand its macro using GCC, this is what you get.

$ gcc -E [~ 80_switches_for_valid_ko] printk_wq.c 
...
wq = __alloc_workqueue_key(("%s"),
                          (WQ_UNBOUND |
                           __WQ_ORDERED |
                           __WQ_ORDERED_EXPLICIT |
                          (__WQ_LEGACY | WQ_MEM_RECLAIM)),
                          (1),
                          ((void *)0),
                          ((void *)0),
                          "my_wq");

All arguments are straightforward to map except the OR‘ed flags. Let’s search the kernel sources for those flags.

/* https://elixir.bootlin.com/linux/v4.9.294/source/include/linux/workqueue.h */
enum {
   WQ_UNBOUND             = 1 << 1,
   ...
   WQ_POWER_EFFICIENT     = 1 << 7,

   __WQ_DRAINING          = 1 << 16,
   ...
   __WQ_ORDERED_EXPLICIT  = 1 << 19,

   WQ_MAX_ACTIVE          = 512,     
   WQ_MAX_UNBOUND_PER_CPU = 4,      
   WQ_DFL_ACTIVE          = WQ_MAX_ACTIVE / 2,
};

Here are our design decisions for the reconstruction:

  • WQ_MAX_ACTIVE, WQ_MAX_UNBOUND_PER_CPU, WQ_DFL_ACTIVE are constants, not flags, so we keep them out.
  • The enum is anonymous, let’s give it a proper named type.
  • The __WQ pattern is probably a convention, but at the same time usage is mixed, e.g. WQ_UNBOUND | __WQ_ORDERED, so let’s flatten all this.

Because we do not use these flags elsewhere in our code base, the occasion is perfect to show that in Ada we can keep all this modeling local to our unique function using it.

-- kernel.ads
package Ic renames Interfaces.C;

type Wq_Struct_Access is new System.Address;      -- shortcut
type Lock_Class_Key_Access is new System.Address; -- shortcut
Null_Lock : Lock_Class_Key_Access := 
Lock_Class_Key_Access (System.Null_Address); -- typed ((void *)0) equiv.

-- kernel.adb
type Bool is (NO, YES) with Size => 1;       -- enum holding on 1 bit
for Bool use (NO => 0, YES => 1);            -- "represented" by 0, 1 too

function Alloc_Workqueue_Key_C ...
   External_Name => "__alloc_workqueue_key";      -- thin binding

function Create_Singlethread_Wq (Name : String) return Wq_Struct_Access is
   type Workqueue_Flags is record
      ...
      WQ_POWER_EFFICIENT  : Bool;
      WQ_DRAINING         : Bool;
      ...
   end record with Size => Ic.Unsigned'Size;
   for Workqueue_Flags use record
      ...
      WQ_POWER_EFFICIENT  at 0 range  7 ..  7;
      WQ_DRAINING         at 0 range 16 .. 16;
      ...
   end record;
   Flags : Workqueue_Flags := (WQ_UNBOUND          => YES,
                               WQ_ORDERED          => YES,
                               WQ_ORDERED_EXPLICIT => YES,
                               WQ_LEGACY           => YES,
                               WQ_MEM_RECLAIM      => YES,
                               Others              => NO);
   Wq_Flags : Ic.Unsigned with Address => Flags'Address;
begin
   return Alloc_Workqueue_Key_C ("%s", Wq_Flags, 1, Null_Lock, "", Name);
end;

  • In C, each flag is implicitly encoded as an integer literal, bit-shifted by some amount. Because the __alloc_workqueue_key signature expects flags encoded as an unsigned int, it is reasonable to use Ic.Unsigned'Size to hold a Workqueue_Flags.
  • We build the representation of the Workqueue_Flags type similar to what we learned in Part 2 to model registers. Compared to the C version we now have NO => 0, YES => 1 semantics and no need for bitwise operations.
  • Remember, in Ada we roll with strong user-defined types for the greater good. Therefore something like Workqueue_Flags does not match the expected Flags : Ic.Unsigned parameter of our __alloc_workqueue_key thin binding. What should we do? You create a variable Wq_Flags : Ic.Unsigned and overlay it at the address of Flags : Workqueue_Flags, which you can now pass to __alloc_workqueue_key.
Wq_Flags : Ic.Unsigned with Address => Flags'Address; -- voila!
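
As a side note, if you prefer a value conversion over an address overlay, an instantiation of Ada.Unchecked_Conversion achieves the same thing. A minimal sketch, assuming the Workqueue_Flags record above (To_Unsigned is our name, not part of the original code):

with Ada.Unchecked_Conversion;
...
-- Reinterprets the record's bit pattern as the unsigned int the C side expects.
function To_Unsigned is
   new Ada.Unchecked_Conversion (Source => Workqueue_Flags,
                                 Target => Ic.Unsigned);
...
return Alloc_Workqueue_Key_C ("%s", To_Unsigned (Flags), 1, Null_Lock, "", Name);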

Ioremap and iowrite32

The core work of the raw_io version happens in Set_Gpio. Using Ioremap, we retrieve the kernel-mapped IO memory location for the physical address of the GPIO_OUT register. We then write the content of our Gpio_Control to this IO memory location through Io_Write_32.

-- kernel.ads
type Iomem_Access is new System.Address;

-- led.adb
package K renames Kernel;
package C renames Controllers;

procedure Set_Gpio (Pin : C.Pin; S : Led.State) is

   function Bit (S : Led.State) return C.Bit renames Led.State'Enum_Rep;

   Base_Addr : K.Iomem_Access;
   Control   : C.Gpio_Control := (Bits  => (others => 0), 
                                  Locks => (others => 0));
   Control_C : K.U32 with Address => Control'Address;
begin
   ...
   Control.Bits (Pin.Reg_Bit) := Bit (S); -- set the GPIO flags
   ...
   Base_Addr := Ioremap (C.Get_Register_Phys_Address (Pin.Port, C.GPIO_OUT),
                         Control_C'Size); -- get kernel mapped register addr.
   K.Io_Write_32 (Control_C, Base_Addr);  -- write our GPIO flags to this addr.
   ...
end;

Let's take the hard path of full reconstruction to illustrate some interesting details. We first implement ioremap. On the C side we find

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)arch/arm64/include/asm/io.h */
#define ioremap(addr, size) \
   __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))

extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);                       

Flags, second take

Here we are both lucky and unlucky. __ioremap is low-hanging fruit, while __pgprot(PROT_DEVICE_nGnRE) turns out to be a rabbit hole. I skip the intermediate expansions and report only the final result:

$ gcc -E [~ 80_switches_for_valid_ko] test_using_ioremap.c
…
void* membase = __ioremap(  
   (phys_addr + offset),
   (4),
   ((pgprot_t) {
      (((((((pteval_t)(3)) << 0) |
      (((pteval_t)(1)) << 10) |
      (((pteval_t)(3)) << 8)) |
      (arm64_kernel_unmapped_at_el0() ? (((pteval_t)(1)) << 11) : 0)) |
      (((pteval_t)(1)) << 53) |
      (((pteval_t)(1)) << 54) |
      (((pteval_t)(1)) << 55) |
      ((((pteval_t)(1)) << 51)) |
      (((pteval_t)((1))) << 2)))
   }))

Searching for the definitions in the kernel sources (a meaningful sample only):

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)arch/arm64/include/asm/pgtable-hwdef.h */
#define PTE_TYPE_MASK       (_AT(pteval_t, 3) << 0)
...
#define PTE_NG		    (_AT(pteval_t, 1) << 11) 
...
#define PTE_ATTRINDX(t)     (_AT(pteval_t, (t)) << 2)    

/* (+)arch/arm64/include/asm/mmu.h */
static inline bool arm64_kernel_unmapped_at_el0(void)   
{
   return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
   cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
}

/* (+)arch/arm64/include/asm/pgtable-prot.h */
#define PTE_DIRTY           (_AT(pteval_t, 1) << 55)    

/* (+)arch/arm64/include/asm/memory.h */
#define MT_DEVICE_nGnRE     1                           

The macro pattern _AT(pteval_t, x) can be cleared up right away. IIUC, it serves to handle being called from both assembly and C. When only the C case matters, as it does for us, it boils down to x, e.g. (((pteval_t)(1)) << 10) becomes 1 << 10.

arm64_kernel_unmapped_at_el0 is partly 'kernel configuration dependent' and defaults to 'yes', so let's simplify our job and always bring in PTE_NG, the value selected by the ternary, (((pteval_t)(1)) << 11), for all cases.
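
If we wanted to preserve that configuration dependence instead of hard-coding it, an Ada conditional expression would do. A purely illustrative sketch, assuming the Pgprot_T modular type introduced just below and a hypothetical Boolean constant mirroring CONFIG_UNMAP_KERNEL_AT_EL0:

-- Hypothetical mirror of CONFIG_UNMAP_KERNEL_AT_EL0=y; not part of the original code.
Kernel_Unmapped_At_EL0 : constant Boolean := True;

PTE_NG : constant Pgprot_T :=
   (if Kernel_Unmapped_At_EL0 then 2#1#e+11 else 0); -- the C ternary, folded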

(((pteval_t)((1))) << 2) turns out to be PTE_ATTRINDX(t) with MT_DEVICE_nGnRE as input. Inspecting the kernel sources, there are four other values intended as input to PTE_ATTRINDX(t). PTE_ATTRINDX behaves like a function, so let's implement it as such.

type Pgprot_T is mod 2**64; -- type will hold on 64 bits 

type Memory_T is range 0 .. 5;
MT_DEVICE_NGnRnE : constant Memory_T := 0;
MT_DEVICE_NGnRE  : constant Memory_T := 1;
...
MT_NORMAL_WT     : constant Memory_T := 5;

function PTE_ATTRINDX (Mt : Memory_T) return Pgprot_T is
   (Pgprot_T(Mt * 2#1#e+2)); -- base # based_integer # exponent
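
A quick sanity check, written as a sketch, confirms that the expression function matches the C expansion t << 2:

-- MT_DEVICE_nGnRE maps to 1, so shifting left by 2 gives 4, i.e. 2#100#.
pragma Assert (PTE_ATTRINDX (MT_DEVICE_NGnRE) = 2#100#);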

Here I want to show another way to replicate the C behavior, this time using bitwise operations. Something like the PTE_TYPE_MASK value, ((pteval_t)(3)) << 0, cannot be approached the way we did before: 3 takes two bits and is somewhat of a magic number. What we can do is improve on the representation. We are building bit masks, so why not express them directly with binary based literals? It even makes sense graphically.

PTE_VALID      : Pgprot_T := 2#1#e+0;
...
PTE_TYPE_MASK  : Pgprot_T := 2#1#e+0 + 2#1#e+1; -- our famous 3
...
PTE_HYP_XN     : Pgprot_T := 2#1#e+54;
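
As a side note, based literals also accept multi-bit values, so the same mask could be written in a single literal. A stylistic alternative, not the code used here:

PTE_TYPE_MASK_Alt : constant Pgprot_T := 2#11#; -- still our famous 3, bits 0 and 1 set
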
-- kernel.ads
type Phys_Addr_T is new System.Address;
type Iomem_Access is new System.Address;

-- kernel.adb
function Ioremap (Phys_Addr : Phys_Addr_T; 
                  Size      : Ic.Size_T) return Iomem_Access is
...         
   Pgprot : Pgprot_T := (PTE_TYPE_MASK or
                         PTE_AF        or
                         PTE_SHARED    or
                         PTE_NG        or
                         PTE_PXN       or
                         PTE_UXN       or
                         PTE_DIRTY     or
                         PTE_DBM       or
                         PTE_ATTRINDX (MT_DEVICE_NGnRE));
begin
   return Ioremap_C (Phys_Addr, Size, Pgprot);
end;

So what is interesting here?

  • Ada is flexible. The arrangement of the original Pgprot_T values did not allow a record mapping like the one we built for type Workqueue_Flags. We adapted by replicating the C implementation, OR'ing all the values to create a final mask.
  • Everything has been tidied up by strong typing. We are now stuck with disciplined code.
  • The representation is explicit, expressed in the intended base.
  • Once again, this typing machinery lives at the most restrictive scope, inside the Ioremap function. Because Ada scoping has few special rules, refactoring up or out of a scope usually boils down to a simple block-swapping game, as sketched below.
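
For instance, here is a minimal sketch, with made-up names, of what that swap looks like: if another unit ever needed these declarations, they could move verbatim from Ioremap's declarative part into a package spec.

-- Hypothetical promotion target; the declarations are cut and pasted unchanged.
package Pgprot is
   type Pgprot_T is mod 2**64;
   PTE_VALID     : constant Pgprot_T := 2#1#e+0;
   PTE_TYPE_MASK : constant Pgprot_T := 2#1#e+0 + 2#1#e+1;
   -- ... remaining constants and PTE_ATTRINDX, unchanged
end Pgprot;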

Emitting assembly

Now let's have a look at ioread32 and iowrite32. It turns out those are, again, a cascade of static inline functions and macros that ends up directly emitting GCC inline assembly (we detail only iowrite32).

/* https://elixir.bootlin.com/linux/v4.9.294/source(-) */
/* (+)include/asm-generic/io.h */
static inline void iowrite32(u32 value, volatile void __iomem *addr)
{
   writel(value, addr);
}
/* (+)include/asm/io.h */
#define writel(v,c)     ({ __iowmb(); writel_relaxed((v),(c)); })
#define __iowmb()       wmb()    

/* (+)include/asm/barrier.h */
#define wmb()           dsb(st) 
#define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

/* (+)arch/arm64/include/asm/io.h */
#define writel_relaxed(v,c) \
   ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
   
static inline void __raw_writel(u32 val, volatile void __iomem *addr)   
{
   asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
}

In Ada it becomes

with System.Machine_Code;
...
procedure Io_Write_32 (Val : U32; Addr : Iomem_Access) is
   use System.Machine_Code;
begin
   Asm (Template => "dsb st",
        Clobber  => "memory",
        Volatile => True);

   Asm (Template => "str %w0, [%1]",
        Inputs   => (U32'Asm_Input ("rZ", Val), 
                     Iomem_Access'Asm_Input ("r", Addr)),
        Volatile => True);
end;

This Io_Write_32 implementation is not portable, as we rebuilt the macro by following the expansion tailored for arm64. A C wrapper would be less trouble while ensuring portability. Nevertheless, we felt this experiment was a good opportunity to show inline assembly in Ada.
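
For reference, the portable route would reduce the Ada side to a thin import of a tiny C wrapper. A sketch, where ada_iowrite32 is a hypothetical one-line C function calling iowrite32:

-- Hypothetical thin binding; ada_iowrite32 would be a one-line C wrapper
-- around iowrite32, compiled alongside the rest of the module.
procedure Io_Write_32_Portable (Val : U32; Addr : Iomem_Access)
  with Import        => True,
       Convention    => C,
       External_Name => "ada_iowrite32";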

That’s it!

I hope you appreciated this moderately dense overview of Ada in the context of Linux kernel module development. I think we can agree that Ada is a disciplined and powerful contender when it comes to system, pedal-to-the-metal, programming. Thank you for your time and attention. Do not hesitate to reach out, and happy Ada coding!

I want to thank Quentin Ochem, Nicolas Setton, Fabien Chouteau, Jerome Lambourg, Michael Frank, Derek Schacht, Arnaud Charlet, Pat Bernardi, Leo Germond, and Artium Nihamkin for their different insights and feedback to nail this experiment.


Olivier Henley

The author, Olivier Henley, is a UX Engineer at AdaCore. His role is exploring new markets through technical stories. Prior to joining AdaCore, Olivier was a consultant software engineer for Autodesk. Prior to that, Olivier worked on AAA game titles such as For Honor and Rainbow Six Siege in addition to many R&D gaming endeavors at Ubisoft Montreal. Olivier graduated from the Electrical Engineering program at Polytechnique Montreal. He is a co-author of patent US8884949B1, describing the invention of a novel temporal filter implicating NI technology. An Ada advocate, Olivier actively curates GitHub's Awesome-Ada list.