How LF communities enable security measures required by the US Executive Order on Cybersecurity

Our communities take security seriously and have been instrumental in creating the tools and standards that every organization needs to comply with the recent US Executive Order

Overview

The US White House recently released its Executive Order (EO) on Improving the Nation’s Cybersecurity (along with a press call) to counter “persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy.”

In this post, we’ll show what the Linux Foundation’s communities have already built that support this EO and note some other ways to assist in the future. But first, let’s put things in context.

The Linux Foundation’s Open Source Security Initiatives In Context

We deeply care about security, including supply chain (SC) security. The Linux Foundation is home to some of the most important and widely-used OSS, including the Linux kernel and Kubernetes. The LF’s previous Core Infrastructure Initiative (CII) and its current Open Source Security Foundation (OpenSSF) have been working to secure OSS, both in general and in widely-used components. The OpenSSF, in particular, is a broad industry coalition “collaborating to secure the open source ecosystem.”

The Software Package Data Exchange (SPDX) project has been working for the last ten years to enable software transparency and the exchange of software bill of materials (SBOM) data necessary for security analysis. SPDX is in the final stages of review to be an ISO standard, is supported by global companies with massive supply chains, and has a large open and closed source tooling support ecosystem. SPDX already meets the requirements of the executive order for SBOMs.

Finally, several LF foundations have focused on the security of various verticals. For example, LF Public Health and LF Energy have worked on security in their respective sectors. Our cloud computing community, collaborating within the CNCF, has also produced a guide to supporting software supply chain best practices for cloud systems and applications.

Given that context, let’s look at some of the EO statements (in the order they are written) and how our communities have invested years in open collaboration to address these challenges.

Best Practices

EO sections 4(b) and 4(c) say that

The “Secretary of Commerce [acting through NIST] shall solicit input from the Federal Government, private sector, academia, and other appropriate actors to identify existing or develop new standards, tools, and best practices for complying with the standards, procedures, or criteria [including] criteria that can be used to evaluate software security, include criteria to evaluate the security practices of the developers and suppliers themselves, and identify innovative tools or methods to demonstrate conformance with secure practices [and guidelines] for enhancing software supply chain security.” Later in EO 4(e)(ix) it discusses “attesting to conformity with secure software development practices.”

The OpenSSF’s CII Best Practices badge project specifically identifies best practices for OSS, focusing on security and including criteria to evaluate the security practices of developers and suppliers (it has over 3,800 participating projects). The LF is also working on SLSA (currently in development) as potential additional guidance focused on further addressing supply chain issues.

Best practices are only useful if developers understand them, yet most software developers have never received education or training in developing secure software. The LF has developed and released its Secure Software Development Fundamentals set of courses available on edX to anyone at no cost. The OpenSSF Best Practices Working Group (WG) actively works to identify and promulgate best practices. We also provide a number of specific standards, tools, and best practices, as discussed below.

Encryption and Data Confidentiality

The EO 3(d) requires agencies to adopt “encryption for data at rest and in transit.” Encryption in transit is implemented on the web using the TLS (“https://”) protocol, and Let’s Encrypt is the world’s largest certificate authority for TLS certificates.
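
As a small illustration of what "encryption in transit" means in practice, a client can insist on certificate verification and a modern protocol floor before connecting. The sketch below uses only Python's standard library; the TLS 1.2 minimum chosen here is our own illustrative policy, not something the EO mandates:

```python
import ssl

# Build a client-side TLS context. create_default_context() enables
# certificate-chain verification and hostname checking by default,
# which is what encryption in transit depends on in practice.
context = ssl.create_default_context()

# Refuse legacy protocol versions (illustrative policy choice).
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # → True
print(context.check_hostname)                    # → True
```

A context configured this way can then be handed to `ssl.SSLContext.wrap_socket` or an HTTP client to make connections that fail closed when the server's certificate does not check out.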

In addition, the LF Confidential Computing Consortium is dedicated to defining and accelerating the adoption of confidential computing. Confidential computing protects data in use (not just at rest and in transit) by performing computation in a hardware-based Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use.

Supply Chain Integrity

The EO 4(e)(iii) states a requirement for

 “employing automated tools, or comparable processes, to maintain trusted source code supply chains, thereby ensuring the integrity of the code.” 

The LF has many projects that support SC integrity, in particular:

in-toto is a framework specifically designed to secure the integrity of software supply chains.

The Update Framework (TUF) helps developers maintain the security of software update systems, and is used in production by various tech companies and open source organizations.  

Uptane is a variant of TUF; it’s an open and secure software update system design which protects software delivered over-the-air to the computerized units of automobiles.

sigstore is a project to provide a public good / non-profit service to improve the open source software supply chain by easing the adoption of cryptographic software signing (of artifacts such as release files and container images) backed by transparency log technologies (which provide a tamper-resistant public log). 

We are also funding focused work on tools that ease signing and verifying origins; e.g., we’re working to extend git to enable pluggable support for signatures, and the patatt tool provides an easy way to add end-to-end cryptographic attestation to patches sent via email.

OpenChain (ISO 5230) is the International Standard for open source license compliance. Application of OpenChain requires identification of OSS components. While OpenChain by itself focuses more on licenses, that identification is easily reused to analyze other aspects of those components once they’re identified (for example, to look for known vulnerabilities).

Software Bill of Materials (SBOMs) support supply chain integrity; our SBOM work is so extensive that we’ll discuss that separately.
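
The transparency logs mentioned above are, at their core, append-only structures in which each new head hash commits to everything before it, so rewriting history is detectable. This toy hash-chain sketch is our own illustration of the underlying idea, not sigstore's actual Merkle-tree log format:

```python
import hashlib

def new_head(prev_head: bytes, entry: bytes) -> bytes:
    # Each head commits to the previous head plus the new entry,
    # so altering any earlier entry changes every later head.
    return hashlib.sha256(prev_head + entry).digest()

EMPTY = b"\x00" * 32  # sentinel head for an empty log
log = [b"digest of release-1.0.tar.gz", b"digest of release-1.1.tar.gz"]

head = EMPTY
for entry in log:
    head = new_head(head, entry)

# A verifier who remembered an earlier head can replay the log:
# any tampering fails to reproduce the head they recorded.
replayed = EMPTY
for entry in log:
    replayed = new_head(replayed, entry)
assert replayed == head
```

Real transparency logs use Merkle trees so that verifiers can check individual entries with logarithmic-size proofs instead of replaying the whole log, but the tamper-evidence property is the same.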

Software Bill of Materials (SBOMs)

Many cyber risks come from using components with known vulnerabilities. Known vulnerabilities are especially concerning in key infrastructure industries, such as the national fuel pipelines, telecommunications networks, utilities, and energy grids. The exploitation of those vulnerabilities could lead to interruption of supply lines and service, and in some cases, loss of life due to a cyberattack.

One-time reviews don’t help since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of the software environments that run these key infrastructure systems, similar to how food ingredients are made visible.

A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical as it relates to a national digital infrastructure used within government agencies and in key industries that present national security risks if penetrated. Use of SBOMs would improve understanding of the operational and cyber risks of those software components from their originating supply chain.

The EO has extensive text about requiring a software bill of materials (SBOM) and tasks that depend on SBOMs:

EO 4(e) requires providing a purchaser an SBOM “for each product directly or by publishing it on a public website” and “ensuring and attesting… the integrity and provenance of open source software used within any portion of a product.” It also requires tasks that typically require SBOMs, e.g., “employing automated tools, or comparable processes, that check for known and potential vulnerabilities and remediate them, which shall operate regularly….” and “maintaining accurate and up-to-date data, provenance (i.e., origin) of software code or components, and controls on internal and third-party software components, tools, and services present in software development processes, and performing audits and enforcement of these controls on a recurring basis.” EO 4(f) requires publishing “minimum elements for an SBOM,” and EO 10(j) formally defines an SBOM as a “formal record containing the details and supply chain relationships of various components used in building software…  The SBOM enumerates [assembled] components in a product… analogous to a list of ingredients on food packaging.”

The LF has been developing and refining SPDX for over ten years; SPDX is used worldwide and is in the process of being approved as ISO/IEC Draft International Standard (DIS) 5962.  SPDX is a file format that identifies the software components within a larger piece of computer software and metadata such as the licenses of those components. SPDX 2.2 already supports the current guidance from the National Telecommunications and Information Administration (NTIA) for minimum SBOM elements. Some ecosystems have ecosystem-specific conventions for SBOM information, but SPDX can provide information across all arbitrary ecosystems.
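
To make that concrete, here is a minimal SPDX 2.2 tag-value fragment describing a document and one package. The names, URLs, and identifiers are invented for illustration; real documents carry many more fields (relationships, checksums, file-level data, and so on):

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdxdocs/example-app-1.0
Creator: Tool: example-sbom-generator-0.1
Created: 2021-06-01T00:00:00Z

PackageName: example-lib
SPDXID: SPDXRef-Package-example-lib
PackageVersion: 1.4.2
PackageDownloadLocation: https://example.com/example-lib-1.4.2.tar.gz
PackageLicenseConcluded: MIT
```

The `PackageName`/`PackageVersion` pairs are what make automated vulnerability matching possible: a consumer can compare them against vulnerability databases without ever inspecting the software itself.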

SPDX is real and in use today, with increased adoption expected in the future. For example:

An NTIA “plugfest” demonstrated ten different producers generating SPDX.

SPDX supports acquiring data from different sources (e.g., source code analysis, executables from producers, and analysis from third parties).

A corpus of some LF projects with SPDX source SBOMs is available.

Various LF projects are working to generate binary SBOMs as part of their builds, including Yocto and Zephyr.

To assist with further SPDX adoption, the LF is paying to write SPDX plugins for major package managers.

Vulnerability Disclosure

No matter what, some vulnerabilities will be found later and need to be fixed. EO 4(e)(viii) requires “participating in a vulnerability disclosure program that includes a reporting and disclosure process.” That way, vulnerabilities that are found can be reported to the organizations that can fix them. 

The CII Best Practices badge passing criteria requires that OSS projects specifically identify how to report vulnerabilities to them. More broadly, the OpenSSF Vulnerability Disclosures Working Group is working to help “mature and advocate well-managed vulnerability reporting and communication” for OSS. Most widely-used Linux distributions have a robust security response team, but the Alpine Linux distribution (widely used in container-based systems) did not. The Linux Foundation and Google funded various improvements to Alpine Linux, including a security response team.

We hope that the US will update its Vulnerabilities Equities Process (VEP) to work more cooperatively with commercial organizations, including OSS projects, to share more vulnerability information. Every vulnerability that the US fails to disclose is a vulnerability that can be found and exploited by attackers. We would welcome such discussions.

Critical Software

It’s especially important to focus on critical software — but what is critical software? EO 4(g) requires the executive branch to define “critical software,” and 4(h) requires the executive branch to “identify and make available to agencies a list of categories of software and software products… meeting the definition of critical software.”

The Linux Foundation and the Laboratory for Innovation Science at Harvard (LISH) developed the report “Vulnerabilities in the Core: A Preliminary Report and Census II of Open Source Software,” which analyzed the use of OSS to help identify critical software. The LF and LISH are in the process of updating that report. The CII identified many important projects and assisted them, including OpenSSL (after Heartbleed), OpenSSH, GnuPG, Frama-C, and the OWASP Zed Attack Proxy (ZAP). The OpenSSF Securing Critical Projects Working Group has been working to better identify critical OSS projects and to focus resources on those that need help. There is already a first-cut list of such projects, along with efforts to fund such aid.

Internet of Things (IoT)

Unfortunately, internet-of-things (IoT) devices often have notoriously bad security. It’s often been said that “the S in IoT stands for security.” 

EO 4(s) initiates a pilot program to “educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices [based on existing consumer product labeling programs], and shall consider ways to incentivize manufacturers and developers to participate in these programs.” EO 4(t) states that such “IoT cybersecurity criteria” shall “reflect increasingly comprehensive levels of testing and assessment.”

The Linux Foundation develops and is home to many of the key components of IoT systems. These include:

The Linux kernel, used by many IoT devices.

The Yocto Project, which creates custom Linux-based systems for IoT and embedded systems; Yocto supports fully reproducible builds.

EdgeX Foundry, a flexible OSS framework that facilitates interoperability between devices and applications at the IoT edge, and has been downloaded millions of times.

The Zephyr project, which provides a real-time operating system (RTOS) used by many resource-constrained IoT devices and is able to generate SBOMs automatically during build. Zephyr is one of the few open source projects that is a CVE Numbering Authority.

The seL4 microkernel, the most assured operating system kernel in the world, notable for its comprehensive formal verification.

Security Labeling

EO 4(u) focuses on identifying:

“secure software development practices or criteria for a consumer software labeling program [that reflects] a baseline level of secure practices, and if practicable, shall reflect increasingly comprehensive levels of testing and assessment that a product may have undergone [and] identify, modify, or develop a recommended label or, if practicable, a tiered software security rating system.”

The OpenSSF’s CII Best Practices badge project (noted earlier) specifically identifies best practices for OSS development, and is already tiered (passing, silver, and gold). Over 3,800 projects currently participate.

There are also a number of projects that relate to measuring security and/or broader quality:

Community Health Analytics Open Source Software (CHAOSS) focuses on creating analytics and metrics to help define community health and identify risk.

The OpenSSF Security Metrics Project, currently in development, was created to collect, aggregate, analyze, and communicate relevant security data about open source projects.

The OpenSSF Security Reviews initiative provides a collection of security reviews of open source software.

The OpenSSF Security Scorecards provide a set of automated pass/fail checks for a quick review of arbitrary OSS.
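
To give a flavor of what an automated pass/fail check looks like, here is a hypothetical miniature check in the spirit of Security Scorecards. The function name and candidate file list are our own invention for illustration; the real project implements many more checks (and is written in Go):

```python
import os

def has_security_policy(repo_path: str) -> bool:
    # Pass/fail: does the repository document how to report
    # vulnerabilities? (One of many checks a scorecard might run.)
    candidates = ("SECURITY.md", "docs/SECURITY.md", ".github/SECURITY.md")
    return any(
        os.path.isfile(os.path.join(repo_path, name)) for name in candidates
    )
```

A real scorecard aggregates dozens of such checks (branch protection, dependency pinning, fuzzing, signed releases, and so on) into a single report, so consumers can compare projects at a glance.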

Conclusion

The Linux Foundation (LF) has long been working to help improve the security of open source software (OSS), which powers systems worldwide. We couldn’t do this without the many contributions of time, money, and other resources from numerous companies and individuals; we gratefully thank them all.  We are always delighted to work with anyone to improve the development and deployment of open source software, which is important to us all.

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation

The post How LF communities enable security measures required by the US Executive Order on Cybersecurity appeared first on Linux Foundation.

How WASI Makes Containerization More Efficient

By Marco Fioretti

WebAssembly, or Wasm for short, is a standardized binary format that allows software written in any language to run, without customization, on any platform, inside sandboxed runtimes (that is, virtual machines) at near-native speed. Because those runtimes are isolated from their host environment, the WebAssembly System Interface (WASI) gives developers, who adopt Wasm precisely so they can write software once without worrying about where it will run, a single, standard way to call the low-level functions that are present on any platform.

The previous article in this series describes the goals, design principles, and architecture of WASI. This time, we present real-world, usable projects and services based on WASI that also clarify its role in the big picture: facilitating the containerization of virtually any application, much more efficiently than bulkier container technologies such as Docker.

Coding with WASI is only half the job

Programmers can already write and compile code, for example in C or Rust, to create .wasm modules usable in any WASI-compliant environment. The question is: do we already have runtimes that can actually execute those modules outside web browsers? The answer is yes, and more than one. One general-purpose solution is Wasmtime, from the Bytecode Alliance. This project develops a WASI-compliant runtime for Wasm modules that may be used standalone, as a command-line tool, or embedded into other applications as a library: at the moment, besides the command line, Wasmtime is usable from Rust, C, Python, .NET, and Go.
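As a minimal sketch of that workflow, assuming the Rust toolchain (with the wasm32-wasi target) and the wasmtime CLI are already installed, an ordinary Rust program can be compiled to a .wasm module and executed entirely outside a browser:

```shell
# Sketch: compiling a Rust program to a WASI module and running it with
# the standalone Wasmtime runtime. Assumes rustup and wasmtime are installed.
rustup target add wasm32-wasi              # one-time target setup
cat > hello.rs <<'EOF'
fn main() {
    // Ordinary Rust code; WASI supplies stdout at run time.
    println!("Hello from WASI!");
}
EOF
rustc --target wasm32-wasi hello.rs -o hello.wasm
wasmtime hello.wasm                        # prints: Hello from WASI!
```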

Other WASI runtimes are optimized, to varying degrees, for particular use cases or programming communities. The following examples give an idea of what is possible, with no pretense of completeness.

WASI on servers, or REPLACING some servers

Wasmer is an open-source Wasm runtime written in Rust, whose 1.0 version was released in January 2021. Wasmer is specifically designed to run, on generic servers, .wasm modules that use WASI methods to interact with native functions of the host operating system.

Besides a standalone runtime that may run Wasm binaries on any platform and chipset, Wasmer is designed, like Wasmtime, to allow the use of Wasm modules from many other languages, starting from C/C++, Rust, Python, Go, PHP and Ruby.

To prove its capabilities, the developers of Wasmer compiled an unmodified version of the NGINX web server as a .wasm module, and then actually ran it, using WASI calls to interact with the host system.

Wasmer is also the first Wasm runtime to fully support both WASI and high-performance programming with the Single Instruction, Multiple Data (SIMD) technique: in 2019, the two technologies were used together, with very interesting results, to emulate particle physics. Wasmer developers also participate in work to run Wasm modules inside the Linux kernel, to execute securely, via WASI, tasks that would otherwise need more checks and more context switching, and therefore incur performance hits.

Artificial Intelligence, faster than Docker and simpler than Node.js

Second State has developed another virtual machine to run server-side applications “safer and 10x faster than Docker”, called SSVM. What is particularly interesting about the SSVM runtime is why and how it added and optimized support for WebAssembly and WASI: direct access to hardware to provide Artificial Intelligence and machine learning “as a service in Node.js, written in Rust, over the Web”. Typical applications, running up to 25 times faster than equivalent Python code, include recognition of images and other patterns.

The SSVM toolchain can also be used to create Wasm modules for Deno, a Rust runtime for JavaScript and TypeScript created to address the “10 things the creator of Node.js regrets about it”; Deno supports WASI for Wasm modules that need to access system resources.

WASI gaming and more, right at the cloud edge

Fastly, an edge cloud platform provider, has developed and then released as Open Source its own WebAssembly compiler and runtime, called Lucet. Fastly created this tool specifically to support faster and safer execution of the code that its customers write in any language, for the several use cases of the Fastly platform. To show the capabilities of Wasm and WASI in edge computing, a Fastly engineer recently announced that he has ported the Doom first-person shooter game to run on Fastly’s edge cloud.

WebAssembly and containers? What’s the difference?

Using WASI and the already-mentioned Wasmtime, it is possible both to run Wasm modules from .NET Core applications and to generate modules in the same format with .NET’s Roslyn compiler. Even more interesting are Microsoft’s Krustlets, that is, “Kubernetes Rust kubelets”. These are a way to orchestrate and run WebAssembly “workloads” alongside standard containers, with Kubernetes. In other words, Wasm and WASI can already enable the orchestration, with standard systems like Kubernetes, of thousands of generic applications, each isolated at least as well as with traditional containers, side by side with them if needed, but with much smaller overhead.
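To illustrate the idea, a Krustlet-style setup marks Wasm workloads with a distinct architecture so the Kubernetes scheduler routes them to Wasm-capable nodes. The manifest below is a hypothetical sketch modeled on early Krustlet examples: the module image reference, taint values, and node labels are assumptions, not an exact reproduction of any release.

```shell
# Sketch: submitting a Wasm module as a Kubernetes pod to a Krustlet
# (WASI) node. The image reference and taint/label values below are
# illustrative assumptions modeled on early Krustlet demos.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasi
spec:
  containers:
    - name: hello-wasi
      image: webassembly.azurecr.io/hello-wasm:v1   # assumed module reference
  nodeSelector:
    kubernetes.io/arch: wasm32-wasi                 # route to the Wasm kubelet
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
EOF
```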

A WASI-driven Internet of Things

The ability to execute the same binary format on extremely efficient virtual machines that run on many different platforms means even more than it may seem at first sight, because:

“a WASI-enabled JavaScript runtime and simple firmware may keep a device’s software in sync with a cloud-hosted or locally hosted repository”.

In case you haven’t noticed, procedures like that may make automatic testing and deployment of new firmware or software for IoT, or any remote device really, much easier and more reliable than they are today. If a remote device can run WebAssembly bytecode, any developer may reliably write and test new software for it simply by using “basic simulators with digital twins” of that device, as discussed here. Isn’t WASI… interesting?

The post How WASI Makes Containerization More Efficient appeared first on Linux Foundation – Training.

A practical view of the xargs command

Create your own custom command line arguments with the xargs command.
Alex Callejas
Wed, 5/12/2021 at 3:51pm

Image by Gerd Altmann from Pixabay

The day-to-day tasks of sysadmins differ for everyone; however, there are simple tasks that are executed in the same way on all managed systems. In the days when disk space was a daily risk factor for administrators, it was vitally important to locate the directory or filesystem to investigate.
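As a small illustration of the kind of one-liner this article builds toward, xargs can turn a stream of file names into batched commands, for example to rank the largest log files under a directory (the path here is illustrative):

```shell
# Rank the five largest *.log files under /var/log (illustrative path):
# find emits a NUL-separated file list, xargs feeds it to du in batches,
# and sort/head rank the sizes in descending numeric order.
find /var/log -type f -name '*.log' -print0 \
  | xargs -0 du -k 2>/dev/null \
  | sort -rn \
  | head -5
# xargs also batches arguments: -n limits how many items go to each
# command invocation, so the next line prints two numbers per line.
printf '1\n2\n3\n4\n' | xargs -n 2 echo
```

The `-print0`/`-0` pairing is what makes this safe for file names containing spaces, since NUL is the only character that can never appear in a path.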

Topics:  
Linux  
Linux Administration  
Scripting  
Read More at Enable Sysadmin

Hyperledger Announces 2021 Blockchain Brand Survey

Hyperledger, a Linux Foundation project that was established in 2015, is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration including participation from leaders in finance, banking, healthcare, supply chains, manufacturing, and technology. 

Together with Linux Foundation Research, Hyperledger is conducting a survey to measure the market awareness and perceptions of Hyperledger and its projects relative to other blockchain platforms used in the technology industry, specifically identifying myths and misperceptions. Additionally, the survey seeks to help Hyperledger articulate the perceived time to production readiness for products and understand motivations for developers that both use and contribute to Hyperledger technologies.

Participants who complete the survey will receive a 50 percent discount on attendance to Hyperledger Global Forum, June 8-10, 2021.

Please participate now; we intend to close the survey in early June. 

Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be displayed in the final results. 

This survey should take no more than 20 minutes of your time.

To take the 2021 Hyperledger Market Survey, click the button below:

Thanks to our survey partner Linux Foundation Japan.

SURVEY GOALS

Thank you for taking the time to participate in this survey conducted by Hyperledger, an open source project at the Linux Foundation focused on developing a suite of stable frameworks, tools, and libraries for enterprise-grade blockchain deployments.

Hyperledger and its affiliated projects are hosted by the Linux Foundation.

This survey will provide insights into the challenges, familiarity, and misconceptions about Hyperledger and its suite of technologies. We hope these insights will help guide us in the growth and expansion of marketing and recruitment efforts to help grow projects and our community.

This survey will provide insights into:

What is the awareness, familiarity, and understanding of Hyperledger overall and by project?
What are the myths and misperceptions of Hyperledger (e.g., around what it seeks to achieve, the number of projects, who is involved, and who the competitors are)?
How likely are respondents to purchase or adopt blockchain technology?
What is the appeal of joining the Hyperledger community?
What are the perceptions of business blockchain technology?
What is the perceived time to production readiness?
What are developers’ motivations for contributing to or using Hyperledger?

PRIVACY

Your name and company name will not be displayed. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that members of the Hyperledger survey committee who are not LF employees will review the survey results. 

VISIBILITY

We will summarize the survey data and share the findings during the Hyperledger Member Summit later in the year. The summary report will be published on the Hyperledger and Linux Foundation websites. In addition, we will be producing an in-depth report of the survey which will be shared with Hyperledger membership.

QUESTIONS

If you have questions regarding this survey, please email us at survey@hyperledger.org

Sign up for the Hyperledger Newsletter at https://hyperledger.org 

The post Hyperledger Announces 2021 Blockchain Brand Survey appeared first on Linux Foundation.

Recursive Vim macros: One step further into automating repetitive tasks

Take Vim to the limit with recursive macros.
Read More at Enable Sysadmin

What is your server hardware refresh schedule?

Tech refresh is a continuously occurring task for sysadmins, but what does your corporate refresh schedule look like?
Read More at Enable Sysadmin

What will technology look like in 30 years?

Can we predict the technology of the future? 
Read More at Enable Sysadmin

Open Source API Gateway KrakenD Becomes Linux Foundation Project

KrakenD framework becomes the Lura Project and gets a home at the Linux Foundation, where it will be the only enterprise-grade API Gateway hosted in a neutral, open forum

SAN FRANCISCO, May 11, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it is hosting the Lura Project, formerly the KrakenD open source project. Lura is a framework for building Application Programming Interface (API) gateways that goes beyond a simple reverse proxy: it functions as an aggregator for many microservices and serves as a declarative tool for creating endpoints.

Partners include 99P Labs (supported by Honda and The Ohio State University), Ardan Studios, Hepsiburada, Openroom, Postman, Skalena and Stayforlong.

“By being hosted at the Linux Foundation, the Lura Project will extend the legacy of the KrakenD open source framework and be better poised to support its massive adoption among more than one million servers every month,” said Albert Lombarte, CEO, KrakenD. “The Foundation’s open governance model will accelerate development and community support for this amazing success.”

API Gateways have become even more valuable as the necessary fabric for connecting cloud applications and services in hybrid environments. KrakenD was created five years ago as a library for engineers to create fast and reliable API Gateways. It has been in production among some of the world’s largest Internet businesses since 2016. As the Lura Project, it is a stateless, distributed, high-performance API Gateway that enables microservices adoption.

“The Lura Project is an essential connection tissue for applications and services across open source cloud projects and so it’s a natural decision to host it at the Linux Foundation,” said Mike Dolan, senior vice president and general manager of Projects at the Linux Foundation. “We’re looking forward to providing the open governance structure to support Lura Project’s massive growth.” 

For more information about the Lura Project, please visit: https://www.luraproject.org

Supporting Comments

Ardan Studios

“I’m excited to hear that KrakenD API Gateway is being brought into the family of open source projects managed by the Linux Foundation. I believe this shows the global community the commitment KrakenD has to keeping their technology open source and free to use. With the adoption that already exists, and this new promise towards the future, I expect amazing things for the product and the community around it,” said William Kennedy, Managing Partner at Ardan Studios.

Hepsiburada

“At Hepsiburada we have a massive amount of traffic and a complex ecosystem of around 500 microservices and different datacenters. Adding KrakenD to our Kubernetes clusters has helped us reduce the technical and organizational challenges of dealing with a vast amount of resources securely and easily. We have over 800 containers running with KrakenD and looking forward to having more,” said Alper Hankendi, Engineering Director Hepsiburada.

Openroom

“KrakenD allowed us to focus on our backend and deploy a secure and performant system in a few days. After more than 2 years of use in production and 0 crash or malfunction, it also has proven its robustness,” said Jonathan Muller, CTO Openroom Inc.

Postman

“KrakenD represents a renaissance of innovation and investment in the API gateway and management space by challenging the established players with a more lightweight, high performance, and modern gateway for API publishers to put to work across their API operations, while also continuing to establish the Linux Foundation as the home for open API specifications and tooling that are continuing to touch and shape almost every business sector today,” said Kin Lane, chief evangelist, Postman.

Stayforlong

“KrakenD makes it easier for us to manage authentication, filter bots, and integrate our apps. It has proved to be stable and reliable since day one. It is wonderful!” said Raúl M. Sillero, CTO Stayforlong.com.

Skalena

“The Opensource model always was a great proof of innovation and nowadays a synonym of high-quality products and incredible attention with the real needs from the market (Customer Experience). The Linux Foundation is one of the catalysts of incredible solutions, and KrakenD and now Lura would not have a better place to be. With this move, I am sure that it is a start of a new era for this incredible solution in the API Gateway space,  the market will be astonished by a lot of good things about to come,” said Edgar Silva, founder and partner at Skalena. 

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the Linux Foundation
503-867-2304
jennifer@storychangesculture.com

The post Open Source API Gateway KrakenD Becomes Linux Foundation Project appeared first on Linux Foundation.
