3 tips for Linux process performance improvement with priority and affinity

A quick look at how to pull more performance out of your system.
Read More at Enable Sysadmin

What is a sysadmin?

Who is a sysadmin? Why is a sysadmin? Long-time Linux geek and sysadmin David Both explains what being a sysadmin means to him.
David Both
Fri, 6/18/2021 at 10:46am

Photo by hitesh choudhary from Pexels

What does the term “sysadmin” or “system administrator” mean to you? What does a sysadmin do? How do you know you are one?

Existential questions like these seldom have concrete answers, and the answers that do present themselves can be pretty fluid. The answers also differ significantly from person to person, depending on their work experience.

Are you a sysadmin?

Topics:  
Linux Administration  
Sysadmin culture  
Linux  
Read More at Enable Sysadmin

Determining the Source of Truth for Software Components

Abstract: Having access to a list of software components and their respective metadata is critical to performing various DevOps tasks successfully. After considering the varying requirements of the different tasks, we determined that representing a software component as a “collection of files” provides an optimal representation. Conversely, when file-level information is missing, most tasks become more costly or outright impossible to complete.

Introduction

Having access to the list of software components that comprise a software solution, sometimes referred to as the Software Bill of Materials (SBOM), is a requirement for the successful execution of the following DevOps tasks:

  • Open Source and Third-party license compliance
  • Security Vulnerability Management
  • Malware Protection
  • Export Compliance
  • Functionally Safe Certification

A community effort, led by the National Telecommunications and Information Administration (NTIA) [1], is underway to create an SBOM exchange format driven mainly by the Security Vulnerability Management task. The holy grail of an effective SBOM design is twofold:

  1. Define a commonly agreed-upon data structure that best represents a software component, and
  2. Devise a method that uniquely and effectively identifies each software component.

A component must be able to represent a broad spectrum of software types, including (but not limited to): a single source file, a library, an application executable, a container, a Linux runtime, or a more complex system composed of some combination of these types. For instance, a collection of source files (e.g., busybox 1.27.2) and a delivery made up of three containers, a half dozen scripts, and documentation are two examples.

Because we must handle an eclectic range of component types, finding the right level of granularity for the representation is critical. If it is too large, we will not be able to represent all the component types and the corresponding meta-information required to support the various DevOps tasks. On the other hand, if it is too small, it may add unnecessary complexity, cost, and friction to adoption.

Traditionally, components have been represented at the software package or archive level, where the name and version are the primary means of identifying the component. This approach has several challenges, the two biggest being:

  1. Two different software components can have the same name, and
  2. Conversely, two copies of identical software can have different names.

Another traditional method is to rely on the hash of the software using one of several methods – e.g., SHA1, SHA256, or MD5. This works well when your software component represents a single file, but it presents a problem when describing more complex components composed of multiple files. For example, the same collection of source files (e.g., busybox-1.27.2 [2]) could be packaged using different archive methods (e.g., .zip, .gz, .bz2), resulting in the same set of files having different hashes due to the different archive methods used. 
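To make the packaging problem concrete, the following minimal sketch (using two hypothetical source files, math.c and ash.c, assumed to exist in the working directory) shows that a .zip and a .tar.gz built from the same files have different archive hashes, while the per-file hashes are identical in both cases:

```python
# Sketch: the same files packaged two ways yield different archive hashes,
# but the per-file hashes are stable. File names are hypothetical.
import hashlib
import tarfile
import zipfile
from pathlib import Path

files = [Path("math.c"), Path("ash.c")]  # hypothetical source files

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Package the identical set of files using two different archive methods.
with zipfile.ZipFile("busybox.zip", "w") as zf:
    for f in files:
        zf.write(f)
with tarfile.open("busybox.tar.gz", "w:gz") as tf:
    for f in files:
        tf.add(f)

print(sha256_of(Path("busybox.zip")))       # archive hash #1
print(sha256_of(Path("busybox.tar.gz")))    # archive hash #2, differs from #1
print(sorted(sha256_of(f) for f in files))  # same list for either packaging
```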

After considering the different requirements for the various DevOps tasks listed above, and given the broad range of software component types, we concluded that representing a software component as a “collection of files” where the “file” serves as the atomic unit provides an optimal representation. 

This granular level enables access to metadata at the file level, leading to a higher quality outcome when performing the various DevOps tasks (e.g., file-level licensing for license compliance, file-level vulnerability data for security management, and file-level cryptography info for export compliance).  To compute the unique identifier for a given component we recommend taking the “hash” of all the “file hashes” of the files that comprise a component. This enables unique identification independent of how the files are packaged. We discuss this approach in more detail in the sections that follow.

Why the File Level Matters

To obtain information accurate enough to support the various DevOps tasks, one needs access to metadata at the atomic file level. This should not be surprising, given that files are the building blocks from which software is built. If we represented software at any higher level (e.g., just name and version), pertinent information would be lost.

License Compliance

If you want to understand all the licenses that impose obligations and restrictions on a program or library, you need to know the licenses of all the files from which it was built (derived). Although an open source component’s top-level license may be declared as a single license, it is common to find a half dozen or more other licenses within the codebase, which typically impose additional obligations. Popular open source projects usually borrow from other projects with different licenses; the open sharing of code is the force behind the success of the Open Source movement. For this reason, we must accept license diversity as the rule rather than the exception.

This means that a project is often subject to the obligations of multiple licenses. Consider the impact of this on the use of busybox, which provides a lot of latitude regarding the features included in a build. How one configures busybox will determine which files are used. Knowing which files are used is the only way to know which licenses are applicable. For instance, although the top-level license is GPL-2.0, the source file math.c [3] has three licenses governing it (GPL-2.0, MIT, and BSD) because it was derived from three different projects.  
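As a simple illustration of this point, the sketch below derives the set of applicable licenses from the files actually included in a build; the file-to-license map and the build manifests are hypothetical examples, not real busybox data:

```python
# Sketch: the licenses that apply to a build are the union of the licenses
# governing the files compiled into it. All data below is illustrative only.
FILE_LICENSES = {
    "math.c": {"GPL-2.0", "MIT", "BSD"},  # derived from three projects (see [3])
    "ash.c": {"GPL-2.0"},
    "vi.c": {"GPL-2.0"},
}

def licenses_for_build(files_used):
    """Return the union of licenses governing every file in the build."""
    applicable = set()
    for f in files_used:
        applicable |= FILE_LICENSES.get(f, {"UNKNOWN"})
    return applicable

print(licenses_for_build(["ash.c", "vi.c"]))    # {'GPL-2.0'}
print(licenses_for_build(["ash.c", "math.c"]))  # adds MIT and BSD obligations
```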

If one distributes a solution that includes an instance of busybox built from math.c and provides a written offer for source code, one would need to reproduce the respective license notices in the documentation to comply with the MIT and BSD licenses. Furthermore, we have recently seen an open source component with Apache as the top-level license, yet deep within the bowels of the source code lies a set of proprietary files. These examples illustrate why having file-level information is mission-critical.

Security Vulnerability Management

The Heartbleed vulnerability was identified within the OpenSSL component in 2014. Many web servers used OpenSSL to provide secure communication between a browser and a website. If left unpatched, it would allow attackers unprecedented access to sensitive information such as login and password credentials [4]. This vulnerability could be isolated to a single line within a single file. Therefore, the easiest and most definitive way to understand whether one was exposed was to determine whether their instance of OpenSSL was built using that file.

The Amnesia:33 vulnerability announcement [5], reported in November 2020, suggested that any software solution that included the FNET component was affected. With only the name and version of the FNET component to go on, one would have incorrectly concluded that the Zephyr LTS 1.14 operating system was vulnerable. By examining the file-level source, however, one could quickly determine that the impacted files were not part of the Zephyr build, making it definitively clear that Zephyr was not in fact vulnerable [6]. Conducting a product recall when a product is not affected would be highly unproductive and costly, yet without file-level information that analysis would not have been possible, likely causing unnecessary worry, work, and cost. These examples further illustrate why having access to file-level information is mission-critical.

Export Compliance

The output quality of an Export Compliance program also depends on having access to file-level data. Although different governments have different rules and requirements concerning software export license compliance, most policies center on the use of cryptography methods and algorithms. To understand which cryptography libraries and algorithms are implemented, one needs to inspect the file-level source code. Depending on how a given software solution is built and which cryptography-based files are used (or not used), one can classify the software with respect to the policies of the different jurisdictions. Having access to file-level data also enables one to determine the classification for any given jurisdiction dynamically. The requirements of the export compliance task thus also make knowing what is present at the file level mission-critical.

Functional Safety

The objective of the functional safety software certification process is to mitigate the unacceptable risk of physical injury or damage to the health of people and/or property. The standards that govern functional safety (e.g., IEC 61508, ISO 26262, …) require that the full system context be known in order to assess and mitigate risk successfully. Full system transparency therefore requires verification and validation at the source file level, which includes understanding all the source code and build tools used and how they were configured. The inclusion of components of unknown content and provenance would increase risk and prohibit most certifications. Thus, functionally safe certification represents yet another task where having file-level information is mission-critical.

Component Identification

One of the biggest challenges in managing software components is the ability to identify each one uniquely. A high-confidence method must ensure that two copies of a component receive the same identifier when their content is identical and different identifiers when it is not. Furthermore, we want to avoid creating a dependency on a central component registry as a requirement for determining a component’s identifier. Therefore, an additional requirement is to be able to compute a unique identifier simply by examining the component’s contents.

Understanding a component’s file-level composition can play a critical role in designing such a method. Recall that our goal is to allow a software component to represent a wide spectrum of component types ranging from a single source file to a collection of containers and other files. Each component could therefore be broken down into a collection of files. This representation enables the construction of a method that can uniquely identify any given component. 

File hash methods such as SHA1, SHA256, and MD5 are effective at uniquely identifying a single file. When representing a component as a collection of files, however, we can uniquely represent it by creating a meta-hash – i.e., by taking the hash of “all file hashes” of the files that comprise the component. That is: i) generate a hash for each file (e.g., using SHA256), ii) sort the list of hashes, and iii) take the hash of the sorted list. The meta-hash approach thus enables us to uniquely identify a component based solely on its content, with no registry or repository of truth required.
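As a concrete illustration, here is a minimal Python sketch of the meta-hash computation described above; the directory name busybox-1.27.2 is only a hypothetical example, and any tree of files would do:

```python
# Minimal sketch of the meta-hash: hash every file, sort the hashes, then
# hash the sorted list. The resulting identifier depends only on file content,
# not on archive format, file ordering, or directory layout on disk.
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def component_meta_hash(root: Path) -> str:
    """Identify a component (a collection of files) by the hash of its sorted file hashes."""
    file_hashes = sorted(file_sha256(p) for p in root.rglob("*") if p.is_file())
    meta = hashlib.sha256()
    for fh in file_hashes:
        meta.update(fh.encode("ascii"))
    return meta.hexdigest()

# Usage (hypothetical path): the identifier matches for any two trees whose
# file contents are identical, regardless of how the files were packaged.
print(component_meta_hash(Path("busybox-1.27.2")))
```

As sketched, the identifier depends only on file contents, not on file names or paths; folding relative paths into the hashed list would be a reasonable variation if renames should also change the identifier.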

Conclusion

Having access to software components and their respective metadata is mission-critical to executing various DevOps tasks. Therefore, it is vital to establish the right level of granularity to ensure we can capture all the required data. This challenge is further complicated by the need to handle an eclectic range of component types. Therefore, finding the right granular level of representation is critical. If it is too large, we will not represent all the component types and the meta-information needed to support the DevOps function. If it is too small, we could add unnecessary complexity, cost, and friction to adoption. We have determined that a file-level representation is optimal for representing the various component types, capturing all the necessary information, and providing an effective method to identify components uniquely.

References

[1] NTIA: Software Bill of Materials web page, https://www.ntia.gov/SBOM

[2] Busybox Project, https://busybox.net/

[3] Software Heritage archive: math.c, https://archive.softwareheritage.org/api/1/content/sha1:695d7abcac1da03e484bcb0defbee53d4652c347/raw/ 

[4] Wikipedia: Heartbleed, https://en.wikipedia.org/wiki/Heartbleed

[5] “AMNESIA:33: Researchers Disclose 33 Vulnerabilities Across Four Open Source TCP/IP Libraries”, https://www.tenable.com/blog/amnesia33-researchers-disclose-33-vulnerabilities-tcpip-libraries-uip-fnet-picotcp-nutnet

[6] Zephyr Security Update on Amnesia:33, https://www.zephyrproject.org/zephyr-security-update-on-amnesia33/

Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices

The Linux Foundation responds to increasing demand for SBOMs that can improve supply chain security

SAN FRANCISCO, June 17, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced new industry research, training, and tools – backed by the SPDX industry standard – to accelerate the use of a Software Bill of Materials (SBOM) in secure software development.

The Linux Foundation is accelerating the adoption of SBOM practices to secure software supply chains with:

  • SBOM standard: stewarding SPDX, the de-facto standard for requirements and data sharing
  • SBOM survey: highlighting the current state of industry practices to establish benchmarks and best practices
  • SBOM training: delivering a new course on Generating a Software Bill of Materials to accelerate adoption
  • SBOM tools: enabling development teams to create SBOMs for their applications

“As the architects of today’s digital infrastructure, the open source community is in a position to advance the understanding and adoption of SBOMs across the public and private sectors,” said Mike Dolan, Senior Vice President and General Manager of Linux Foundation Projects. “The rise in cybersecurity threats is driving a necessity that the open source community anticipated many years ago to standardize on how we share what is in our software. The time has never been more pressing to surface new data and offer additional resources that help increase understanding about how to adopt and generate SBOMs, and then act on the information.”

Ninety percent (90%) of a modern application is assembled from open source software components. An SBOM accounts for the open source software components contained in an application, detailing their quality, license, and security attributes. SBOMs are used to ensure developers understand what components are flowing throughout their software supply chains, proactively identify issues and risks, and establish a starting point for remediation.

The recent presidential Executive Order on Improving the Nation’s Cybersecurity referenced the importance of SBOMs in protecting and securing the software supply chain. The National Telecommunications and Information Administration (NTIA) followed the issuance of this order by asking for wide-ranging feedback to define a minimum SBOM. The Linux Foundation has published responses to both the NTIA’s SBOM inquiry and the presidential Executive Order.

SPDX: The De-Facto SBOM Open Industry Standard

SPDX, a Linux Foundation project, is the de-facto open standard for communicating SBOM information, including open source software components, licenses, and known security vulnerabilities. SPDX has evolved organically over the last ten years through the collaboration of hundreds of companies, including the leading Software Composition Analysis (SCA) vendors – making it the most robust, mature, and adopted SBOM standard in the market.

SBOM Readiness Survey

Linux Foundation Research is conducting the SBOM Readiness Survey. Deploying next week, it will examine the obstacles to SBOM adoption in the context of software supply chain security and the future actions required to overcome them. The recent US Executive Order on Cybersecurity emphasizes SBOMs, and this survey will help identify industry gaps in SBOM applications. Survey questions address tooling, security measures, and the industries leading in producing and consuming SBOMs, among other topics.

New Course: Generating a Software Bill of Materials

The Linux Foundation is also announcing a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, how they can be assembled, and an understanding of some of the open source tooling available to support the generation and consumption of an SBOM. 

New Tools: SBOM Generator

Also announced today is the availability of the SPDX SBOM generator, which uses a command-line interface (CLI) to generate SBOM information, including components, licenses, copyrights, and security references of your application, using the SPDX v2.2 specification and aligning with the current known minimum elements from NTIA. Currently, the CLI supports GoMod (Go), Cargo (Rust), Composer (PHP), DotNet (.NET), Maven (Java), NPM (Node.js), Yarn (Node.js), PIP (Python), Pipenv (Python), and Gems (Ruby). It is easily embeddable in automated processes such as continuous integration (CI) pipelines and is available for Windows, macOS, and Linux.

Additional Resources

  • What is an SBOM?
  • Build an SBOM training course
  • Free SBOM tool and APIs

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Media Contacts

Jennifer Cloer

for Linux Foundation

jennifer@storychangesculture.com

503-867-2304

The post Linux Foundation Announces Software Bill of Materials (SBOM) Industry Standard, Research, Training, and Tools to Improve Cybersecurity Practices appeared first on Linux Foundation.

Free Training Course Explores Software Bill of Materials

At the most basic level, a Software Bill of Materials (SBOM) is a list of the components contained in a piece of software. It can be used to support the systematic review and approval of each component’s license terms, clarifying the obligations and restrictions as they apply to the distribution of the supplied software. This is important for reducing risk in organizations building software that uses open source components.

There is often confusion concerning the minimum data elements required for an SBOM and the reasoning behind why those elements are included. Understanding how components interact in a product is key for providing support for security processes, compliance processes, and other software supply chain use cases. 

This is why The Linux Foundation has taken the step of creating a free, online training course, Generating a Software Bill of Materials (LFC192). This course provides foundational knowledge about the options and the tools available for generating SBOMs, and will help with understanding the benefits of adopting SBOMs and how to use them to improve the ability to respond to cybersecurity needs. It is designed for directors, product managers, open source program office staff, security professionals, and developers in organizations building software. Participants will walk away with the ability to identify the minimum elements for an SBOM, an understanding of how they can be assembled, and knowledge of some of the open source tooling available to support the generation and consumption of an SBOM.

The course takes around 90 minutes to complete. It features video content from Kate Stewart, VP, Dependable Embedded Systems at The Linux Foundation, who works with the safety, security, and license compliance communities to advance the adoption of best practices into embedded open source projects. A quiz is included to help confirm learnings.

Enroll today to start improving your development practices.

The post Free Training Course Explores Software Bill of Materials appeared first on Linux Foundation – Training.

Encrypting and decrypting archives with 7-Zip

You can compress and encrypt archives with 7-Zip using AES-256 encryption. Find out how.
Read More at Enable Sysadmin

What is an SBOM?

The National Telecommunications and Information Administration (NTIA) recently asked for wide-ranging feedback to define a minimum Software Bill of Materials (SBOM). It was framed with a single, simple question (“What is an SBOM?”), and constituted an incredibly important step towards software security and a significant moment for open standards.

From NTIA’s SBOM FAQ: “A Software Bill of Materials (SBOM) is a complete, formally structured list of components, libraries, and modules that are required to build (i.e., compile and link) a given piece of software and the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access.” SBOMs that can be shared without friction between teams and companies will be a core part of software management for critical industries and digital infrastructure in the coming decades.

The ISO International Standard for open source license compliance (ISO/IEC 5230:2020 – Information technology — OpenChain Specification) requires a process for managing a bill of materials for supplied software. This aligns with the NTIA goals for increased software transparency and illustrates how the global industry is addressing challenges in this space. For example, it has become a best practice to include an SBOM for all components in supplied software, rather than isolating these materials to open source.

The open source community identified the need for, and began to address, the challenge of providing an SBOM “list of ingredients” over a decade ago. The de-facto industry standard, and most widely used approach today, is called Software Package Data Exchange (SPDX). All of the elements in the NTIA-proposed minimum SBOM definition can be addressed by SPDX today, along with broader use-cases.

SPDX evolved organically over the last decade to suit the software industry, covering issues like license compliance, security, and more. The community consists of hundreds of people from hundreds of companies, and the standard itself is the most robust, mature, and adopted SBOM standard in the market today.

The full SPDX specification is only one part of the picture. Optional components such as SPDX Lite, developed by Pioneer, Sony, Hitachi, Renesas, and Fujitsu, among others, provide a focused SBOM subset for smaller suppliers. The community-driven approach behind SPDX allows practical use-cases to be addressed as they arise.

In 2020, SPDX was submitted to ISO via the PAS Transposition process of Joint Technical Committee 1 (JTC1) in collaboration with the Joint Development Foundation. It is currently in the approval phase of the transposition process and can be reviewed on the ISO website as ISO/IEC PRF 5962.

The Linux Foundation has prepared a submission for NTIA highlighting knowledge and experience gained from practical deployment and usage of SBOM in the SPDX and OpenChain communities. These include isolating the utility of specific actions such as tracking timestamps and including data licenses in metadata. With the backing of many parties across the worldwide technology industry, the SPDX and OpenChain specifications are constantly evolving to support all stakeholders.

Industry Comments

The Sony team uses various approaches to managing open source compliance and governance… An example is using an OSS management template sheet based on SPDX Lite, a compact subset of the SPDX standard. Teams need to be able to review the type, version, and requirements of software quickly, and using a clear standard is a key part of this process.

Hisashi Tamai, SVP, Sony Group Corporation, Representative of the Software Strategy Committee

“Intel has been an early participant in the development of the SPDX specification and utilizes SPDX, as well as other approaches, both internally and externally for a number of open source software use-cases.”

Melissa Evers, Vice President – Intel Architecture, Graphics, Software / General Manager – Software Business Strategy

Scania corporate standard 4589 (STD 4589) was just made available to our suppliers and defines the expectations we have when Open Source is part of a delivery to Scania. So what is it we ask for in a relationship with our suppliers when it comes to Open Source? 

1) That suppliers conform to ISO/IEC 5230:2020 (OpenChain). If a supplier conforms to this specification, we feel confident that they have a professional management program for Open Source.  

2) If in the process of developing a solution for Scania, a supplier makes modifications to Open Source components, we would like to see those modifications contributed to the Open Source project. 

3) Supply a Bill of materials in ISO/IEC DIS 5962 (SPDX) format, plus the source code where there’s an obligation to offer the source code directly, so we don’t need to ask for it.

Jonas Öberg, Open Source Officer – Scania (Volkswagen Group)

The SPDX format greatly facilitates the sharing of software component data across the supply chain. Wind River has provided a Software Bill of Materials (SBOM) to its customers using the SPDX format for the past eight years. Often customers will request SBOM data in a custom format. Standardizing on SPDX has enabled us to deliver a higher quality SBOM at a lower cost.

Mark Gisi, Wind River Open Source Program Office Director and OpenChain Specification Chair

The Black Duck team from Synopsys has been involved with SPDX since its inception, and I had the pleasure of coordinating the activities of the project’s leadership for more than a decade. In addition, representatives from scores of companies have contributed to the important work of developing a standard way of describing and communicating the content of a software package.

Phil Odence, General Manager, Black Duck Audits, Synopsys

With the rapidly increasing interest in the types of supply chain risk that a Software Bill of Materials helps address, SPDX is gaining broader attention and urgency. FossID (now part of Snyk) has been using SPDX from the start as part of both software component analysis and for open source license audits. Snyk is stepping up its involvement too, already contributing to efforts to expand the use cases for SPDX by building tools to test out the draft work on vulnerability profiles in SPDX v3.0.

Gareth Rushgrove, Vice President of Products, Snyk

For more information on OpenChain: https://www.openchainproject.org/

For more information on SPDX: https://spdx.dev/

The post What is an SBOM? appeared first on Linux Foundation.

Use automation to combat your increased workload

Tired of mundane, tedious, boring tasks? Automation improves your efficiency and frees your time to focus on new and innovative opportunities.
Read More at Enable Sysadmin

When will my instance be ready? — understanding cloud launch time performance metrics

A breakdown of ways to measure and monitor instance launch time performance metrics.

Click to Read More at Oracle Linux Kernel Development
