Instructor-Led Kubernetes Security Fundamentals Course Now Available

Kubernetes Security Fundamentals (LFS460) is the newest instructor-led training course from Linux Foundation Training & Certification and the Cloud Native Computing Foundation. Those taking the course will gain skills and knowledge covering a broad range of best practices for securing their clouds, container-based applications and Kubernetes platforms during build, deployment and runtime, and upon completion will be ready to take the Certified Kubernetes Security Specialist (CKS) certification exam. CKS registration is included for those taking the instructor-led course, though only those who already hold a Certified Kubernetes Administrator (CKA) certification are permitted to sit for the exam.

This four-day course is taught by a live, expert instructor from The Linux Foundation. Anyone may enroll in a public course – the first of which is being offered March 29-April 2, 2021 – or organizations that wish to train a team may arrange a private course by contacting our Corporate Solutions team. Public courses are conducted online, with a live industry expert presenting the content and guiding you through hands-on labs that build the experience you need to secure container-based applications. The course covers more than just container security, exploring topics from before a cluster has been configured, through deployment, to ongoing and agile use, including where to find ongoing security and vulnerability information.

The course covers similar content to the Kubernetes Security Essentials (LFS260) eLearning course, but with the added benefit of a live instructor. Before enrolling, participants are strongly encouraged to have passed the CKA exam or to possess the knowledge it covers. Familiarity with the skills and knowledge covered in that exam and the related Kubernetes Administration (LFS458) training is necessary to be successful in the new Kubernetes Security Fundamentals course.

Enroll today and get your team ready to address any potential cloud security issues.

The post Instructor-Led Kubernetes Security Fundamentals Course Now Available appeared first on Linux Foundation – Training.

From Docker Compose to Kubernetes with Podman

Use Podman 3.0 to convert Docker Compose YAML to a format Podman recognizes.
Brent Baude
Thu, 1/14/2021 at 1:40pm

Photo by Pok Rie from Pexels

The Docker Compose tool has been valuable for many people who have been working with containers. According to the documentation, Docker Compose describes itself as:

… a tool for defining and running multi-container applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
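One plausible workflow, sketched under assumptions (a systemd host with Podman 3.0+; the socket path, unit name, and the pod name `mypod` are placeholders, and the article's exact steps may differ), uses Podman's Docker-compatible API socket so docker-compose runs unmodified, then converts the result to Kubernetes YAML:

```shell
# Expose Podman's Docker-compatible REST API (rootful socket shown).
sudo systemctl enable --now podman.socket
export DOCKER_HOST=unix:///run/podman/podman.sock

# docker-compose now talks to Podman instead of the Docker daemon.
docker-compose up -d

# Convert a running pod to Kubernetes YAML that Podman also understands.
podman generate kube mypod > mypod.yaml
podman play kube mypod.yaml   # replay the YAML with Podman itself
```

The same YAML can also be fed to a Kubernetes cluster with kubectl, which is what makes this a migration path rather than just a compatibility shim.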

Topics:  
Linux  
Containers  
Kubernetes  
Read More at Enable Sysadmin

Start 2021 Off With a New Career in the Cloud! Cloud Engineering Bootcamps are on Sale

The 2020 Open Source Jobs Report found that cloud skills are the most in demand by hiring managers, with 70% reporting they are more likely to hire someone with a solid foundation in cloud and container technologies. Additionally, a D2iQ study found that “only 23% of organizations believe they have the talent required to successfully complete their cloud native journey”. If you’re looking to move to a new career this year, gaining cloud skills and knowledge is the place to start, and now is the time to make it happen.

Last summer, The Linux Foundation and Cloud Native Computing Foundation launched our first ever bootcamp programs to help individuals become trained and certified in cloud technologies in a structured, supported way. The Cloud Engineer Bootcamp and Advanced Cloud Engineer Bootcamp contain the training courses and exams to get you prepared with the knowledge and skills to succeed in a career as a cloud administrator or engineer in as little as six months, and the verifiable, industry-leading certifications to demonstrate those skills. 

The Cloud Engineer Bootcamp is designed for relative newcomers who want to start an IT career with little or no prior experience. As cloud technologies are underpinned by Linux, the program starts with two courses giving you the Linux skills you need to get started, and the Linux Foundation Certified System Administrator (LFCS) exam to help you prove them. The program then continues with three courses focused on cloud and container technologies, finishing with the highly sought Certified Kubernetes Administrator (CKA) exam.

The Advanced Cloud Engineer Bootcamp assumes you already possess the requisite Linux knowledge to make a start with learning about cloud technologies, so jumps right in with two cloud and containers courses and the CKA exam. This bootcamp then moves into more advanced cloud concepts and technologies, including service mesh, monitoring, logging and application management, in four additional courses. 

Both bootcamps include office hours with live instructors daily via Zoom, giving you the opportunity to ask questions and get help understanding the course material. There are also bootcamp forums, providing the chance to interact with fellow enrollees and discuss lessons and topics. Upon completion, you will receive a verifiable, digital badge for completing the bootcamp, as well as badges for passing each certification exam.

Through January 19, both bootcamps are reduced from their usual price of $999 (already a substantial discount from the $2,300 list price of the bootcamp components) to only $599. Those wishing to take both bootcamps can choose that option for only $899. Take advantage of this limited time offer to propel yourself into a new, highly lucrative career in 2021!

The post Start 2021 Off With a New Career in the Cloud! Cloud Engineering Bootcamps are on Sale appeared first on Linux Foundation – Training.

Preventing Supply Chain Attacks like SolarWinds

In late 2020, it was revealed that the SolarWinds Orion software, which is in use by numerous US Government agencies and many private organizations, was severely compromised. This was an incredibly dangerous set of supply chain compromises that the information technology community (including the Open Source community) needs to learn from and take action on.

The US Cybersecurity and Infrastructure Security Agency (CISA) released an alert noting that the SolarWinds Orion software included malicious functionality in March 2020, but it was not detected until December 2020. CISA’s Emergency Directive 21-01 stated that it was being exploited, had a high potential of compromise, and had a grave impact on entire organizations when compromised. Indeed, because Orion deployments typically control networks of whole organizations, this is a grave problem. The more people look, the worse it gets. As I write this, it appears that second and third pieces of malware have been identified in Orion.

Why the SolarWinds Attack Is Particularly Noteworthy

What’s especially noteworthy is how the malicious code was inserted into Orion: the attackers subverted something called the build environment. When software is being developed it is converted (compiled) from source code (the text that software developers update) into an executable package using a “build process.” For example, the source code of many open source software projects is built, compiled, and redistributed by other organizations, so that it is ready to install and run on various computing platforms. In the case of SolarWinds’ Orion, CrowdStrike found a piece of malware called Sunspot that watched the build server for build commands and silently replaced source code files inside the Orion app with files that loaded the Sunburst malware. The SolarWinds Orion compromise by Sunspot isn’t the first example of these kinds of attacks, but it has demonstrated just how dangerous they can be when they compromise widely-used software.

Unfortunately, a lot of conventional security advice cannot counter this kind of attack: 

SolarWinds’ Orion is not open source software. Only the company’s developers can legally review, modify, or redistribute its source code or its build system and configurations. If we needed further evidence that obscurity of software source code doesn’t automatically provide security, this is it.

Recommendations from The Linux Foundation 

Organizations need to harden their build environments against attackers. SolarWinds followed some poor practices, such as using the insecure ftp protocol and publicly revealing passwords, which may have made these attacks especially easy. The build system is a critical production system, and it should be treated like one, with the same or higher security requirements as its production environments. This is an important short-term step that organizations should already be doing. However, it’s not clear that these particular weaknesses were exploited or that such hardening would have made any difference. Assuming a system can “never be broken into” is a failing strategy.

In the longer term, I know of only one strong countermeasure for this kind of attack: verified reproducible builds. A “reproducible build” is a build that always produces the same outputs given the same inputs so that the build results can be verified. A verified reproducible build is a process where independent organizations produce a build from source code and verify that the built results come from the claimed source code. Almost all software today is not reproducible, but there’s work to change this. The Linux Foundation and Civil Infrastructure Platform has been funding work, including the Reproducible Builds project, to make it possible to have verified reproducible builds.
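To make the idea concrete, here is a minimal, self-contained sketch (not any real project's build) showing the core discipline behind reproducibility: pin every source of nondeterminism in the packaging step (file order, ownership, timestamps), so that two independent runs produce bit-identical artifacts that anyone can verify by hash. GNU tar and gzip are assumed:

```shell
# Fake "source tree" for the demonstration.
mkdir -p src
echo 'int main(void) { return 0; }' > src/main.c

build() {
  # --sort pins file order, --owner/--group pin ownership, --mtime pins
  # timestamps; gzip -n stops gzip embedding the current time.
  tar --sort=name --owner=0 --group=0 --mtime='UTC 2021-01-01' \
      -cf - src | gzip -n > "$1"
}

build build1.tar.gz   # the "official" build
build build2.tar.gz   # an independent rebuild
sha256sum build1.tar.gz build2.tar.gz   # identical hashes => verifiable
```

Real builds have far more nondeterminism to pin down (compiler versions, embedded paths, parallelism), which is why making existing software reproducible takes sustained effort.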

The software industry needs to begin shifting towards implementing and requiring verified reproducible builds. This will not be easy. Most software is not designed to be reproducible in its build environment today, so it may take years to make software reproducible. Many changes must be made to make software reproducible, so resources (time and money) are often needed. And there’s a lot of software that needs to be reproducible, including operating system packages and library-level packages. There are package distribution systems that would need to be reviewed and likely modified. I would expect some of the most critical software to become reproducible first, with less critical software following over time as pressure increases to make more software verified reproducible. It would be wise to develop widely-applicable standards and best practices for creating reproducible builds. Once software is reproducible, others will need to verify the build results for given source code to counter these kinds of attacks. Reproducible builds are much easier for open source software (OSS) because there’s no legal impediment to having many verifiers. Closed source software developers will have added challenges; their business models often depend on hiding source code. It’s still possible to have “trusted rebuilders” worldwide verify closed source software, even though it’s more challenging and the number of rebuilders would necessarily be smaller.

The information technology industry is generally moving away from “black boxes” that cannot be inspected and verified and towards components that can be reviewed. So this is part of a general industry trend; it’s a trend that needs to be accelerated.

This is not unprecedented. Auditors have access to the financial data and review the financial systems of most enterprises; they act as independent entities verifying the data and systems for the benefit of the ecosystem. There is a similar opportunity for organizations to become independent verifiers for both open source and closed source software and build systems.

Attackers will always take the easiest path, so we can’t ignore other attacks. Today most attacks exploit unintentional vulnerabilities in code, so we need to continue to work to prevent these unintentional vulnerabilities. These mitigations include changing tools & interfaces so those problems won’t happen, educating developers on developing secure software (such as the free courses from OpenSSF on edX), and detecting residual vulnerabilities before deployment through various detection tools. The Open Source Security Foundation (OpenSSF) is working on improving the security of open source software (OSS), including all these points.

Applications are mostly reused software (with a small amount of custom code), so this reused software’s software supply chain is critical. Reused components are often extremely out-of-date. Thus, they have many publicly-known unintentional vulnerabilities; in fact, reused components with known vulnerabilities are among the most common problems in web applications. The LF’s LFX security tools, GitHub’s Dependabot, GitLab’s dependency analyzers, and many other tools & services can help detect reused components with known vulnerabilities.

Vulnerabilities in widely-reused OSS can cause widespread problems, so the LF is already working to identify such OSS so that it can be reviewed and hardened further (see Vulnerabilities in the Core Preliminary Report and Census II of Open Source Software).

The supply chain matters for malicious code, too; most malicious code gets into applications through library “typosquatting” (that is, by creating a malicious library with a name that looks like a legitimate library). 

That means users need to start asking for a software bill of materials (SBOM) so they will know what they are using. The US National Telecommunications and Information Administration (NTIA) has been encouraging the adoption of SBOMs throughout organizations and the software supply chain process. The Linux Foundation’s Software Package Data Exchange (SPDX) format is an SBOM format used by many. Once you get SBOM information, examine the versions that are included. If the software has malicious components, or components with known vulnerabilities, start asking why. Some vulnerabilities may not be exploitable, but too many application developers simply don’t update dependencies even when they are. To be fair, there’s a chicken-and-egg problem here: specifications are in the process of being updated, tools are in development, and many software producers aren’t ready to provide SBOMs. So users should not expect most software producers to have SBOMs ready today. However, they do need to create demand for SBOMs.

Similarly, software producers should work towards providing SBOM information. For many OSS projects this can typically be done, at least in part, by providing package management information that identifies their direct and indirect dependencies (e.g., in package.json, requirements.txt, Gemfile, Gemfile.lock, and similar files). Many tools can combine this information to create more complete SBOM information for larger systems.
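As a hedged sketch of that first partial step (the file contents and the awk one-liner are illustrative, not a full SBOM tool), the direct dependencies declared in a Python-style requirements.txt can be extracted into rudimentary name/version entries:

```shell
# Hypothetical sample of a project's direct Python dependencies.
cat > requirements.txt <<'EOF'
requests==2.25.1
flask
urllib3==1.26.2
EOF

# Emit rudimentary SBOM entries: package name, then the pinned
# version, or "unpinned" when no version was declared.
awk -F'==' '/^[^#]/ {print $1 "\t" ($2 ? $2 : "unpinned")}' requirements.txt
```

Unpinned entries are exactly the ones worth flagging first: without a version, neither you nor your users can check the component against known-vulnerability databases.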

Organizations should invest in OpenChain conformance and require their suppliers to implement a process designed to improve trust in a supply chain.  OpenChain’s conformance process reveals specifics about the components you depend on that are a critical first step to countering many supply chain attacks.

Conclusion

The attack on SolarWinds’ Orion will have devastating effects for years to come. But we can and should learn from it. 

We can:

  1. Harden software build environments
  2. Move towards verified reproducible builds 
  3. Change tools & interfaces so unintentional vulnerabilities are less likely
  4. Educate developers (such as the free courses from OpenSSF on edX)
  5. Use vulnerability detection tools when developing software
  6. Use tools to detect known-vulnerable components when developing software
  7. Improve widely-used OSS (the OpenSSF is working on this)
  8. Ask for a software bill of materials (SBOM), e.g., in SPDX format. Many software producers aren’t ready to provide one yet, but creating the demand will speed progress
  9. Determine if subcomponents we use have known vulnerabilities 
  10. Work towards providing SBOM information if we produce software for others
  11. Implement OpenChain 

Let’s make it much harder to exploit the future systems we all depend on. Those who do not learn from history are often doomed to repeat it.

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation

The post Preventing Supply Chain Attacks like SolarWinds appeared first on Linux Foundation.

Open Source Management & Strategy Training Program Launched by The Linux Foundation

Program consists of seven modular courses, and can be tailored to suit the needs of different audiences within an organization

SAN FRANCISCO, January 12, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training program designed to introduce open source best practices to management and technical staff within organizations, Open Source Management & Strategy.

This seven-module course series is designed to help executives, managers, software developers and engineers understand and articulate the basic concepts for building effective open source practices within their organization. It is also helpful for a leadership audience responsible for setting up effective program management of open source in their organization, including how to create an Open Source Program Office (OSPO).

The program builds on the accumulated wisdom of many previous training modules on open source best practices, while adding fresh and updated content to explain all of the critical elements of working effectively with open source in enterprises. The courses are designed to be self-paced, and reasonably high-level, but with enough detail to get new open source practitioners up and running quickly.

The courses in the program are designed to be modular, so participants only need to take those of relevance to them. The courses included are:

  • LFC202 – Open Source Introduction – covers the basic components of open source and open standards
  • LFC203 – Open Source Business Strategy – discusses the various open source business models and how to develop practical strategies and policies for each
  • LFC204 – Effective Open Source Program Management – explains how to build an effective OSPO and the different types of roles and responsibilities needed to run it successfully
  • LFC205 – Open Source Development Practices – talks about the role of continuous integration and testing in a healthy open source project
  • LFC206 – Open Source Compliance Programs – covers the importance of effective open source license compliance and how to build programs and processes to ensure safe and effective consumption of open source
  • LFC207 – Collaborating Effectively with Open Source Projects – discusses how to work effectively with upstream open source projects and how to get the maximum benefit from working with project communities
  • LFC208 – Creating Open Source Projects – explains the rationale and value for creating new open source projects as well as the required legal, business and development processes needed to launch new projects

The courses were developed by Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium.

Guy has a unique blend of 25+ years’ experience as both software engineer and open source strategist. He has built open source programs for companies like Red Hat, Samsung and Autodesk and was instrumental in founding the Academy Software Foundation while Director of the Open Source Office at Autodesk. He was also a founding member of the team that built the Open Connectivity Foundation while at Samsung, and has contributed to several best practices and learning guides from the Linux Foundation’s TODO Group, a resource for OSPO personnel.

“Open source is not only commonplace in enterprises today, but actually is impossible to avoid as much modern technology including the cloud and networking systems are based on it,” said Chris Aniszczyk, co-founder of the TODO Group and VP of Developer Relations at The Linux Foundation. “This means organizations must prepare their teams to use it properly, ensuring compliance with licensing requirements, how to implement continuous delivery and integration, processes for working with and contributing to the open source community, and related topics. This program provides a structured way to do that which benefits everyone from executive management to software developers.”

The Open Source Management & Strategy program is available to begin immediately. The $499 enrollment fee provides unlimited access to all seven courses for one year, as well as a certificate upon completion. Interested individuals may enroll here. The program is also included in all corporate training subscriptions.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #

The post Open Source Management & Strategy Training Program Launched by The Linux Foundation appeared first on Linux Foundation – Training.

When sysadmins collaborate: Attending and organizing a local community meetup

Practical tips for establishing your own in-person and virtual meetups and getting the most from events that you attend.
Joseph Tejal
Tue, 1/12/2021 at 2:17pm

Photo by Dani Hart from Pexels

Innovation requires collaboration, and collaboration springs from sharing, whether through simple Enable Sysadmin articles like this one or through local meetups where we connect with fellow sysadmins and SMEs to exchange insights and ideas and learn from each other. Through these exchanges, you will realize that you’re not alone: some of your challenges are common across organizations, most of the solutions are already out there waiting for you, and you don’t have to reinvent the wheel.

Topics:  
Linux  
Career  
Read More at Enable Sysadmin

How to set up SSH dynamic port forwarding on Linux

Dynamic port forwarding allows for a great deal of flexibility and secure remote connections. See how to configure and use this SSH feature.
Juerg Ritter
Mon, 1/11/2021 at 11:28pm

Photo by Christina Morillo from Pexels

Many enterprises use Secure Shell (SSH) accessible jump servers to access business-critical systems. Administrators first connect to a jump server using SSH, possibly through a VPN, before connecting to the target system. This method usually works great as long as an administrator sticks with command-line administration. It gets a bit more tricky when an administrator wants to break out of the command-line realm and use a web-based interface instead.
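A sketch of the feature the article covers (hostnames and the port are placeholders; exact flags depend on your OpenSSH version): dynamic forwarding turns the SSH client into a local SOCKS proxy, either ad hoc on the command line or via a client config entry.

```shell
# Ad hoc: open a SOCKS5 proxy on local port 1080 via the jump server.
ssh -D 1080 -C -N admin@jump.example.com
#   -D 1080  local SOCKS listener   -C  compression   -N  no remote command

# Equivalent ~/.ssh/config entry, so "ssh jump" does the same thing:
#   Host jump
#       HostName jump.example.com
#       User admin
#       DynamicForward 1080

# Point a browser or CLI tool at the proxy to reach internal web UIs:
curl --socks5-hostname localhost:1080 http://intranet.example.com/
```

Using `--socks5-hostname` (rather than plain `--socks5`) makes DNS resolution happen on the jump server side, which matters when internal hostnames are not resolvable from your workstation.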

Topics:  
Linux  
Linux Administration  
Security  
Read More at Enable Sysadmin

A Zoological guide to kernel data structures

Kernel data structures exist in many shapes and sizes. In this blog, Oracle Linux kernel engineer Alan Maguire performs a statistical analysis of data structure sizes in the Linux kernel using pahole (poke-a-hole) and gnuplot. Recently I was working on a BPF feature which aimed to provide a mechanism to display any kernel data structure for debugging purposes. As part of that effort, I wondered what the limits are:

  • How many data structures are there, and what patterns can be observed between kernel versions?
  • What are the smallest and largest data structures, and why?
  • What is the overall pattern of structure sizes for a given kernel release, and how does this change between releases?

Click to Read More at Oracle Linux Kernel Development
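For readers who want to explore this on their own machine, a hedged sketch of the kind of measurement described (assumes the `pahole` tool from the dwarves package and a kernel `vmlinux` built with debug info; exact flags vary between pahole versions):

```shell
# List every struct with its size and hole count, largest first.
pahole --sizes vmlinux | sort -k2 -nr | head

# Inspect one structure's layout, holes, and padding in detail.
pahole -C task_struct vmlinux
```

Piping the first command's size column into gnuplot is one straightforward way to reproduce the size-distribution plots the post discusses.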