
Classic SysAdmin: How to Kill a Process from the Linux Command Line

This is a classic article from the Linux.com archives. For more great SysAdmin tips and techniques, check out our free intro to Linux course and our Essentials of System Administration eLearning.

Picture this: You’ve launched an application (be it from your favorite desktop menu or from the command line) and you start using that launched app, only to have it lock up on you, stop performing, or unexpectedly die. You try to run the app again, but it turns out the original never truly shut down completely.

What do you do? You kill the process. But how? Believe it or not, your best bet most often lies within the command line. Thankfully, Linux has every tool necessary to empower you, the user, to kill an errant process. However, before you immediately launch that command to kill the process, you first have to know what the process is. How do you take care of this layered task? It’s actually quite simple…once you know the tools at your disposal.

Let me introduce you to said tools.

The steps I’m going to outline will work on almost every Linux distribution, whether it is a desktop or a server. I will be dealing strictly with the command line, so open up your terminal and prepare to type.

Locating the process

The first step in killing the unresponsive process is locating it. There are two commands I use to locate a process: top and ps. Top is a tool every administrator should get to know. With top, you get a full listing of currently running processes. From the command line, issue top to see a list of your running processes (Figure 1).

Figure 1: The top command gives you plenty of information.

From this list you will see some rather important information. Say, for example, Chrome has become unresponsive. According to our top display, we can discern that there are four instances of chrome running, with Process IDs (PIDs) 3827, 3919, 10764, and 11679. This information will be important to have with one particular method of killing the process.

Although top is incredibly handy, it’s not always the most efficient means of getting the information you need. Let’s say you know the Chrome process is what you need to kill, and you don’t want to have to glance through the real-time information offered by top. For that, you can make use of the ps command and filter the output through grep. The ps command reports a snapshot of the current processes, and grep prints lines matching a pattern. The reason we filter ps through grep is simple: if you issue the ps command by itself, you will get a snapshot listing of all current processes. We only want the listing associated with Chrome. So this command would look like:

ps aux | grep chrome

The aux options are as follows:

a = show processes for all users

u = display the process’s user/owner

x = also show processes not attached to a terminal

The x option is important when you’re hunting for information regarding a graphical application.

When you issue the command above, you’ll be given more information than you need (Figure 2) for the killing of a process, but it is sometimes more efficient than using top.

Figure 2: Locating the necessary information with the ps command.
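
If you want an even more concise listing, the pgrep command will print just the matching PIDs, and its -l option adds the process name alongside each PID. This is only a sketch of an alternative approach; pgrep ships with the procps tools on most distributions:

pgrep -l chrome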

Killing the process

Now we come to the task of killing the process. We have two pieces of information that will help us kill the errant process:

- Process name
- Process ID

Which you use will determine the command used for termination. There are two commands used to kill a process:

- kill – Kill a process by ID
- killall – Kill a process by name

There are also different signals that can be sent to both kill commands. What signal you send will be determined by what results you want from the kill command. For instance, you can send the HUP (hang up) signal to the kill command, which will effectively restart the process. This is always a wise choice when you need the process to immediately restart (such as in the case of a daemon). You can get a list of all the signals that can be sent to the kill command by issuing kill -l. You’ll find quite a large number of signals (Figure 3).

Figure 3: The available kill signals.
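
Returning to the HUP example mentioned above: if a daemon that handles the hangup signal were running with a hypothetical PID of 1234, you could trigger that reload-and-restart behavior like so (the PID here is purely a placeholder for illustration):

kill -HUP 1234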

The most common kill signals are:

Signal Name     Signal Value    Effect
SIGHUP          1               Hangup
SIGINT          2               Interrupt from keyboard
SIGKILL         9               Kill signal
SIGTERM         15              Termination signal
SIGSTOP         17, 19, 23      Stop the process

What’s nice about this is that you can use the Signal Value in place of the Signal Name. So you don’t have to memorize all of the names of the various signals.
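
For example, assuming a hypothetical PID of 1234, both of the following commands send the termination signal, one by name and one by value:

kill -TERM 1234

kill -15 1234
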
So, let’s now use the kill command to kill our instances of chrome. The structure for this command would be:

kill SIGNAL PID

Where SIGNAL is the signal to be sent and PID is the Process ID to be killed. We already know, from our ps command, that the PIDs we want to kill are 3827, 3919, 10764, and 11679. So to send the kill signal, we’d issue the commands:

kill -9 3827

kill -9 3919

kill -9 10764

kill -9 11679

Once we’ve issued the above commands, all of the chrome processes will have been successfully killed.

Let’s take the easy route! If we already know the process we want to kill is named chrome, we can make use of the killall command and send the same signal to the process like so:

killall -9 chrome

The only caveat to the above command is that it may not catch all of the running chrome processes. If, after running the above command, you issue the ps aux | grep chrome command and see remaining processes running, your best bet is to go back to the kill command and send signal 9 to terminate the process by PID.
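
If you’d rather not copy the remaining PIDs by hand, one possible shortcut (a sketch that assumes pgrep is available, as it is on most distributions) is to feed any leftover chrome PIDs straight back into kill:

kill -9 $(pgrep chrome)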

Ending processes made easy

As you can see, killing errant processes isn’t nearly as challenging as you might have thought. When I wind up with a stubborn process, I tend to start off with the killall command as it is the most efficient route to termination. However, when you wind up with a really feisty process, the kill command is the way to go.

The post Classic SysAdmin: How to Kill a Process from the Linux Command Line appeared first on Linux Foundation.

8 fundamental Linux file-management commands for new users

Learn how to create, copy, move, rename, and delete files and directories from the Linux command line.

Read More at Enable Sysadmin

8 essential Linux file navigation commands for new users

It’s straightforward to get around your Linux system if you know these basic commands.

Read More at Enable Sysadmin

5 scripts for getting started with the Nmap Scripting Engine

The NSE boosts Nmap’s power by adding scripting capabilities (custom or community-created) to the network scanning tool.

Read More at Enable Sysadmin

3 tools for troubleshooting packet filtering

Use Nmap, Wireshark, and tcpdump to sniff out router problems on your network.

Read More at Enable Sysadmin

Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble

Brian Behlendorf

As someone who has spent their entire career in open source software (OSS), the Log4Shell scramble (an industry-wide four-alarm-fire to address a serious vulnerability in the Apache Log4j package) is a humbling reminder of just how far we still have to go. OSS is now central to the functioning of modern society, as critical as highway bridges, bank payment platforms, and cell phone networks, and it’s time OSS foundations started to act like it.

Organizations like the Apache Software Foundation, the Linux Foundation, the Python Foundation, and many more, provide legal, infrastructural, marketing and other services for their communities of OSS developers. In many cases the security efforts at these organizations are under-resourced and hamstrung in their ability to set standards and requirements that would mitigate the chances of major vulnerabilities, for fear of scaring off new contributors. Too many organizations have failed to apply raised funds or set process standards to improve their security practices, and have unwisely tilted in favor of quantity over quality of code.

What would “acting like it” look like? Here are a few things that OSS foundations can do to mitigate security risks:

- Set up an organization-wide security team to receive and triage vulnerability reports, as well as coordinate responses and disclosures to other affected projects and organizations.
- Perform frequent security scans, through CI tooling, for detecting unknown vulnerabilities in the software and recognizing known vulnerabilities in dependencies.
- Perform occasional outside security audits of critical code, particularly before new major releases.
- Require projects to use test frameworks, and ensure high code coverage, so that features without tests are discouraged and underused features are weeded out proactively.
- Require projects to remove deprecated or vulnerable dependencies. (Some Apache projects are not vulnerable to the Log4j v2 CVE, because they are still shipping with Log4j v1, which has known weaknesses and has not received an update since 2015!)
- Encourage, and then eventually require, the use of SBOM formats like SPDX to help everyone track dependencies more easily and quickly, so that vulnerabilities are easier to find and fix.
- Encourage, and then eventually require, maintainers to demonstrate familiarity with the basics of secure software development practices.

Many of these are incorporated into the CII Best Practices badge, one of the first attempts to codify these into an objective comparable metric, and an effort that has now moved to OpenSSF. The OpenSSF has also published a free course for developers on how to develop secure software, and SPDX has recently been published as an ISO standard.

None of the above practices is about paying developers more, or channeling funds directly from users of software to developers. Don’t get me wrong, open source developers and the people who support them should be paid more and appreciated more in general. However, it would be an insult to most maintainers to suggest that if you’d just slipped more money into their pockets they would have written more secure code. At the same time, it’s fair to say a tragedy-of-the-commons hits when every downstream user assumes that these practices are in place, being done and paid for by someone else.

Applying these security practices and providing the resources required to address them is what foundations are increasingly expected to do for their community. Foundations should begin to establish security-related requirements for their hosted and mature projects. They should fundraise from stakeholders the resources required for regular paid audits for their most critical projects, scanning tools and CI for all their projects, and have at least a few paid staff members on a cross-project security team so that time-critical responses aren’t left to individual volunteers. In the long term, foundations should consider providing resources to move critical projects or segments of code to memory-safe languages, or fund bounties for more tests.

Let’s be clear: the Apache Software Foundation seems to have much of this right. Despite being notified just before the Thanksgiving holiday, their volunteer security team worked with the Log4j maintainers and responded quickly. Log4j also has almost 8000 passing tests in its CI pipeline, but even all that testing didn’t catch the way this vulnerability could be exploited. And in general, Apache projects are not required to have test coverage at all, let alone run the kind of SAST security scans or host third-party audits that might have caught this.

Many other foundations, including those hosted at the Linux Foundation, also struggle to do all this – this is not easy to push through the laissez-faire philosophy that many foundations have regarding code quality, and third-party code audits and tests don’t come cheap. But for the sake of sustainability, reducing the impact on the broader community, and being more resilient, we have got to do better. And we’ve got to do this together, as a crisis of confidence in OSS affects us all.

This is where OpenSSF comes in, and what pulled me to the project in the first place. In the new year you’ll see us announce a set of new initiatives that build on the work we’ve been doing to “raise the floor” for security in the open source community. The only way we do this effectively is to develop tools, guidance, and standards that make adoption by the open source community encouraged and practical rather than burdensome or bureaucratic. We will be working with and making grants to other open source projects and foundations to help them improve their security game. If you want to stay close to what we’re doing, follow us on Twitter or get involved in other ways. For a taste of where we’ve been to date, read our segment in the Linux Foundation Annual Report, or watch our most recent Town Hall.

Hoping for a 2022 with fewer four alarm fires,

Brian

Brian Behlendorf is General Manager of the Linux Foundation’s Open Source Security Foundation (OpenSSF). He was a founding member of the Apache Group, which later became the Apache Software Foundation, and served as president of the foundation for three years.

The post Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble appeared first on Linux Foundation.

OSPOlogy: Learnings from OSPOs in 2021

In 2021, OSPOlogy covered a wide range of open source topics essential to OSPO-related activities, featuring experts from mature OSPOs like Bloomberg and RIT and from the communities behind open source standards like OpenChain and CHAOSS.

The TODO Group has been paving the OSPO path over a decade of change and is now composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level. 

The TODO Group Member Landscape

One of the many initiatives that the TODO Group has been working on since last August is OSPOlogy. With OSPOlogy, the TODO Group aims to make it easier for organizations across sectors to understand and adopt OSPOs through open and transparent networking: engaging with open source leaders in real-time conversations.

“In OSPOlogy, we have the participation of experienced OSPO leaders like Bloomberg, Microsoft or SAP, widely adopted projects/initiatives such as OpenChain, CHAOSS or SPDX, and industry open source specialists like LF Energy or FINOS. There is a huge diversity of folks in the open source ecosystem that help people and organizations to improve their Open Source Programs, their OSPO management skills, or advance in their OSPO careers. Thus, after listening to the community demands, we decided to offer a space with dedicated resources to make these connections happen, under an open governance model designed to encourage other organizations and communities to contribute.”

AJ – OSPO Program Manager at TODO Group

What has OSPOlogy accomplished so far?

Within the OSPOlogy 2021 series, we had insightful discussions covering five different OSPO topics:

- [August 4, 2021] How to start an OSPO with Bloomberg
- [September 1, 2021] Mentoring and Talent Management within OS Ecosystems with US Bank
- [October 13, 2021] The State of OSPOs in 2021 with LF
- [November 17, 2021] Academic OSPOs with CHAOSS and RIT
- [December 1, 2021] Governance in the Context of Compliance and Security with OpenChain

For more information, please watch the video replays on our OSPOlogy YouTube channel.

The format is pretty simple: OSPOlogy kicks off each meeting with the OSPO news happening worldwide that month, then moves to the topic of the day, where featured guests introduce a topic relevant to OSPOs and ways to set up open source initiatives. These two sections are recorded and published on the LF Community platform and the new OSPOlogy YouTube channel.

Once the presentation finishes, we stop the recording and move to real-time conversation and a Q&A section under the Chatham House Rule, in order to keep a safe environment where the community can freely share their opinions and issues.

“One of the biggest challenges when preparing the 2021 agenda was to get used to the new platform used to host these meetings and find contributors to kick off the initiative. We keep improving the quality and experience of these meetings every month and, thanks to the feedback received from the community, are building new stuff for 2022.”

AJ – OSPO Program Manager at TODO Group

TODO Mission: build the next OSPOlogy 2022 series together

The TODO Group places great importance on neutrality. That’s why this project (like the other TODO projects) operates under an open governance model, allowing people from other organizations and peers across sectors to freely contribute and grow this initiative together.

OSPOlogy  has a planning doc, governance guidelines, and a topic pool agenda to:

- Propose new topics
- Offer to be a moderator
- Become a speaker

https://github.com/todogroup/ospology/tree/main/meetings.

“During the past months, we have been reaching out to other communities like FINOS, LF Energy, OpenChain, SPDX, or CHAOSS. These projects have become of vital importance to many OSPO activities (either for specific activities, such as managing Open Source Compliance & ISO Standards, measuring the impact of relevant open source projects or helping to overcome entry barriers for more traditional sectors, like finance or energy industry)” 

OSPOlogy, along with the TODO Associates program, aims to bring together all these projects to introduce them to the OSPO community and drive insightful discussions. These are some of the topics proposed by the community for 2022:

- How to start an OSPO within the Energy sector
- How to start an OSPO within the Finance sector
- Measuring the impact of the open source projects that matter to your organization
- Open Source Compliance best practices through the lens of an OSPO

OSPOlogy is not just limited to LF projects and the TODO Community. Outside initiatives, foundations, or vendors that work closely with OSPOs and help the OSPO movement are also welcome to join.

We have just created a CFP form so people can easily add their OSPO topics for upcoming OSPOlogy sessions:

https://github.com/todogroup/ospology/blob/main/.github/ISSUE_TEMPLATE/call-for-papers.yml

In order to propose a topic, interested folks just need to open an issue using the call for papers GitHub form.

The TODO Group’s journey: Paving the OSPO path over a decade of change

Significant advancements and community shifts have occurred in the open source ecosystem, and in the way organizations advance in their open source journey, since the TODO Group was formed. At that time, most OSPOs were concentrated in the Bay Area and led by software companies, which preferred to share only limited information due to the uncertainty across the industry.

OSPO Maturity Levels

However, this early version of TODO is a far cry from what it (and OSPOs) represent in the present day.

With digital transformation forcing all organizations to be open source forward and OSPOs adopted by multiple sectors, the TODO Group is composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level.

It is well known that TODO Group members are also OSPO mentors and advocates who have been working in the open source industry for years.

At the TODO Group, we know the huge value these experienced OSPO leaders can bring to the community, since they can help pave the path for the new generation of OSPOs, cultivating the open source ecosystem. Two main challenges mark 2022:

- Provide structure and guidance within the OSPO industry based on the experience of mature OSPO professionals across sectors and stages.
- Collaborate with other communities to enhance this guidance.

New OSPO challenges are coming, and new TODO milestones and initiatives are taking shape to help the OSPO movement succeed across sectors. You will hear news about TODO’s 2022 strategic goals and direction very soon!

The post OSPOlogy: Learnings from OSPOs in 2021 appeared first on Linux Foundation.

A 2021 Linux Foundation Research Year in Review

Through LF Research, the Linux Foundation is uniquely positioned to create the definitive repository of insights into open source. By engaging with our community members and leveraging the full resources of our data sources, including a new and improved LFX, we’re not only shining a light on the scope of the projects that comprise much of the open source paradigm but contextualizing their impact. In the process, we’re creating both a knowledge hub and an ecosystem-wide knowledge network. Because, after all, research is a team sport.

Taking inspiration from research on open innovation, LF Research will explore open source amidst the challenges of the current era. These include challenges like the COVID-19 pandemic, climate risk, and accelerating digital transformation — all changing what it means to be a technology company or an organization that deeply relies on innovation. By publishing a new suite of research deliverables that aid in strategy formation and decision-making, LF Research intends to create shared value for all stakeholders in our community and inspire greater levels of participation in it. 

Completed Core Research

The 2021 Linux Foundation Report on Diversity, Equity, and Inclusion in Open Source, produced in partnership with AWS, CHAOSS, Comcast, Fujitsu, GitHub, GitLab, Hitachi, Huawei, Intel, NEC, Panasonic, Red Hat, Renesas, and VMware, seeks to understand the demographics and dynamics concerning overall participation in open source communities and to identify gaps to be addressed, all as a means to advancing inclusive cultures within open source environments. This research aims to drive data-driven decisions on future programming and interventions to benefit the people who develop and ultimately use open source technologies. Enterprise Digital Transformation, Techlash, Political Polarization, Social Media Ecosystem, and Content Moderation are all cited as trends that have exposed and amplified exclusionary narratives and designs, mandating increased awareness, and recalibrating individual and organizational attention. Beyond the survey findings that identify the state of DEI, this research explores a number of DEI initiatives and their efficacy and recommends action items for the entire stakeholder ecosystem to further their efforts and build inclusion by design.

Core Research in Progress

The Software Bill of Materials (SBOM) Readiness Survey (estimated release: Q1 2022), produced in partnership with the Open Source Security Foundation, OpenChain, and SPDX, is the Linux Foundation’s first project in a series designed to explore ways to better secure the software supply chains. With a focus on SBOMs, the findings are based on a worldwide survey of IT professionals who understand their organization’s approach to software development, procurement, compliance, or security. An important driver for this survey is the recent U.S. Executive Order on Cybersecurity, which focuses on producing and consuming SBOMs. 

Completed Project-Focused Research

- The Fourth Annual Open Source Program Management (OSPO) Survey, produced in collaboration with the TODO Group and The New Stack, examines the prevalence and outcomes of open source programs, including the key benefits and barriers to adoption.
- The 2021 State of Open Source in Financial Services Report, produced in partnership with FINOS, Scott Logic, Wipro, and GitHub, explores the state of open source in the financial services sector. The report identifies current levels of consumption and contribution of open source software and standards in this industry and the governance, cultural, and aspirational issues of open source among banks, asset managers, and hedge funds.
- The 2021 Data and Storage Trends Survey, produced in collaboration with the SODA Foundation, identifies the current challenges, gaps, and trends for data and storage in the era of cloud-native, edge, AI, and 5G.
- The 9th Annual Open Source Jobs Report, produced in partnership with edX, provides actionable insights on the state of open source talent that employers can use to inform their hiring, training, and diversity awareness efforts.

The post A 2021 Linux Foundation Research Year in Review appeared first on Linux Foundation.

Vision 2022: Open Networking & Edge Predictions

By Arpit Joshipura, GM Networking and Edge, The Linux Foundation

As we wrap up the second year of living through a global pandemic, I wanted to take a moment both to look ahead to next year and to recognize how the open networking and edge industry has shifted over the past year. Read below for a list of what we can expect in 2022, as well as a brief “report card” on where my industry predictions from last year landed.

1. Dis-aggregation will enter the “Re-aggregation” phase (in terms of software, organizations, and industries) 

This will be enabled by Super Blueprints (which bring end-to-end open source projects together), and we’ll see more multi-org collaboration (e.g., Standards Bodies, Alliances, and Foundations) re-aggregating to solve common problems. Edge computing will serve as the glue that binds common IoT frameworks together across vertical industries. 

2. Realists and Visionaries will fight it out for dollars and productivity 

Given that what started as a pandemic could become endemic, there will be an internal tussle between Realists (making money off of 4G), Engineers currently coding 5G,  and Visionaries looking to 6G and beyond. (In other words, the cycle continues). 

3. Security will emerge as the key differentiator in Open source

Collaboration among governments and other global organizations against “bad actors” will penetrate geopolitical walls to bring a global ecosystem together, via open source. 

4. Market Analysts will reinvent themselves

There is no longer a clear way to track Cloud, Telecom, Enterprise, and other markets individually. There is a big market realignment in progress, with new killer use cases. 

5. Seamless Vertical industries will emerge

Enabled by Open Source Software — many vertical industries will not even know (or care) how the pipe traverses across their last mile to central cloud and edges (led by Manufacturing, Retail, Energy, Healthcare & Automotive). 

What did I miss? I would love to have your comments on LinkedIn.

Now let’s take a look at where my predictions from last year actually landed…

See my 2021 predictions from last year: https://www.lfnetworking.org/blog/2020/12/15/predictions-2021-networking-edge/ 


Thanking our Communities and Members, and Building Positive Momentum in 2022

We could not imagine what was on the horizon ahead of us as COVID reared its head in late 2019. Locally and globally, we’ve weathered many challenges, adjusted our sails, and applied new tools and approaches to continue our momentum. As we now approach 2022, our hopes aim even higher as we pursue new horizons and strengthen our established communities. We’re emerging stronger and better equipped to tackle these great challenges, and your help has made it all possible.

Your willingness to engage in our local, virtual, and large-scale in-person events was invaluable. These meetings demonstrated that the bonds within our hosted communities and families of open source foundations remain strong. Thank you for coming back to the events and making them successful.

In 2021, we continued to see organizations embrace open collaboration and open source principles, accelerating new innovations, approaches, and best practices. Not only have we seen compelling new project additions this year, but these projects are bringing new organizations into our community. In 2021, the LF welcomed a new organization nearly every day.

As we look to 2022, we see a diverse and growing pipeline of new projects across open source and standards. We see new demand to guide and develop projects in 5G, supply chain security, open data, and open governance networks. Throughout the continuing challenges of 2021, we remain focused on open collaboration as the means for enabling the technologies and solutions of the future. 

We thank our communities and members for your continued confidence in our ability to navigate a challenging business environment and your lasting and productive partnerships. We wish you prosperity and success in 2022.

Our yearly achievements would not be possible without the efforts of the Linux Foundation’s communities and members. Read our 2021 Annual Report here.

The post Thanking our Communities and Members, and Building Positive Momentum in 2022 appeared first on Linux Foundation.