
The Central Security Project: Vulnerability Reporting for Open Source Java

When a security researcher finds a security bug, what do they do? Unfortunately, the answer is sometimes that they search for the appropriate people to notify and, when those people can't be found, end up posting the vulnerability to public mailing lists, the GitHub project, or even Twitter.

This is the problem that security platform HackerOne and software supply chain management tool Sonatype have teamed up to solve with The Central Security Project, a new effort that “brings together the ethical hacker and open source communities to streamline the process for reporting and resolving vulnerabilities discovered in libraries housed in The Central Repository, the world’s largest collection of open source components,” according to a statement.

“We have a critical need to centralize security reporting in the open source industry especially given the proliferation of ecosystems like Github which encourage decentralization,” said Blevins. “The Central Security Project is a significant industry milestone that creates an open source reporting ecosystem that can function at GitHub scale.”

Read more at The New Stack

Kubernetes 1.14 Enhances Cloud-Native Platform With Windows Nodes

The first major update of the open-source Kubernetes cloud-native platform in 2019 was released on March 25, with the general availability of Kubernetes 1.14.

Kubernetes is a broadly deployed container orchestration system project that is hosted by the Cloud Native Computing Foundation (CNCF) and benefits from a diverse set of contributors and vendors that support and develop the project. With Kubernetes 1.14, the project is adding 10 new enhancements as stable features that provide new capabilities for users. Among the biggest enhancements is production-level support for Windows nodes.

“I’m proud of just the fact that in Kubernetes 1.14 there are more stable enhancements than any other previous Kubernetes release,” Aaron Crickenberger, Google test engineer and Kubernetes 1.14 release lead, told eWEEK. “The continued focus on stability speaks to this community’s commitment.”

One of the biggest changes overall in Kubernetes 1.14 wasn’t any one specific feature, but rather a new process for defining how and when enhancements are accepted and move through the Kubernetes development cycle. The Kubernetes Enhancement Proposal (KEP) approach was first implemented for Kubernetes 1.14 and helped Crickenberger and the broader community manage the enhancements process.

Read more at eWeek

Using Square Brackets in Bash: Part 1

After taking a look at how curly braces ({}) work on the command line, now it’s time to tackle brackets ([]) and see how they are used in different contexts.

Globbing

The first and easiest use of square brackets is in globbing. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type, say, you wanted to list JPEGs, but not PNGs:

ls *.jpg

Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.

In the example above, the asterisk means “zero or more characters”. There is another globbing wildcard, ?, which means “exactly one character”, so, while


ls d*k*

will list files called darkly and ducky (and dark and duck — remember * can also be zero characters),


ls d*k?

will not list darkly (or dark or duck), but it will list ducky.
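You can check this for yourself by creating the example files in a throwaway directory (a quick sketch; mktemp just gives us a clean place to experiment):

```shell
# create the example files in a clean, disposable directory
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch dark darkly duck ducky

ls d*k*    # lists: dark darkly duck ducky
ls d*k?    # lists only: ducky
```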

Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, cd into it, and create a bunch of files like this:


touch file0{0..9}{0..9}

(If you don’t know why that works, take a look at the last installment that explains curly braces {}).

This will create files file000, file001, file002, etc., through file097, file098 and file099.

Then, to list the files in the 70s and 80s, you can do this:


ls file0[78]?

To list file022, file027, file028, file052, file057, file058, file092, file097, and file098 you can do this:


ls file0[259][278]

Of course, you can use globbing (and square brackets for sets) for more than just ls. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.

Let’s say you want to create duplicates of files file010 through file029 and call the copies archive010, archive011, archive012, and so on.

You can’t do:


cp file0[12]? archive0[12]?

Because globbing is for matching against existing files and directories and the archive… files don’t exist yet.

Doing this:


cp file0[12]? archive0[1..2][0..9]

won’t work either, because cp doesn’t let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:


mkdir archive

cp file0[12]? archive

would work, but it would copy the files, using their same names, into a directory called archive/. This is not what you set out to do.

However, if you look back at the article on curly braces ({}), you will remember how you can use % to lop off the end of a string contained in a variable.

Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of %, you use #.

For practice, you can try this:


myvar="Hello World"

echo Goodbye Cruel ${myvar#Hello}

It prints “Goodbye Cruel World” because #Hello gets rid of the Hello part at the beginning of the string stored in myvar.
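Side by side, # trims a matching pattern from the beginning of the value, while % trims it from the end (a quick sketch using the same variable):

```shell
myvar="Hello World"

# '#' removes a matching pattern from the beginning of the string
echo "${myvar#Hello}"    # prints " World" (note the leading space)

# '%' removes a matching pattern from the end of the string
echo "${myvar%World}"    # prints "Hello " (note the trailing space)
```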

You can use this feature alongside your globbing tools to make your archive duplicates:


for i in file0[12]?;

do

cp $i archive${i#file};

done

The first line tells the Bash interpreter that you want to loop through all the files whose names consist of the string file0 followed by the digit 1 or 2, and then one other character, which can be anything. The second line, do, indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.

Line 3 is where the actual copying happens, and you use the contents of the loop variable i twice: first, straight out, as the first parameter of the cp command, and then you add archive to its contents, while at the same time cutting off file. So, if i contains, say, file019


"archive" + "file019" - "file" = "archive019"

the cp line is expanded to this:


cp file019 archive019

Finally, notice how you can split a chain of commands over several lines for clarity: here the loop is simply broken at natural points, but you could also end a line with a backslash to continue a single long command.
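The same loop also fits on a single line, which is handy at an interactive prompt (a sketch that sets up its own test files in a throwaway directory, so it is safe to run as-is):

```shell
# recreate the test files from earlier in a disposable directory
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch file0{0..9}{0..9}

# one-line equivalent of the multi-line loop above
for i in file0[12]?; do cp "$i" "archive${i#file}"; done

ls archive0*    # archive010 through archive029
```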

In part two, we’ll look at more ways to use square brackets. Stay tuned.

Why The CDF Launch From Linux Foundation Is Important For the DevOps And Cloud Native Ecosystem

Earlier this month, the Linux Foundation announced the formation of the Continuous Delivery Foundation (CDF) – a new foundation that joins the likes of the Cloud Native Computing Foundation (CNCF) and the Open Container Initiative (OCI).

Continuous Integration and Continuous Delivery (CI/CD) has become an essential building block of modern application lifecycle management. This technique allows businesses to increase the velocity of delivering software to users. Through CI/CD, what was once confined to large, web-scale companies became available to early-stage startups and enterprises.

With the launch of the CDF, the Linux Foundation has taken the first step in bringing the most popular CI/CD tools under the same roof. This would enable key contributors such as CloudBees, Netflix, Google, and other members to collaborate rather than duplicate efforts. The foundation would encourage members to contribute to critical areas of software delivery with an increased focus. This immensely helps the ecosystem adopt best-of-breed tools and the best practices for implementing CI/CD pipelines.

Read more at Forbes

Move your Dotfiles to Version Control

There is something truly exciting about customizing your operating system through the collection of hidden files we call dotfiles. In What a Shell Dotfile Can Do For You, H. “Waldo” Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let’s dig into the why and how of sharing them.

What’s a dotfile?

“Dotfiles” is a common term for all the configuration files we have floating around our machines. These filenames usually start with a ., like .gitconfig, and operating systems often hide them by default. For example, when I use ls -a on macOS, it shows all the lovely dotfiles that would otherwise not be in the output.

dotfiles on master
➜ ls
README.md  Rakefile   bin       misc    profiles   zsh-custom

dotfiles on master
➜ ls -a
.               .gitignore      .oh-my-zsh      README.md       zsh-custom
..              .gitmodules     .tmux           Rakefile
.gemrc          .global_ignore  .vimrc          bin
.git            .gvimrc         .zlogin         misc
.gitconfig      .maid           .zshrc          profiles

If I take a look at one, .gitconfig, which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. 
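A minimal way to start sharing dotfiles is a plain Git repository. Here is a sketch; the paths and file names are examples, and the throwaway HOME is only there so the snippet is safe to run verbatim:

```shell
# use a throwaway HOME so this demo touches nothing real; drop this
# line (and the stand-in dotfile below) when doing it for real
export HOME=$(mktemp -d)
printf '[user]\n  name = Example\n' > "$HOME/.gitconfig"

# create a repository and start tracking a dotfile in it
mkdir -p "$HOME/dotfiles"
cd "$HOME/dotfiles"
git init -q
cp "$HOME/.gitconfig" .
git add .gitconfig
git -c user.name=Example -c user.email=ex@example.com \
    commit -q -m "Add gitconfig"
git log --oneline    # shows the new commit
```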

Read more at OpenSource.com

Text Processing in Rust

Create handy command-line utilities in Rust.

This article is about text processing in Rust, but it also contains a quick introduction to pattern matching, which can be very handy when working with text.

Strings are a big subject in Rust: the language has two data types for representing strings, as well as macro support for formatting them. All of this also shows how powerful Rust is at string and text processing.

Apart from covering some theoretical topics, this article shows how to develop some handy yet easy-to-implement command-line utilities that let you work with plain-text files. If you have the time, it’d be great to experiment with the Rust code presented here, and maybe develop your own utilities.

Rust and Text

Rust supports two data types for working with strings: String and str. The String type is for working with mutable strings that belong to you, and it has both a length and a capacity property. The str type, on the other hand, is for working with immutable strings that you want to pass around. You will most likely see an str variable used as &str. Put simply, an str variable is accessed as a reference to some UTF-8 data. An str variable is usually called a “string slice” or, even simpler, a “slice”. Due to its nature, you can’t add or remove any data from an existing str variable.

Read more at Linux Journal

How Open Source Is Accelerating NFV Transformation

Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of open source as the path to innovation resonates on many levels.

In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, who gave a keynote address at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.

One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.

“There are projects now, like Kubernetes, that are too big for any one company to do. There’s technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”

Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.

Linux.com: Why is open source central to innovation in general for telecommunications service providers?

Nadeau: The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.

And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They’re becoming much more flexible, more modular, and open source is the best means to achieve that.

Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.

Nadeau: Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today’s marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.

There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.

NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came “disaggregated VMs” where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it’s still NFV.

You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.

But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we’re back to square one where you lose 80% of the performance because of the latest SOA model that they’ve implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it’s still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.

Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.

Nadeau: Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it’s the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it’s all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It’s secure, it’s flexible, and scalable, so operators can really use it as a tool now.

Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?

Nadeau: Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses.

These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they’re using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to the partner programs we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.

Learn more at Open Networking Summit, happening April 3-5 at the San Jose McEnery Convention Center.

How to Monitor Disk IO in Linux

iostat is used to get input/output statistics for storage devices and partitions. iostat is part of the sysstat package. With iostat, you can monitor the read/write speeds of your storage devices (such as hard disk drives and SSDs) and their partitions. In this article, I am going to show you how to monitor disk input/output using iostat in Linux. So, let’s get started.

Installing iostat on Ubuntu/Debian:

The iostat command is not available on Ubuntu/Debian by default, but you can easily install the sysstat package from the official Ubuntu/Debian package repository using the APT package manager; as mentioned before, iostat is part of sysstat.

First, update the APT package repository cache with the following command:

sudo apt update

Read more at LinuxHint

Linux Desktop News: Zorin OS 15 Gets New Touch Interface, Android Sync And Native Flatpak Support

One of the things I love about using Linux is how connected you feel to the community. That’s especially true when the actual creator and CEO of a Linux desktop OS reaches out and personally invites you to give it a test drive. And after reading what’s in store for Zorin OS 15 (currently in beta), this one just climbed higher on my list of distributions to discover. … Read more at Forbes

Future of the Firm

The “future of the firm” is a big deal. As jobs become more automated, and people more often work in teams, with work increasingly done on a contingent and contract basis, you have to ask: “What does a firm really do?” Yes, successful businesses are increasingly digital and technologically astute. But how do they attract and manage people in a world where two billion people work part-time? How do they develop their workforce when automation is advancing at light speed? And how do they attract customers and full-time employees when competition is high and trust is at an all-time low?

When thinking about the big-picture items affecting the future of the firm, we identified several topics that we discuss in detail in this report:

Trust, responsibility, credibility, honesty, and transparency.

Customers and employees now look for, and hold accountable, firms whose values reflect their own personal beliefs. We’re also seeing a “trust shakeout,” where brands that were formerly trusted lose trust, and new companies build their positions based on ethical behavior. And companies are facing entirely new “trust risks” in social media, hacking, and the design of artificial intelligence (AI) and machine learning (ML) algorithms.

The search for meaning.

Employees don’t just want money and security; they want satisfaction and meaning. They want to do something worthwhile with their lives.

New leadership models and generational change.

Firms of the 20th century were based on hierarchical command and control models. Those models no longer work. In successful firms, leaders rely on their influence and trustworthiness, not their position.

Read more at O’Reilly