
Faucet: An Open Source SDN Controller for High-Speed Production Networks

Open standards such as OpenFlow and P4 promised to improve the landscape by opening access to network devices via a programmable API, but they still require someone to write a controller to re-implement normal switch functionality, such as forwarding and routing, in a multi-vendor, standards-compliant way. This led our group to write the Faucet software-defined network (SDN) controller, which allows anyone to fully realize the dream of programmable networks.

Faucet is a compact, open source OpenFlow controller that enables users to run their networks the same way they run server clusters. Faucet makes networking approachable to all by bringing the DevOps workflow to networking. It does this by making network functions (like routing protocols, neighbor discovery, and switching algorithms) easy to manage, test, and extend by moving them to regular software that runs on a server, versus the traditional approach of embedding these functions in the firmware of a switch or router. Faucet works by ingesting a YAML configuration file that represents the network topology and required network functionality, and it does the work to program every device on the network with OpenFlow.

Read more at OpenSource.com

Ansible vs. Puppet: Declarative DevOps Tools Square Off

DevOps aims to drive collaboration between development and operations teams, but software quality drives DevOps adoption more than any other factor. As this comparison of Ansible vs. Puppet shows, software quality dramatically influences the choice of DevOps tools.

Software quality tends to be an organizational goal or a staff function, not the domain of a dedicated group with broad responsibility to implement its decisions. Effective software quality efforts involve everyone from developers to production users to ensure real value.

Puppet and Ansible are declarative configuration management and automation tools used in DevOps shops. They both help organizations ensure software quality. Evaluate Ansible vs. Puppet to determine how each product fits the software quality-driven requirements for DevOps.

Read more at TechTarget

An Introduction to the Machine Learning Platform as a Service

Machine-Learning-Platform-as-a-Service (ML PaaS) is one of the fastest-growing services in the public cloud. It delivers efficient lifecycle management of machine learning models.

At a high level, there are three phases involved in training and deploying a machine learning model. These phases remain the same from classic ML models to advanced models built using sophisticated neural network architectures.

Provision and Configure Environment

Before the actual training takes place, developers and data scientists need a fully configured environment with the right hardware and software configuration.

Read more at The New Stack

Linux Tools: The Meaning of Dot

Let’s face it: writing one-liners and scripts using shell commands can be confusing. Many of the names of the tools at your disposal are far from obvious in terms of what they do (grep, tee and awk, anyone?) and, when you combine two or more, the resulting “sentence” looks like some kind of alien gobbledygook.

None of the above is helped by the fact that many of the symbols you use to build a chain of instructions can mean different things depending on their context.

Location, location, location

Take the humble dot (.) for example. Used with instructions that are expecting the name of a directory, it means “this directory” so this:

find . -name "*.jpg"

translates to “find, in this directory (and all its subdirectories), files whose names end in .jpg”.

Both ls . and cd . act as expected, so they list and “change” to the current directory, respectively, although including the dot in these two cases is not necessary.

Two dots, one after the other, in the same context (i.e., when your instruction is expecting a directory path) means “the directory immediately above the current one”. If you are in /home/your_directory and run

cd ..

you will be taken to /home. So, you may think this still kind of fits into the “dots represent nearby directories” narrative and is not complicated at all, right?

How about this, then? If you use a dot at the beginning of a directory or file name, that directory or file will be hidden:

$ touch somedir/file01.txt somedir/file02.txt somedir/.secretfile.txt
$ ls -l somedir/
total 0 
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt 
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt 
$ # Note how there is no .secretfile.txt in the listing above
$ ls -la somedir/
total 8 
drwxr-xr-x  2 paul paul 4096 Jan 13 19:57 . 
drwx------ 48 paul paul 4096 Jan 13 19:57 .. 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 file01.txt 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 file02.txt 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 .secretfile.txt
$ # The -a option tells ls to show "all" files, including the hidden ones

And then there’s when you use . as a command. Yep! You heard me: . is a full-fledged command. It is a synonym of source, and you use it to execute a file in the current shell, as opposed to running a script some other way (which usually means Bash will spawn a new shell in which to run it).

Confused? Don’t worry — try this: Create a script called myscript that contains the line

myvar="Hello"

and execute it the regular way, that is, with sh myscript (or by making the script executable with chmod a+x myscript and then running ./myscript). Now try to see the contents of myvar with echo $myvar (spoiler: you will get nothing). This is because, when your script plunks “Hello” into myvar, it does so in a separate Bash shell instance. When the script ends, the spawned instance disappears and control returns to the original shell, where myvar never even existed.
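The whole experiment looks something like this (assuming you created myscript in the current directory):

$ cat myscript
myvar="Hello"
$ sh myscript
$ echo $myvar

$ # Nothing was printed: myvar was set inside a child shell that is now gone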

However, if you run myscript like this:

. myscript

echo $myvar will print Hello to the command line.

You will often use the . (or source) command after making changes to your .bashrc file, like when you need to expand your PATH variable. You use . to make the changes available immediately in your current shell instance.
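For instance (the extra directory added to PATH here is just an illustration):

$ echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc
$ . ~/.bashrc
$ # The new PATH takes effect in this very shell, with no need to log out and back in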

Double Trouble

Just like the seemingly insignificant single dot has more than one meaning, so does the double dot. Apart from pointing to the parent of the current directory, the double dot (..) is also used to build sequences.

Try this:

echo {1..10}

It will print out the list of numbers from 1 to 10. In this context, .. means “starting with the value on my left, count up to the value on my right”.

Now try this:

echo {1..10..2}

You’ll get 1 3 5 7 9. The ..2 part of the command tells Bash to print the sequence not one by one, but two by two. In other words, you’ll get all the odd numbers from 1 to 10.

It works backwards, too; this prints 10 8 6 4 2:

echo {10..1..2}

You can also pad your numbers with 0s. Doing:

echo {000..121..2}

will print out every even number from 0 to 121 like this:

000 002 004 006 ... 050 052 054 ... 116 118 120 

But how is this sequence-generating construct useful? Well, suppose one of your New Year’s resolutions is to be more careful with your accounts. As part of that, you want to create directories in which to classify your digital invoices going back to 2009:

mkdir {2009..2019}_Invoices

Job done.

Or maybe you have hundreds of numbered files, say, frames extracted from a video clip, and, for whatever reason, you want to remove only every third frame between frames 43 and 61:

rm frame_{043..61..3}

It is likely that, if you have more than 100 frames, they will be named with padded 0s and look like this:

frame_000 frame_001 frame_002 ...

That’s why you will use 043 in your command instead of just 43.
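Incidentally, before running a destructive command like rm, it is worth handing the sequence to echo first to preview exactly which files it will expand to:

$ echo frame_{043..61..3}
frame_043 frame_046 frame_049 frame_052 frame_055 frame_058 frame_061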

Curly~Wurly

Truth be told, the magic of sequences lies not so much in the double dot as in the sorcery of the curly braces ({}). Look how it works for letters, too. Doing:

touch file_{a..z}.txt

creates the files file_a.txt through file_z.txt.

You must be careful, however. Using a sequence like {Z..a} will run through a bunch of non-alphanumeric characters (glyphs that are neither numbers nor letters) that live between the uppercase alphabet and the lowercase one, namely [, \, ], ^, _, and `. Several of these have a special meaning of their own to the shell. Using them to generate names of files could lead to a whole bevy of unexpected and potentially unpleasant effects.

One final thing worth pointing out about sequences encased between {...} is that they can also contain lists of strings:

touch {blahg,splurg,mmmf}_file.txt

creates blahg_file.txt, splurg_file.txt, and mmmf_file.txt. Note that there are no spaces inside the braces; an unquoted space there would stop Bash from performing the expansion at all.
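You can even combine lists and sequences: Bash expands adjacent brace groups into every combination, left to right. The names below are made up purely for illustration:

$ echo {jan,feb,mar}_{01..03}.txt
jan_01.txt jan_02.txt jan_03.txt feb_01.txt feb_02.txt feb_03.txt mar_01.txt mar_02.txt mar_03.txt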

Of course, in other contexts, the curly braces have different meanings (surprise!). But that is the stuff of another article.

Conclusion

Bash and the utilities you can run within it have been shaped over decades by system administrators looking for ways to solve very particular problems. To say that sysadmins and their ways are their own breed of special would be an understatement. Consequently, as opposed to other languages, Bash was not designed to be user-friendly, easy or even logical.

That doesn’t mean it is not powerful — quite the contrary. Bash’s grammar and shell tools may be inconsistent and sprawling, but they also provide a dizzying range of ways to do everything you can possibly imagine. It is like having a toolbox where you can find everything from a power drill to a spoon, as well as a rubber duck, a roll of duct tape, and some nail clippers.

Apart from being fascinating, it is also fun to discover all you can achieve directly from within the shell, so next time we will delve ever deeper into how you can build bigger and better Bash command lines.

Until then, have fun!

How to Use Netcat to Quickly Transfer Files Between Linux Computers

There’s no shortage of software solutions that can help you transfer files between computers. However, if you do this very rarely, the typical solutions such as NFS and SFTP (through OpenSSH) might be overkill. Furthermore, these services are permanently open to receiving and handling incoming connections; if configured incorrectly, they might make your device vulnerable to certain attacks.

netcat, the so-called “TCP/IP Swiss Army knife,” can be used as an ad-hoc solution for transferring files through local networks or the Internet. It’s also useful for transferring data to/from your virtual machines or containers when they don’t include the feature out of the box. You can even use it as a copy-paste mechanism between two devices.
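For a taste of what that looks like, here is a minimal sketch of a one-off transfer. Note that flags vary between netcat implementations (the traditional variant is shown; the OpenBSD version drops the -p), and receiver_host, port 9899, and the file names are placeholders:

$ # On the receiving machine: listen on a port and write whatever arrives to a file
$ nc -l -p 9899 > received_file
$ # On the sending machine: connect to the receiver and stream the file across
$ nc receiver_host 9899 < file_to_send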

Most Linux-based operating systems come with this pre-installed. Open a terminal and type:

Read more at MakeTechEasier

How Enterprise IT Pros Can Contribute to Open Source Projects

Undoubtedly, your company uses open source software. But the powers that be might express reluctance when developers want to create or maintain projects on company time. Here is a roadmap to help you convince them otherwise—starting with an internal open source project office.

Open source innovation has a methodology all its own, and it doesn’t follow traditional business processes. The big difference is that open source development is collaborative rather than competitive. This attitude may come naturally to IT people, but not to managers and rarely to people in the C-suite….

To change the corporate attitude about permitting developers to be embedded in open source projects, you need to get other departments to see the benefits in their own terms.

One way to handle this is by finding allies outside software development circles. For instance, human resources execs could be on your side if you can convince them that companies that support open source development are more attractive to prospective employees. A CFO who is motivated by financial cost savings can “do the numbers” to demonstrate, for argument’s sake, that investing in a developer who spends 20 hours weekly on an open source project is still more cost-effective than purchasing a not-quite-right IT application.

Read more at HPE

Dell Opens Up About Its Linux Efforts And Project Sputnik

Dell’s Barton George has been on a crusade to change how Linux is perceived by both consumers and developers. Six years ago, Dell granted George and his team a $40K innovation fund and some freedom to launch a Linux-powered laptop aimed at developers that would be developed in the open with feedback from the community. This of course became Project Sputnik and began with the outstanding Dell XPS 13 Developer Edition, a laptop designed to “just work” out of the box with Ubuntu. …

Read more at Forbes

Key Resources for Effective, Professional Open Source Management

At organizations everywhere, managing the use of open source software well requires the participation of business executives, the legal team, software architects, software development and maintenance staff, and product managers. One of the most significant challenges is integrating all of these functions, with their very different points of view, into a coherent and efficient set of practices.

More than ever, it makes sense to investigate the many free and inexpensive resources for open source management that are available, and observe the practices of professional open source offices that have been launched within companies ranging from Microsoft to Oath to Red Hat.

Fundamentals

The Linux Foundation’s Fundamentals of Professional Open Source Management (LFC210) course is a good place to start. The course is explicitly designed to help individuals in disparate organizational roles understand the best practices for success.

The course is organized around the key phases of developing a professional open source management program:

  • Open Source Software and Open Source Management Basics
  • Open Source Management Strategy
  • Open Source Policy
  • Open Source Processes
  • Open Source Management Program Implementation

Best Practices

The Linux Foundation also offers a free ebook on open source management: Enterprise Open Source: A Practical Introduction. The 45-page ebook can teach you how to accelerate your company’s open source efforts, based on the experience of hundreds of companies spanning more than two decades of professional enterprise open source management. The ebook covers:

  • Why use open source
  • Various open source business models
  • How to develop your own open source strategy
  • Important open source workflow practices
  • Tools and integration

Official open source programs play an increasingly significant role in how DevOps and open source best practices are adopted by organizations, according to a survey conducted by The New Stack and The Linux Foundation (via the TODO Group). More than half of respondents to the survey (53 percent) across many industries said their organization has an open source software program or has plans to establish one.

“More than anything, open source programs are responsible for fostering open source culture,” the survey’s authors have reported. “By creating an open source culture, companies with open source programs see the benefits we’ve previously reported, including increased speed and agility in the development cycle, better license compliance and more awareness of which open source projects a company’s products depend on.”

Free Guides

How can your organization professionally create and manage a successful open source program, with proper policies and a strong organizational structure? The Linux Foundation offers a complete guide to the process, available here for free. The guide covers an array of topics for open source offices, including roles and responsibilities, corporate structures, elements of an open source management program, how to choose and hire an open source program manager, and more.

The free guide also features contributions from open source leaders. “The open source program office is an essential part of any modern company with a reasonably ambitious plan to influence various sectors of software ecosystems,” notes John Mark Walker, founder of the Open Source Entrepreneur Network (OSEN), in the guide. “If a company wants to increase its influence, clarify its open source messaging, maximize the clout of its projects, or increase the efficiency of its product development, a multifaceted approach to open source programs is essential.”

Interested in even more on professional open source management? Don’t miss The Linux Foundation’s other free guides, which delve into tools for open source management, how to measure the success of an open source program, and much more.

This article originally appeared at The Linux Foundation

(Don’t) Return to Sender: How to Protect Yourself From Email Tracking

There are a lot of different ways to track email, and different techniques can lie anywhere on the spectrum from marginally acceptable to atrocious. Responsible tracking should aggregate a minimal amount of anonymous data, similar to page hits: enough to let the sender get a sense of how well their campaign is doing without invading users’ privacy. Email tracking should always be disclosed up-front, and users should have a clear and easy way to opt out if they choose to. Lastly, organizations that track should minimize and delete user data as soon as possible according to an easy-to-understand data retention and privacy policy.

Unfortunately, that’s often not how it happens. Many senders, including the U.S. government, do email tracking clumsily. Bad email tracking is pervasive, secretive, and leaky. It can expose sensitive information to third parties and sometimes even to others on your network. According to a comprehensive study from 2017, 70% of mailing list emails contain tracking resources. To make matters worse, around 30% of mailing list emails also leak your email address to third-party trackers when you open them. And although it wasn’t mentioned in the paper, a quick survey we did of the same email dataset they used reveals that around 80% of these links were over insecure, unencrypted HTTP.

Here are some friendly suggestions to help make tracking less pervasive, less creepy, and less leaky.

Read more at EFF

Distributed Systems: A Quick and Simple Definition

The technology landscape has evolved into an always-on environment of mobile, social, and cloud applications where programs can be accessed and used across a multitude of devices.

These always-on and always-available expectations are handled by distributed systems, which manage the inevitable fluctuations and failures of complex computing behind the scenes.

“The increasing criticality of these systems means that it is necessary for these online systems to be built for redundancy, fault tolerance, and high availability,” writes Brendan Burns, distinguished engineer at Microsoft, in Designing Distributed Systems. “The confluence of these requirements has led to an order of magnitude increase in the number of distributed systems that need to be built.”

In Distributed Systems in One Lesson, developer relations leader and teacher Tim Berglund says a simple way to think about distributed systems is that they are a collection of independent computers that appears to its user as a single computer.

Read more at O’Reilly