
More About Angle Brackets in Bash

In the previous article, we introduced the subject of angle brackets (< >) and demonstrated some of their uses. Here, we’ll look at the topic from a few more angles. Let’s dive right in.

You can use < to trick a tool into believing the output of a command is data from a file.

Let’s say you are not sure your backup is complete, and you want to check that a certain directory contains all the files copied over from the original. You can try this:

diff <(ls /original/dir/) <(ls /backup/dir/)

diff is a tool that typically compares two text files line by line, looking for differences. Here it takes the output of the two ls commands, treats each as if it were the contents of a file, and compares them as such.

Note that there is no space between the < and the (...).

Running that on the original and backup of a directory where I save pretty pictures, I get:

diff <(ls /My/Pictures/) <(ls /My/backup/Pictures/)
5d4
< Dv7bIIeUUAAD1Fc.jpg:large.jpg

The < in the output is telling me that there is a file (Dv7bIIeUUAAD1Fc.jpg:large.jpg) on the left side of the comparison (in /My/Pictures) that is not on the right side of the comparison (in /My/backup/Pictures), which means copying over has failed for some reason. If diff didn’t cough up any output, it would mean that the lists of files were the same.

So, you may be wondering: if you can take the output of a command or command line, make it look like the contents of a file, and feed it to an instruction that is expecting a file, then in the sorting-by-favorite-actor example from the previous article you could have done away with the intermediate file and just piped the output from the loop into sort.

In short, yep! The line:

sort -r <(while read -r name surname films; do echo "$films $name $surname"; done < CBactors)

does the trick nicely.
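And if you would rather use an actual pipe than process substitution here, the same loop can feed sort directly. This is a small variation of my own on the line above, still assuming the CBactors file from the previous article:

# Same result with an ordinary pipe; the quotes keep any stray
# globbing characters in the data from being expanded.
while read -r name surname films; do
    echo "$films $name $surname"
done < CBactors | sort -r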

Here string! Good string!

There is one more case for redirecting data using angle brackets (or arrows, or whatever you want to call them).

You may be familiar with the practice of passing variables to commands using echo and a pipe (|). Say you want to convert a variable containing a string to uppercase characters because… I don’t know… YOU LIKE SHOUTING A LOT. You could do this:

myvar="Hello World" echo $myvar | tr '[:lower:]' '[:upper:]' HELLO WORLD

The tr command translates strings to different formats. In the example above, you are telling tr to change all the lowercase characters that come along in the string to uppercase characters.

It is important to know that you are not passing on the variable, but only its contents, that is, the string “Hello World”. This is called a here string, as in “it is here, in this context, that we know what string we are dealing with”. But there is a shorter, clearer, and all-round better way of delivering here strings to commands. Using

tr '[:lower:]' '[:upper:]' <<< $myvar

does the same thing with no need to use echo or a pipe. It also uses angle brackets, which is the whole obsessive point of this article.

Conclusion

Again, Bash proves to give you lots of options with very little. I mean, who would’ve thunk that you could do so much with two simple characters like < and >?

The thing is, we aren’t done. There are plenty more characters that bring meaning to chains of Bash instructions. Without some background, they can make shell commands look like gibberish. Hopefully, post by post, we can help you decipher them. Until next time!

9 Trends to Watch in Systems Engineering and Operations

If your job or business relies on systems engineering and operations, be sure to keep an eye on the following trends in the months ahead.

AIOps

Artificial intelligence for IT operations (AIOps) will allow for improved software delivery pipelines in 2019. This practice incorporates machine learning in order to make sense of data and keep engineers informed about both patterns and problems so they can address them swiftly. Rather than replace current approaches, however, the goal of AIOps is to enhance these processes by consolidating, automating, and updating them. A related innovation, Robotic Process Automation (RPA), presents options for task automation and is expected to see rapid and substantial growth as well.

Knative vs. AWS Lambda vs. Microsoft Azure Functions vs. Google Cloud

The serverless craze is in full swing, and shows no signs of stopping—since December 2017 alone, the technology has grown 22%, and Gartner reports that by 2020, more than 20% of global enterprises will be deploying serverless. This is a huge projected increase from the mere 5% that are currently utilizing it. The advantages of serverless are numerous…

Read more at O’Reilly

Kali Linux Is the Complete Toolbox for Penetration Testing

Every IT infrastructure offers points of attack that hackers can use to steal and manipulate data. Only one thing can prevent these vulnerabilities from being exploited by unwelcome guests: You need to preempt the hackers and identify and close the gaps. Kali Linux can help.

To maintain the security of a network, you need to check it continuously for vulnerabilities and other weak points through penetration testing. You have a clear advantage over attackers because you know the critical infrastructure components, the network topology, the points of attack, the services and servers running, and so on. Exploitation tests should look for vulnerabilities in a secure, real environment, so you can close any vulnerabilities you find – and you need to do this over and over again.

The variety of IT components dedicated to security does not make selecting a suitable tool any easier, because all possible attack vectors need to be subjected to continuous testing. Kali Linux [1] meets these requirements – and does much more.

Kali Linux at a Glance

The Debian-based Kali Linux distribution is at the heart of most penetration testing systems. …

Kali Linux is particularly resource-friendly and can be run in a virtual machine, so any notebook can become a full-fledged penetration test system with very little effort. Most administrators are familiar with classics like Wireshark and Nmap, so I will focus on the less common applications.

Security Scanners

Penetration testing begins with an overview of the infrastructure and then searches for specific weak points. To do this, you first use a security scanner. Depending on their nature and type, these tools are capable of checking entire networks or individual systems or applications for known weak points.
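As a minimal illustration (my own sketch, not from the article, and assuming a lab network you are authorized to test), a first reconnaissance pass with Nmap could look like this:

# Probe a private test subnet: -sV detects service versions, -O guesses
# the operating system, and -oN writes a readable report to a file.
# Only run this against hosts you own or are explicitly allowed to test.
sudo nmap -sV -O -oN scan-report.txt 192.168.56.0/24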

Read more at ADMIN

Finding Equilibrium in Post-Kubernetes Open-Source Computing

As Kubernetes is leveraged as the foundation for an increasing number of critical enterprise technologies and enables the new industry standard of hybrid cloud, open-source participants are reckoning with both the challenge and opportunity of working within a new collaborative digital economy.

“The scale is coming from real adoption and businesses that are moving their applications into the cloud,” said Liz Rice (pictured), technology evangelist at Aqua Security Software Ltd. and program co-chair at KubeCon + CloudNativeCon. “The end users who want to be part of the community actually want to contribute to the community.”

Rice spoke with John Furrier (@furrier) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the KubeCon + CloudNativeCon event in Seattle, Washington.

This week, theCUBE spotlights Liz Rice in our Women in Tech feature.

Read more at SiliconAngle

How To Delete a Local and Remote Git Branch

Branches are part of the everyday development process and one of the most powerful features in Git. Once a branch is merged, it serves no purpose except for historical research. It is common and recommended practice to delete the branch after a successful merge.

This guide covers how to delete local and remote Git branches.

Delete a Local Git Branch

To delete a local Git branch use the git branch command with the -d (--delete) flag:

git branch -d branch_name
Deleted branch branch_name (was 17d9aa0).
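The remote counterpart is handled with git push rather than git branch. Assuming the remote is called origin, as in most setups, it looks like this:

# Remove the same branch from the remote named origin
git push origin --delete branch_name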

Read more at Linuxize

Project EVE Promotes Cloud-Native Approach to Edge Computing

The LF Edge umbrella organization for open source edge computing, announced by The Linux Foundation last week, includes two new projects: Samsung Home Edge and Project EVE. We don’t know much about Samsung’s project for home automation, but we found out more about Project EVE, which is based on Zededa’s edge virtualization technology. Last week, we spoke with Zededa co-founder Roman Shaposhnik about Project EVE, which provides a cloud-native virtualization engine for developing and deploying containers for industrial edge computers (see below).

LF Edge aims to establish “an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” It is built around The Linux Foundation’s telecom-oriented Akraino Edge Stack, as well as its EdgeX Foundry, an industrial IoT middleware project.

Like the mostly proprietary cloud-to-edge platforms emerging from Google (Google Cloud IoT Edge), Amazon (AWS IoT), Microsoft (Azure Sphere), and most recently Baidu (Open Edge), among others, the LF Edge envisions a world where software running on IoT gateway and edge devices evolves top down from the cloud rather than from the ground up with traditional embedded platforms.

The Linux Foundation also supports numerous “ground up” embedded projects such as the Yocto Project and Iotivity, but with LF Edge it has taken a substantial step toward the cloud-centric paradigm. The touted benefits of a cloud-native approach for embedded include easier software development, especially when multiple apps are needed, and improved security via virtualized, regularly updated container apps. Cloud-native edge computing should also enable more effective deployment of cloud-based analytics on the edge while reducing expensive, high-latency cloud communications.

None of the four major cloud operators listed above are currently members of LF Edge, which poses a challenge for the organization. However, there’s already a deep roster of companies onboard, including Arm, AT&T, Dell EMC, Ericsson, HPE, Huawei, IBM, Intel, Nokia Solutions, Qualcomm, Radisys, Red Hat, Samsung, Seagate, and WindRiver (see the LF Edge announcement for the full list.)

With developers coming at the edge computing problem from both the top-down and bottom-up perspectives, often with limited knowledge of the opposite realm, the first step is agreeing on terminology. Back in June, the Linux Foundation launched an Open Glossary of Edge Computing project to address this issue. Now part of LF Edge, the Open Glossary effort “seeks to provide a concise collection of terms related to the field of edge computing.”

There’s no mention of Linux in the announcements for the LF Edge projects, all of which propose open source, OS-agnostic approaches to edge computing. Yet, there’s no question that Linux will be the driving force here.

Project EVE aims to be the Android of edge computing

Project EVE is developing an “open, agnostic and standardized architecture unifying the approach to developing and orchestrating cloud-native applications across the enterprise edge,” says the Linux Foundation. Built around an open source EVE (Edge Virtualization Engine) version of the proprietary Edge Virtualization X (EVx) engine from Santa Clara startup Zededa, Project EVE aims to reinvent embedded using Docker containers and other open source cloud-native software such as Kubernetes. Cloud-native edge computing’s “simple, standardized orchestration” will enable developers to “extend cloud applications to edge devices safely without the need for specialized engineering tied to specific hardware platforms,” says the project.

Earlier this year, Zededa joined the EdgeX Foundry project, and its technology similarly targets the industrial realm. However, Project EVE primarily concerns the higher application level rather than middleware. The project’s cloud-native approach to edge software also connects it to another LF project: the Cloud Native Computing Foundation.

In addition to its lightweight virtualization engine, Project EVE also provides a zero-trust security framework. In conversation with Linux.com, Zededa co-founder Roman Shaposhnik proposed to consign the word “embedded” to the lower levels of simple, MCU-based IoT devices that can’t run Linux. “To learn embedded you have to go back in time, which is no longer cutting it,” said Shaposhnik. “We have millions of cloud-native software developers who can drive edge computing. If you are familiar with cloud-native, you should have no problem in developing edge-native applications.”

If Shaposhnik is critical of traditional, ground-up embedded development, with all its complexity and lack of security, he is also dismissive of the proprietary cloud-to-edge solutions. “It’s clear that building silo’d end-to-end integration cloud applications is not really flying,” he says, noting the dangers of vendor lock-in and lack of interoperability and privacy.

To achieve the goals of edge computing, what’s needed is a standardized, open source approach to edge virtualization that can work with any cloud, says Shaposhnik. Project EVE can accomplish this, he says, by being the edge computing equivalent of Android.

“The edge market today is where mobile was in the early 2000s,” said Shaposhnik, referring to an era when early mobile OSes such as Palm, BlackBerry, and Windows Mobile created proprietary silos. The iPhone changed the paradigm with apps and other advanced features, but it was the far more open Android that really kicked the mobile world into overdrive.

“Project EVE is doing with edge what Android has done with mobile,” said Shaposhnik. The project’s standardized edge virtualization technology is the equivalent of Android package management and Dalvik VM for Java combined, he added. “As a mobile developer you don’t think about what driver is being used. In the same way our technology protects the developer from hardware complexity.”

Project EVE is based on Zededa’s EVx edge virtualization engine, which currently runs on edge hardware from partners including Advantech, Lanner, SuperMicro, and Scalys. Zededa’s customers are mostly large industrial or energy companies that need timely analytics, which increasingly requires multiple applications.

“We have customers who want to optimize their wind turbines and need predictive maintenance and vibration analytics,” said Shaposhnik. “There are a half dozen machine learning and AI companies that could help, but the only way they can deliver their product is by giving them a new box, which adds to cost and complexity.”

A typical edge computer may need only a handful of different apps rather than the hundreds found on a typical smartphone. Yet, without an application management solution such as virtualized containers, there’s no easy way to host them. Other open source cloud-to-edge solutions that use embedded container technology to provide apps include the Balena IoT fleet management solution from Balena (formerly Resin.io) and Canonical’s container-like Ubuntu Core distribution.

Right now, the focus is on getting the open source version of EVx out the door. Project EVE plans to release a 1.0 version of EVE in the second quarter, along with an SDK for developing EVE edge containers. An app store platform will follow later in the year. More information may be found in this Zededa blog post.

Learn more about LF Edge 

4 Ways To Calculate A Running Total With SQL

Calculating a running total/rolling sum in SQL is a useful skill to have. 

It can often come in handy for reporting and even when developing applications. Sometimes your users might want to see a running total of the points they have gained or perhaps the money they have earned. Like many problems in SQL, there are multiple ways you can solve this problem.

You can use analytic functions, self-joins, or an aggregate table that tracks the running sum. Here are a few examples. (Also, skip down to the bottom if you just want to watch these explanations in video form.)

Using An Analytic Function

Using an analytic function is the easiest way to calculate a running total. An analytic function lets you partition data by a specific field. For instance, in this case, we can break down the rolling sum by driver_id and month. This will give us the running total by customer and month, so every month will start again at 0.
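As a rough sketch of that idea (not the article’s own query; the rides table and the fare and ride_date columns are hypothetical, and it assumes sqlite3 3.25 or later for window-function support):

# Feed a window-function query to sqlite3; SUM(...) OVER (PARTITION BY ...)
# restarts the running total for every driver_id and month combination.
sqlite3 rides.db <<'SQL'
SELECT driver_id,
       month,
       fare,
       SUM(fare) OVER (PARTITION BY driver_id, month
                       ORDER BY ride_date
                       ROWS UNBOUNDED PRECEDING) AS running_total
FROM rides
ORDER BY driver_id, month, ride_date;
SQL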

Read more at Towards Data Science

Bash’s Built-in printf Function

Even if you’re already familiar with the printf command, if you got your information via “man printf” you may be missing a couple of useful features that are provided by bash’s built-in version of the standard printf(1) command.

If you didn’t know bash had its own version of printf, then you didn’t heed the note in the man page for the printf(1) command:

NOTE: your shell may have its own version of printf, which usually supersedes the version described here. Please refer to your shell’s documentation for details about the options it supports.

You did read the man page, didn’t you? I must confess, I’d used printf for quite a while before I realized bash had its own.

To find the documentation for the built-in version of printf, just search for “printf” in the bash man page.

In case you’re completely unfamiliar with the printf command, and similar functions in other languages, a couple quick examples should get you up to speed:
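For instance, here is a quick sketch of my own (not necessarily the examples the article uses) showing two things only the built-in can do: the -v option stores the result in a variable instead of printing it, and %q quotes a string so it can safely be reused as shell input.

# -v writes the formatted string into a variable rather than to stdout
printf -v greeting 'Hello, %s!' "$USER"
echo "$greeting"

# %q escapes the argument so it can be pasted back into the shell safely
printf '%q\n' 'a file name with spaces & symbols'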

Read more at Linux Journal

7 Reliability Questions Engineering Managers Need to Ask Their Teams

Modern software teams face no shortage of edge cases and variations across the service categories and tiers of their ever-evolving architectures. In the midst of leading a team through the day-to-day firefighting, it can be difficult to see the forest for the trees. But as managers, we know our teams face similar trials: defects and regressions, capacity problems, operational debt and dangerous workloads affect all of us.

And then there is the complexity of scale, something we know about first hand. The New Relic platform includes more than 300 unique services and petabytes of SSD storage that handle at least 40 million HTTP requests, write 1.5 billion new data points, and process trillions of events … every minute. The platform is maintained by more than 50 agile teams performing multiple production releases a week. To cope with serious scale like this, engineering teams must be nimble and fast moving. Their managers must also ensure that their teams adhere to reliability processes that support this kind of complexity and scale.

Read more at The New Stack

Custom Linux Installations

Customize your Linux installation and gain working knowledge of your system at the same time.

Most Linux users are content with a standard installation of their distribution of choice. However, many prefer a custom installation. They may simply prefer to do things their way without dozens of post-install tweaks. Others may want to know exactly what they are installing as a requirement for security. Still others may want a consistent installation for multiple machines or to learn more about their operating system step by step. Linux offers tools for all these purposes.

Admittedly, most of these tools are for major distributions. A survey of these tools shows that many are for time-tested distros like Debian or openSUSE. If you want a custom install of, say, KDE neon or Puppy Linux, you may not find a ready-made solution. But among the major distributions, you are likely to find multiple solutions. Read on for some of the main options.

Roll Your Own Desktops

Traditionally, many distributions install with a default desktop. For instance, Fedora and Ubuntu default to Gnome, and Mageia to KDE Plasma. Users who prefer another desktop can choose from a wide range after installation, although often they should think twice, because such distros often install with a range of utilities designed for their default desktop.
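For example (a hedged sketch; package names vary by distribution and release), on a Debian or Ubuntu system you could add KDE Plasma alongside the default desktop with apt and then pick the session you want at the login screen:

# Install the Plasma desktop next to the existing one; the display manager
# will offer a session chooser at the next login.
sudo apt update
sudo apt install kde-plasma-desktop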

Read more at Linux Pro