
How Zowe Is Bringing the Mainframe into the Modern Age

The goal of the Zowe project is to create a framework that enables developers to bring their latest tools to work on the mainframe. IBM, Broadcom, and Rocket Software worked together and open sourced their own technologies to achieve this. According to the project website, Zowe provides various components, including an app framework and a command-line interface, which lets users interact with the mainframe remotely and use integrated development environments, shell commands, Bash scripts, and other tools. It also provides utilities and services to help developers quickly learn how to support and build z/OS applications.

“What Zowe allows both end users and developers to do is enable a newer generation of users and developers to have access to all the critical data within all these financial, retail, and insurance systems living on the mainframe,” she said.

The fact is that almost all of the critical mainframe applications were written decades ago. Some of these companies are more than 100 years old, and they are using mainframe systems for their mission-critical workloads. So, what Zowe is trying to achieve is to open source some of these technologies to help companies bring their existing workloads into the modern day. This will also allow them to attract a new generation of users and developers.

Read more and watch the complete video interview at The Linux Foundation.

15 Docker Commands You Should Know

In this article we’ll look at 15 Docker CLI commands you should know. If you haven’t yet, check out the rest of this series on Docker concepts, the ecosystem, Dockerfiles, and keeping your images slim. In Part 6 we’ll explore data with Docker. I’ve got a series on Kubernetes in the works too, so follow me to make sure you don’t miss the fun!

There are about a billion Docker commands (give or take a billion). The Docker docs are extensive, but overwhelming when you’re just getting started. In this article I’ll highlight the key commands for running vanilla Docker.

Overview

Recall that a Docker image is made of a Dockerfile + any necessary dependencies. Also recall that a Docker container is a Docker image brought to life. To work with Docker commands, you first need to know whether you’re dealing with an image or a container.

  • A Docker image either exists or it doesn’t.
  • A Docker container either exists or it doesn’t.
  • A Docker container that exists is either running or it isn’t.
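As a quick sketch of how you might check each of those three states from the command line (not from the original article, just an illustration using the standard Docker CLI):

```shell
# Does the image exist locally? Lists images you have pulled or built.
docker images

# Does the container exist? -a lists containers whether running or stopped.
docker ps -a

# Is the container running? Without -a, only running containers appear.
docker ps
```

Each command answers one of the three questions above, which is usually the first step in figuring out which Docker command you need next.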

Read more at Towards Data Science

What Open Source Really Means Today

The state of open source over the course of the past few decades has certainly changed. IBM last year purchased Red Hat, for example. But the original open source spirit of sharing remains intact — though the extent to which that is the case remains a subject of debate.

What open source really means today and how it has evolved were major themes of a podcast Alex Williams, founder and editor-in-chief of The New Stack, recently hosted at KubeCon + CloudNativeCon in Seattle. Among the open source thought leaders on hand to offer their observations were:

The differences between the open source culture of more than 20 years ago, with its underground feel, and today’s explosion of commercial software enterprises built on open source code are stark indeed.

Read more at The New Stack

The Linux Command-Line Cheat Sheet

When coming up to speed as a Linux user, it helps to have a cheat sheet that can help introduce you to some of the more useful commands.

In the tables below, you’ll find sets of commands with simple explanations and usage examples that might help you or Linux users you support become more productive on the command line.

Getting familiar with your account

These commands will help new Linux users become familiar with their Linux accounts.
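The article’s tables aren’t reproduced here, but a minimal sketch of the kind of account-related commands it covers might look like this:

```shell
# Show the username you are logged in as
whoami

# Show your numeric user ID, group ID, and group memberships
id

# Show the groups your account belongs to
groups

# Show your home directory and default shell
echo "$HOME"
echo "$SHELL"
```

Running these on a new account is a quick way to confirm who you are, what you can access, and where your files live.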

Read more at Network World

Pine64 to Launch Open Source Phone, Laptop, Tablet, and Camera

At FOSDEM last weekend, California-based Linux hacker board vendor Pine64 previewed an extensive lineup of open source hardware it intends to release in 2019. Surprisingly, only two of the products are single board computers.

The Linux-driven products will include a PinePhone development kit based on the Allwinner A64. There will be a second, more consumer-focused Pinebook laptop — a Rockchip RK3399 based, 14-inch Pinebook Pro — and an Allwinner A64-based, 10.1-inch PineTab tablet. Pine64 also plans to release an Allwinner S3L-driven IP camera system called the CUBE and a Roshambo Retro-Gaming case that supports Pine64’s Rock64 and RockPro64, as well as the Raspberry Pi.

The SBC entries are a Pine H64 Model B that will replace the larger, but similarly Allwinner H6 based, Model A version and will add WiFi/Bluetooth. There will also be a third rev of the popular, RK3328 based Rock64 board that adds Power-over-Ethernet support.

The launch of the phone, laptop, tablet, and camera represents the most ambitious expansion to date by an SBC vendor to new open source hardware form factors. As we noted last month in our hacker board analysis piece, community-based SBC projects are increasingly specializing to survive in today’s Raspberry Pi dominated market. In a Feb. 1 Tom’s Hardware story, RPi Trading CEO Eben Upton confirmed our speculation that a next generation Raspberry Pi 4 that moves beyond 40nm fabrication will not likely ship until 2020. That offers a window of opportunity for other SBC vendors to innovate.

It’s a relatively short technical leap — but a larger marketing hurdle — to move from a specialized SBC to a finished consumer electronics or industrial device. Still, we can expect a few more vendors to follow Pine64’s lead in building on their SBCs, Linux board support packages, and hacker communities to launch more purpose-built consumer electronics and industrial gear.

Already, community projects have begun offering a more diverse set of enclosures and other accessories to turn their boards into mini-PCs, IoT gateways, routers, and signage systems. Meanwhile, established embedded board vendors are using their community-backed SBC platforms as a foundation for end-user products. Acer subsidiary Aaeon, for example, has spun off its UP boards into a variety of signage systems, automation controllers, and AI edge computing systems.

So far, most open source, Linux phone and tablet alternatives have emerged from open source software projects, such as Mozilla’s Firefox OS, the Ubuntu project’s Ubuntu Phone, and the Jolla phone. Most of these alternative mobile Linux projects have either failed, faded, or never taken off.

Some of the more recent Linux phone projects, such as the PiTalk and ZeroPhone, have been built around the Raspberry Pi platform. The PinePhone and PineTab would be even more open source given that the mainboards ship with full schematics.

Unlike many hacker board projects, the Pine64 products offer software tied to mainline Linux. This is easier to do with the Rockchip designs, but it’s been a slower road to mainline for Allwinner. Work by Armbian and others has now brought several Allwinner SoCs up to speed.

Working from established hardware and software platforms may offer a stronger foundation for launching mobile Android alternatives than a software-only project. “The idea, in principle, is to build convergent device-ecosystems (SBC, Module, Laptop/Tablet/ Phone / Other Devices) based on SOCs that we’ve already have developers engaged with and invested in,” says the Pine64 blog announcement.

Here’s a closer look at Pine64’s open hardware products for 2019:

PinePhone Development Kit

PinePhone Development Kit — Based on the quad-core -A53 Allwinner A64 driven SoPine A64 module, the PinePhone will run mainline Linux and support alternative mobile platforms such as UBPorts, Maemo Leste, PostmarketOS, and Plasma Mobile. It can also run Unity 8 and KDE Plasma with Lima. This upgradable, modular phone kit will be available soon in limited quantity and will be spun off later this year or in 2020 into an end-user phone with a target price of $149.

The PinePhone kit includes 2GB LPDDR3, 32GB eMMC, and a small 1440 x 720-pixel LCD screen. There’s a 4G LTE module with Cat 4 150Mb downlink, a battery, and 2- and 5MP cameras. Other features include WiFi/BT, microSD, HDMI, MIPI I/O, sensors, and privacy hardware switches.

Pinebook Pro — Like many of the upcoming Pine64 products, the original Pinebooks are limited edition developer systems. The Pinebook Pro, however, is aimed at a broader audience that might be considering a Chromebook. This second-gen Pro laptop will not replace the $99 and up 11.6-inch version of the Pinebook. The original 14-inch version may receive an upgrade to make it more like the Pro.

The $199 Pinebook Pro advances from the quad-core, Cortex-A53 Allwinner A64 to a hexa-core -A53 and -A72 Rockchip RK3399. It supports mainline Linux and BSD.

The more advanced features include a higher-res 14-inch, 1080p screen, now with IPS, as well as twice the RAM (4GB LPDDR4). It also offers four times the storage at 64GB, with a 128GB option for registered developers. Other highlights include USB 3.0 and 2.0 ports, a USB Type-C port that supports DP-like 4K@60Hz video, a 10,000 mAh battery, and an improved 2-megapixel camera. There’s also an option for an M.2 slot that supports NVMe storage.

PineTab — The PineTab is like a slightly smaller, touchscreen-enabled version of the first-gen Pinebook, but with the keyboard optional instead of built-in. The magnetically attached keyboard has a trackpad and can fold up to act as a screen cover.

Like the original Pinebooks, the PineTab runs Linux or BSD on an Allwinner A64 with 2GB of LPDDR3 and 16GB eMMC. The 10-inch IPS touchscreen is limited to 720p resolution. Other features include WiFi/BT, USB, micro-USB, microSD, speaker, mic, and dual cameras.

Pine64 notes that touchscreen-ready Linux apps are currently in short supply. The PineTab will soon be available for $79, or $99 with the keyboard.

The CUBE — This “early concept” IP camera runs on the Allwinner S3L — a single-core, Cortex-A7 camera SoC. It ships with a MIPI-CSI connected, 8MP Sony IMX179 CMOS camera with an M12 mount for adding different lenses.

The CUBE offers 64MB or 128MB RAM, a WiFi/BT module, plus a 10/100 Ethernet port with Power-over-Ethernet (PoE) support. Other features include USB, microSD, and 32-pin GPIO. Target price: about $20.

Roshambo Retro-Gaming — This retro gaming case and accessory set from Pine64’s Chinese partner Roshambo will work with Pine64’s Rock64 SBC, which is based on the quad-core -A53 Rockchip RK3328, or its RK3399 based RockPro64. It can also accommodate a Raspberry Pi. The $30 Super Famicom-inspired case will ship with an optional $13 gaming controller set. Other features include buttons, switches, a SATA slot, and cartridge-shaped 128GB ($25) or 256GB ($40) SSDs.

Rock64 Rev 3 — Pine64 says it will continue to focus primarily on SBCs, although the only 2019 products it is disclosing are updates to existing designs. The Rock64 Rev 3 improves upon Pine64’s RK3328-based RPi lookalike, which it says has been its most successful board yet. New features include PoE, RTC, improved RPi 2 GPIO compatibility, and support for high-speed microSD cards. Pricing stays the same.

Pine H64 Model B — The Pine H64 Model B will replace the currently unavailable Pine H64 Model A, which shipped in limited quantities. The board trims down to a Rock64 (and Raspberry Pi) footprint, enabling use of existing cases, and adds WiFi/BT. It sells for $25 (1GB LPDDR3 RAM), $35 (2GB), and $45 (3GB).

And, Ampersand, and & in Linux

Take a look at the tools covered in the three previous articles, and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple, and understanding what mkdir, touch, and find do (make a new directory, update a file, and find a file in the directory tree, respectively) in isolation is easy.

But understanding what

mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &

does, and why we would write a command line like that is a whole different story.

It pays to look more closely at the signs and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently.

In this article and the next, we’ll be looking at the ampersand (&) and its close friend, the pipe (|), and see how they can mean different things in different contexts.

Behind the Scenes

Let’s start simple and see how you can use & as a way of pushing a command to the background. The instruction:

cp -R original/dir/ backup/dir/

copies all the files and subdirectories in original/dir/ into backup/dir/. So far, so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours.

However, using:

cp -R original/dir/ backup/dir/ &

pushes the process to the background courtesy of the final &. This frees you to continue working on the same terminal or even to close the terminal and still let the process finish up. Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of echo or ls), it will continue to do so, even though it is being executed in the background.

When you push a process into the background, Bash will print out a number. This number is the PID, or process ID. Every process running on your Linux system has a unique process ID, and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later.
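Bash also stores that PID in the special parameter $!, so you can capture it in a script instead of copying it down by hand. A small sketch, using a throwaway sleep job in place of a real copy:

```shell
# Push a long-running job into the background; Bash prints "[1] <PID>"
sleep 60 &

# $! expands to the PID of the most recently backgrounded process
pid=$!
echo "Background job PID: $pid"

# That PID can be used later to pause, resume, or terminate the job
kill "$pid"
```

This is how shell scripts typically keep track of the background jobs they launch.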

In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them:

  • jobs shows you the processes running in your current terminal, whether in the background or the foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process:

    $ jobs 
    [1]-  Running                 cp -i -R original/dir/* backup/dir/ & 
    [2]+  Running                 find . -iname "*jpg" > backup/dir/images.txt &
    
  • fg brings a job from the background to the foreground so you can interact with it. You tell fg which process you want to bring to the foreground with a percentage symbol (%) followed by the number associated with the job that jobs gave you:

    $ fg %1 # brings the cp job to the foreground
    cp -i -R original/dir/* backup/dir/
    

    If the job was stopped (see below), fg will start it again.

  • You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn’t abort the action; it pauses it. When you start it again with fg or bg, it will continue from where it left off…

    …Except for sleep: the time a sleep job spends paused still counts toward its total. This is because sleep takes note of the clock time when it was started, not of how long it has actually been running. This means that if you run sleep 30 and pause it for more than 30 seconds, once you resume, sleep will exit immediately.

  • The bg command pushes a job to the background and resumes it again if it was paused:
    $ bg %1
    [1]+ cp -i -R original/dir/* backup/dir/ &
    

As mentioned above, you won’t be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working.

To manage background processes from another terminal you need another set of tools. For example, you can tell a process to stop from a different terminal with the kill command:

kill -s STOP <PID>

And you know the PID because that is the number Bash gave you when you started the process with &, remember? Oh! You didn’t write it down? No problem. You can get the PID of any running process with the ps (short for processes) command. So, using

ps | grep cp

will show you all the processes containing the string “cp”, including the copying job we are using for our example. It will also show you the PID:

$ ps | grep cp
14444 pts/3    00:00:13 cp

In this case, the PID is 14444, and it means you can stop the background copying with:

kill -s STOP 14444

Note that STOP here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process.

To start the paused process again, you can use the CONT signal:

kill -s CONT 14444
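If you want to confirm what these signals are doing, ps can report the process state: T for stopped, S for sleeping. A small sketch, using a disposable sleep job instead of the article’s cp example:

```shell
# Start a disposable background job and grab its PID
sleep 60 &
pid=$!

kill -s STOP "$pid"     # pause it
ps -o stat= -p "$pid"   # state starts with "T" (stopped)

kill -s CONT "$pid"     # resume it
ps -o stat= -p "$pid"   # state starts with "S" (sleeping)

kill -s TERM "$pid"     # clean up
```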

There is a good list of many of the main signals you can send a process here. According to that, if you wanted to terminate the process, not just pause it, you could do this:

kill -s TERM 14444

If the process refuses to exit, you can force it with:

kill -s KILL 14444

This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources.

In any case, if you are not sure you have the correct PID, add the x option to ps:

$ ps x| grep cp
14444 pts/3    D      0:14 cp -i -R original/dir/Hols_2014.mp4  
  original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4 
  original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/ 

And you should be able to see what process you need.

Finally, there is a nifty tool that combines ps and grep all into one:

$ pgrep cp
8 
18 
19 
26 
33 
40 
47 
54 
61 
72 
88 
96 
136 
339 
6680 
13735 
14444

Lists all the PIDs of processes that contain the string “cp”.

In this case, it isn’t very helpful, but this…

$ pgrep -lx cp
14444 cp

… is much better.

In this case, -l tells pgrep to show you the name of the process and -x tells pgrep you want an exact match for the name of the command. If you want even more details, try pgrep -ax command.

Next time

Putting an & at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them.

One last thing before we leave: long-running processes that work in the background, detached from any terminal, are what are known as daemons in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go.

As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we’ll be back next week with more on the matter.

Read more:

Linux Tools: The Meaning of Dot

Understanding Angle Brackets in Bash

More About Angle Brackets in Bash

Learn to Use curl Command with Examples

The curl command is used to transfer files to and from a server. It supports a number of protocols, including HTTP, HTTPS, FTP, FTPS, IMAP, IMAPS, DICT, FILE, GOPHER, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, and TFTP.

Curl also supports a lot of features like proxy support, user authentication, FTP upload, HTTP post, SSL connections, cookies, and file transfer pause and resume. There are around 120 different options that can be used with curl, and in this tutorial we are going to discuss some important curl commands with examples.

Download or visit a single URL

To download a file using curl over HTTP, FTP, or any other supported protocol, use the following command structure:

$ curl https://linuxtechlab.com

If curl can’t identify the protocol being used, it will default to HTTP. We can also store the output of the command to a file with the ‘-o’ option, or redirect it using ‘>’.
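Here is a runnable sketch of both approaches. It uses curl’s file:// protocol support so it works without network access; the file names and paths are made up for the example:

```shell
# Create a small local file to fetch (file:// is one of curl's
# supported protocols, so this example works offline)
echo "hello from curl" > /tmp/example.txt

# Save the response to a named file with the -o option
curl -s -o /tmp/copy1.txt "file:///tmp/example.txt"

# Or redirect curl's standard output with the shell's > operator
curl -s "file:///tmp/example.txt" > /tmp/copy2.txt

cat /tmp/copy1.txt /tmp/copy2.txt
```

Both commands produce the same file; -o is handled by curl itself, while > is ordinary shell redirection.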

Read more at Linux Tech Lab

Outlaw Shellbot Infects Linux Servers to Mine for Monero

The Outlaw group is conducting an active campaign which is targeting Linux systems in cryptocurrency mining attacks.

On Tuesday, the JASK Special Ops research team disclosed additional details (.PDF) of the attack wave which appears to focus on seizing infrastructure resources to support illicit Monero mining activities.

The campaign uses a refined version of Shellbot, a Trojan which carves a tunnel between an infected system and a command-and-control (C2) server operated by threat actors.

The backdoor is able to collect system and personal data, terminate or run tasks and processes, download additional payloads, open remote command line shells, send stolen information to a C2, and also receive additional malware payloads from controllers. …

The threat actors target organizations through denial-of-service (DoS) and SSH brute-force techniques. If servers are compromised, their strength is added to the Outlaw botnet to carry on the campaign.

Read more at ZDNet

Exploiting systemd-journald: Part 1

This is part one in a multipart series (read Part 2 here) on exploiting two vulnerabilities in systemd-journald, which were published by Qualys on January 9th. Specifically, the vulnerabilities were:

  • a user-influenced size passed to alloca(), allowing manipulation of the stack pointer (CVE-2018-16865)
  • a heap-based memory out-of-bounds read, yielding memory disclosure (CVE-2018-16866)

The affected program, systemd-journald, is a system service that collects and stores logging data. The vulnerabilities discovered in this service allow user-generated log data to manipulate memory such that an attacker can take over systemd-journald, which runs as root. Exploitation of these vulnerabilities thus allows for privilege escalation to root on the target system.

As Qualys did not provide exploit code, we developed a proof-of-concept exploit for our own testing and verification. There are some interesting aspects that were not covered by Qualys’ initial publication, such as how to communicate with the affected service to reach the vulnerable component, and how to control the computed hash value that is actually used to corrupt memory. We thought it was worth sharing the technical details for the community.

As the first in our series on this topic, the objective of this post is to provide the reader with the ability to write a proof-of-concept capable of exploiting the service with Address Space Layout Randomization (ASLR) disabled. In the interest of not posting an unreadably long blog, and also not handing sharp objects to script-kiddies before the community has had a chance to patch, we are saving some elements for discussion in future posts in this series, including details on how to control the key computed hash value.

Read more at Capsule8

Red Hat Launches CodeReady Workspaces Kubernetes IDE

Red Hat announced the general availability of its CodeReady Workspaces integrated developer environment (IDE) on Feb. 5, providing users with a Kubernetes-native tool for building and collaborating on application development.

In contrast with other IDEs, Red Hat CodeReady Workspaces runs inside of a Kubernetes cluster, providing developers with integrated capabilities for cloud-native deployments. Kubernetes is an open-source container orchestration platform that enables organizations to deploy and manage application workloads. Red Hat CodeReady Workspaces is tightly integrated with the company’s OpenShift Kubernetes container platform, enabling development teams with an environment to develop and deploy container applications.

Red Hat CodeReady Workspaces is based on the open-source Eclipse Che IDE project, as well as technologies that Red Hat gained via the acquisition of Codenvy in May 2017.

Read more at eWeek