
Networking Tool Comics!

I LOVE computer networking (it’s what I spent a big chunk of the last few years at work doing), but getting started with all the tools was originally a little tricky! For example – what if you have the IP address of a server and you want to make an HTTPS connection to it and check that it has a valid certificate? But you haven’t changed DNS to resolve to that server yet (because you don’t know if it works!), so you need to use the IP address. If you do curl https://1.2.3.4/, curl will tell you that the certificate isn’t valid (because it’s not valid for 1.2.3.4). So you need to know to do curl https://jvns.ca --resolve jvns.ca:443:104.198.14.52.

I know how to use curl --resolve because my coworker told me how. And I learned that to find out when a cert expires you can do openssl x509 -in YOURCERT.pem -text -noout the same way. So the goal with this zine is basically to be “your very helpful coworker who gives you tips about how to use networking tools” in case you don’t have that person.
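For quick reference, here are both tricks in one place (the hostname and IP are the ones from the example above; YOURCERT.pem is a placeholder, and the openssl s_client variant is an extra sketch for checking a live server rather than a local file):

curl https://jvns.ca --resolve jvns.ca:443:104.198.14.52

openssl x509 -in YOURCERT.pem -text -noout

echo | openssl s_client -connect 104.198.14.52:443 -servername jvns.ca 2>/dev/null | openssl x509 -noout -dates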

Read more at Julia Evans

Make Container Management Easy With Cockpit

Learn how to use Cockpit for Linux administration tasks in this tutorial from our archives.

If you administer a Linux server, you’ve probably been in search of a solid administration tool. That quest has probably taken you to such software as Webmin and cPanel. But if you’re looking for an easy way to manage a Linux server that also includes Docker, one tool stands above the rest for that particular purpose: Cockpit.

Why Cockpit? Because it includes the ability to handle administrative tasks such as:

  • Connect and manage multiple machines

  • Manage containers via Docker

  • Interact with Kubernetes or OpenShift clusters

  • Modify network settings

  • Manage user accounts

  • Access a web-based shell

  • View system performance information by way of helpful graphs

  • View system services and log files

Cockpit can be installed on Debian, Red Hat, CentOS, Arch Linux, and Ubuntu. Here, I will focus on installing the system on an Ubuntu 16.04 server that already includes Docker.

Out of the list of features, the one that stands out is the container management. Why? Because it makes installing and managing containers incredibly simple. In fact, you might be hard-pressed to find a better container management solution.
With that said, let’s install this solution and see just how easy it is to use.

Installation

As I mentioned earlier, I will be installing Cockpit on an instance of Ubuntu 16.04, with Docker already running. The steps for installation are quite simple. The first thing you must do is log into your Ubuntu server. Next you must add the necessary repository with the command:

sudo add-apt-repository ppa:cockpit-project/cockpit

When prompted, hit the Enter key on your keyboard and wait for the prompt to return. Once you are back at your bash prompt, update apt with the command:

sudo apt-get update

Install Cockpit by issuing the command:

sudo apt-get -y install cockpit cockpit-docker

After the installation completes, it is necessary to start the Cockpit service and then enable it so it auto-starts at boot. To do this, issue the following two commands:

sudo systemctl start cockpit
sudo systemctl enable cockpit

That’s all there is to the installation.
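If you want a quick sanity check before opening a browser, something like the following should show the service running and listening on port 9090 (exact output varies by release; on some versions Cockpit is socket-activated, so check cockpit.socket instead):

sudo systemctl status cockpit
sudo ss -tlnp | grep 9090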

Logging into Cockpit

To gain access to the Cockpit web interface, point a browser (that happens to be on the same network as the Cockpit server) to http://IP_OF_SERVER:9090, and you will be presented with a login screen (Figure 1).

Figure 1: The Cockpit login screen.

A word of warning about using Cockpit on Ubuntu: many of the tasks that can be undertaken with Cockpit require administrative access. If you log in as a standard user, you won’t be able to work with some of the tools, such as Docker. To get around that, you can enable the root user on Ubuntu. This isn’t always a good idea; by enabling the root account, you are bypassing the security system that has been in place for years. However, for the purpose of this article, I will enable the root user with the following two commands:

sudo passwd root

sudo passwd -u root 

NOTE: Make sure you give the root account a very challenging password.

Should you want to revert this change, you only need issue the command:

sudo passwd -l root

With other distributions, such as CentOS and Red Hat, you will be able to log into Cockpit with the username root and the root password, without having to go through the extra hoops described above.
If you’re hesitant to enable the root user, you can always pull down images from the server terminal (using the command docker pull IMAGE_NAME, where IMAGE_NAME is the image you want to pull). That would add the image to your Docker server, which can then be managed by a regular user. The only caveat to this is that the regular user must be added to the docker group with the command:

sudo usermod -aG docker USER

Where USER is the actual username to be added to the group. Once you’ve done that, log out, log back in, and then restart Docker with the command:

sudo service docker restart

Now the regular user can start and stop the added Docker images/containers without having to enable the root user. The only caveat is that the user will not be able to add new images via the Cockpit interface.
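As a quick illustration (the image name here is only an example), the regular user can confirm the group change took effect and pre-pull an image that Cockpit can later start:

id -nG | grep -w docker
docker pull centos:latest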

Using Cockpit

Once you’ve logged in, you will be treated to the Cockpit main window (Figure 2).

Figure 2: The Cockpit main window.

You can go through each of the sections to check on the status of the server, work with users, etc., but we want to go right to the containers. Click on the Containers section to display the currently running containers as well as the available images (Figure 3).

Figure 3: Managing containers is incredibly simple with Cockpit.

To start an image, simply locate the image and click the associated start button. From the resulting popup window (Figure 4), you can check all the information about the image (and adjust as needed), before clicking the Run button.

Figure 4: Running a Docker image with the help of Cockpit.

Once the image is running, you can check its status by clicking on the entry under the Containers section and then Stop, Restart, or Delete the instance. You can also click Change resource limits and then adjust the Memory limit and/or the CPU priority.

Adding new images

If you have logged on as the root user, you can add new images with the help of the Cockpit GUI. From the Containers section, click the Get new image button and then, in the resulting window, search for the image you want to add. Say you want to add the latest official build of CentOS. Type centos in the search field and then, once the search results populate, select the official listing and click Download (Figure 5).

Figure 5: Adding the latest build of the official CentOS image to Docker, via Cockpit.

Once the image has downloaded, it will be available to Docker and can be run via Cockpit.

As simple as it gets

Managing Docker doesn’t get any easier. Yes, there is a caveat when working with Cockpit on Ubuntu, but if it’s your only option, there are ways to make it work. With the help of Cockpit, you can not only easily manage Docker images, you can do so from any web browser that has access to your Linux server. Enjoy your newfound Docker ease.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Recursive Programming

Despite often being introduced early on in most ventures into programming, the concept of recursion can seem strange and potentially off-putting upon first encountering it. It seems almost paradoxical: how can we find a solution to a problem using the solution to the same problem?

Believe it or not, once we get to grips with it, some problems are easier to solve using recursion than they are to solve using iteration. Sometimes recursion is more efficient, and sometimes it is more readable; sometimes recursion is neither faster nor more readable, but quicker to implement. There are data structures, such as trees, that are well-suited to recursive algorithms. There are even some programming languages with no concept of a loop — purely functional languages such as Haskell depend entirely on recursion for iterative problem solving. The point is simple: You don’t have to understand recursion to be a programmer, but you do have to understand recursion to start to become a good programmer. In fact, I’d go as far as to say that understanding recursion is part of being a good problem solver, all programming aside!

The Essence of Recursion

In general, with recursion we try to break down a more complex problem into a simple step towards the solution and a remainder that is an easier version of the same problem. We can then repeat this process, taking the same step towards the solution each time, until we reach a version of our problem with a very simple solution (referred to as a base case). The simple solution to our base case, aggregated with the steps we took to get there, then forms a solution to our original problem.
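To make that concrete, here is a minimal sketch in shell (any language would do): a recursive factorial takes one simple step, multiplying by n, and hands the remainder, the factorial of n-1, back to itself until it reaches the base case.

factorial() {
    local n=$1
    if [ "$n" -le 1 ]; then
        # base case: a version of the problem with a very simple solution
        echo 1
    else
        # one simple step (multiply by n) plus an easier version of the same problem
        echo $(( n * $(factorial $((n - 1))) ))
    fi
}

factorial 5    # prints 120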

Read more at Towards Data Science

The Monty Hall Problem

The original and simplest scenario of the Monty Hall problem is this: You are in a prize contest, and in front of you there are three doors (A, B, and C). Behind one of the doors is a prize (a car), while behind the others is a loss (a goat). You first choose a door (let’s say door A). The contest host then opens another door behind which there is a goat (let’s say door B), and then he asks whether you will stay with your original choice or switch to the other door. The question behind this is: which is the better strategy?

The basis of the answer lies in related and unrelated events. The most common answer is that it doesn’t matter which strategy you choose because it is a 50/50 chance – but it is not. The 50/50 assumption is based on the idea that the first choice (one of three doors) and the second choice (stay or switch doors) are unrelated events, like flipping a coin two times. But in reality, those are related events, and the second event depends on the first.

At the first step, when you choose one of the three doors, the probability that you picked the right door is 33.33%; in other words, there is a 66.67% chance that you are on the wrong door. The fact that in the second step you are given a choice between your door and the other one doesn’t change the fact that you most likely started with the wrong door. Therefore, it is better to switch doors in the second step.
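If the two-thirds figure still feels wrong, a quick simulation makes it concrete. This is a rough sketch in shell (the trial count is arbitrary): staying wins only when the first pick was already correct, while switching wins whenever it was not.

trials=10000
stay_wins=0
switch_wins=0
for ((i = 0; i < trials; i++)); do
    prize=$((RANDOM % 3))    # door hiding the car
    pick=$((RANDOM % 3))     # contestant's first choice
    if ((pick == prize)); then
        ((stay_wins++))      # staying wins: the first pick was right
    else
        ((switch_wins++))    # switching wins: the first pick was wrong
    fi
done
echo "stay:   $stay_wins / $trials"      # roughly 1/3 of trials
echo "switch: $switch_wins / $trials"    # roughly 2/3 of trials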

Read more at There’s Something About R

New Ports Bring Linux to Arm Laptops, Android to the Pi

Like life itself, software wants to be free. In our increasingly open source era, software can more easily disperse into new ecosystems. From open source hackers fearlessly planting the Linux flag on the Sony Playstation back in the aughts to standard Linux apps appearing on Chromebooks and on Android-based Galaxy smartphones (Samsung’s DeX), Linux continues to break down barriers.

The latest Linux-related ports include an AArch64-Laptops project that enables owners of Windows-equipped Arm laptops and tablets to load Ubuntu. There’s also a Kickstarter project to develop a Raspberry Pi friendly version of Google’s low-end Android 9 Pie Go stack. Even Windows is spreading its wings. A third-party project has released a WoA installer that enables a full Windows 10 image to run on the Pi.

Ubuntu to Arm laptops

The practice of replacing Windows with Linux on Intel-based computers has been around for decades, but the arrival of Arm-based laptops has complicated matters. Last year, Microsoft partnered with Qualcomm to release the lightweight Windows 10 S on the Asus NovaGo convertible laptop and the HP Envy x2 and Lenovo Miix 630 2-in-1 tablets, all powered by a Snapdragon 835 SoC.

Reviews have been mixed, with praise for the longer battery life, but criticism about sluggish performance. Since the octa-core, 10nm fabricated Snapdragon 835 is designed to run on the Linux-based Android — it also supports embedded Linux — Linux hackers naturally decided that they could do better.

As reported by Phoronix, AArch64-Laptops has posted Ubuntu 18.04 LTS images for all three of the above systems. As noted by Liliputing, the early release lacks support for WiFi, on-board storage, or hardware-accelerated graphics, and the touchpad doesn’t work on the Asus NovaGo.

The WiFi and storage issues should be solved in the coming months and accelerated graphics should be theoretically possible thanks to the open source Freedreno GPU driver project, says Phoronix. It’s unclear if AArch64-Laptops can whip up Ubuntu builds for more powerful Arm Linux systems like the Snapdragon 850 based Samsung Galaxy Book 2 and Lenovo Yoga C630.

Liliputing notes that Arm Linux lovers can also try out the Linux-driven, Rockchip RK3399 based Pinebook laptop. Later this year, Pine64 will release a consumer-grade Pinebook Pro.

Android Go to Raspberry Pi

If you like a double helping of pie, have we got a Kickstarter project for you. As reported by Geeky Gadgets, an independent group called RaspberryPi DevTeam has launched a Kickstarter campaign to develop a version of Google’s new Android 9 Pie Go stack for entry-level smartphones that can run on the Raspberry Pi 3.

Assuming the campaign meets its modest $3,382 goal by April 10, there are plans to deliver a usable build by the end of the year. Pledges range from 1 to 499 Euros.

The project will use AOSP-based code from Android 9 Pie Go, which was released last August. Go is designed for low-end phones with only 1GB RAM.

RaspberryPi DevTeam was motivated to launch the project because current Android stacks for the Raspberry Pi “normally have bugs, are unstable and run slow,” says the group. That has largely been true since hackers began attempting the feat four years ago with the quad-core, Cortex-A7 Raspberry Pi 2. Early attempts have struggled to give Android its due on a 1GB RAM SBC, even with the RPi 3B and 3B+.

The real-time focused RTAndroid has had the most success, and there have been other efforts like the unofficial, Android 7.1.2 based LineageOS 14.1 for the RPi 3. Last year, an RTAndroid-based, industrial focused emteria.OS stack arrived with more impressive performance.

A MagPi hands-on last summer was impressed with the stack, which it called “the first proper Android release running on a Raspberry Pi 3B+.” MagPi continues: “Finally there’s a proper way to install full Android on your Raspberry Pi.”

Available in free evaluation (registration required) and commercial versions, emteria.OS uses F-Droid as an open source stand-in for Google Play. The MagPi hands-on runs through an installation of Netflix and notes the availability of apps including NewPipe (YouTube), Face Slim (Facebook), and Terminal Emulator.

All these solutions should find it easier to run on next year’s Raspberry Pi 4. Its SoC will move from the current 40nm process to something larger than 7nm, but no larger than 28nm, according to RPi Trading CEO Eben Upton in a Feb. 11 Tom’s Hardware post. The SBC will have “more RAM, a faster processor, and faster I/O,” but will be the same size and price as the RPi 3B+, says the story. Interestingly, it was former Google CEO Eric Schmidt who convinced Upton and his crew to retain the $35 price for the RPi 2. The lesson seems to have stuck.

Windows 10 on RPi 3

As far back as the Raspberry Pi 2, Microsoft announced it would support the platform with its slimmed down Windows 10 IoT, which works better on the new 64-bit RPi 3 models. But why use a crippled version of Windows for low-power IoT when you could use Raspbian?

The full Windows 10 should draw more interest, and that’s what’s promised by the WOA-Project with its new WoA-Installer for the RPi 3 or 3B+. According to Windows Latest, the open source WoA (Windows on Arm) Installer was announced in January following an earlier WoA release for the Lumia 950 phones.

The WoA Installer lets you run Windows 10 on Arm 64 on the Pi but comes with no performance promises. The GitHub page notes: “WoA Installer needs a set of binaries, AKA the Core Package, to do its job. These binaries are not mine and are bundled and offered just for convenience…” Good luck!

How Much Memory Is Installed and Being Used on Your Linux Systems?

There are numerous ways to get information on the memory installed on Linux systems and view how much of that memory is being used. Some commands provide an overwhelming amount of detail, while others provide succinct, though not necessarily easy-to-digest, answers. In this post, we’ll look at some of the more useful tools for checking on memory and its usage.

Before we get into the details, however, let’s review a few basics. Physical memory and virtual memory are not the same. The latter includes disk space that is configured to be used as swap. Swap may include partitions set aside for this usage, or files that are created to add to the available swap space when creating a new partition is not practical. Some Linux commands provide information on both.

Swap expands memory by providing disk space that can be used to house inactive pages in memory that are moved to disk when physical memory fills up.
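A few of the commands covered there, for quick reference (flags shown are common ones; output formatting differs slightly across distributions):

free -h                            # physical memory and swap in human-readable units
swapon --show                      # which partitions or files back the swap space
grep -i memtotal /proc/meminfo     # the raw figure straight from the kernel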

Read more at Network World

Ampersands and File Descriptors in Bash

In our quest to examine all the clutter (&, |, ;, >, <, {, [, (, ), ], }, etc.) that is peppered throughout most chained Bash commands, we have been taking a closer look at the ampersand symbol (&).

Last time, we saw how you can use & to push processes that may take a long time to complete into the background. But, the &, in combination with angle brackets, can also be used to pipe output and input elsewhere.

In the previous tutorials on angle brackets, you saw how to use > like this:

ls > list.txt

to pipe the output from ls to the list.txt file.

Now we see that this is really shorthand for

ls 1> list.txt

And that 1, in this context, is a file descriptor that points to the standard output (stdout).

In a similar fashion 2 points to standard error (stderr), and in the following command:

ls 2> error.log

all error messages are piped to the error.log file.

To recap: 1> is the standard output (stdout) and 2> the standard error output (stderr).

There is a third standard file descriptor, 0<, the standard input (stdin). You can see it is an input because the arrow (<) is pointing into the 0, while for 1 and 2, the arrows (>) are pointing outwards.
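The explicit form of stdin is rarely needed, but it does work; these two lines are equivalent (using the list.txt file created earlier):

sort 0< list.txt
sort < list.txt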

What are the standard file descriptors good for?

If you are following this series in order, you have already used the standard output (1>) several times in its shorthand form: >.

Things like stderr (2) are also handy when, for example, you know that your command is going to throw an error, but what Bash informs you of is not useful and you don’t need to see it. If you want to make a directory in your home/ directory, for example:

mkdir newdir

and if newdir/ already exists, mkdir will show an error. But why would you care? (OK, there are some circumstances in which you may care, but not always.) At the end of the day, newdir will be there one way or another for you to fill up with stuff. You can suppress the error message by pushing it into the void, which is /dev/null:

mkdir newdir 2> /dev/null

This is not just a matter of “let’s not show ugly and irrelevant error messages because they are annoying,” as there may be circumstances in which an error message may cause a cascade of errors elsewhere. Say, for example, you want to find all the .service files under /etc. You could do this:

find /etc -iname "*.service"

But it turns out that on most systems, many of the lines spat out by find show errors because a regular user does not have read access rights to some of the folders under /etc. It makes reading the correct output cumbersome and, if find is part of a larger script, it could cause the next command in line to bork.

Instead, you can do this:

find /etc -iname "*.service" 2> /dev/null

And you get only the results you are looking for.

A Primer on File Descriptors

There are some caveats to having separate file descriptors for stdout and stderr, though. If you want to store the output in a file, doing this:

find /etc -iname "*.service" 1> services.txt

would work fine because 1> means “send standard output, and only standard output (NOT standard error) somewhere”.

But herein lies a problem: what if you *do* want to keep a record within the file of the errors along with the non-erroneous results? The instruction above won’t do that because it ONLY writes the correct results from find, and

find /etc -iname "*.service" 2> services.txt

will ONLY write the errors.

How do we get both? Try the following command:

find /etc -iname "*.service" &> services.txt

… and say hello to & again!

We have been saying all along that stdin (0), stdout (1), and stderr (2) are file descriptors. A file descriptor is a special construct that points to a channel to a file, either for reading, or writing, or both. This comes from the old UNIX philosophy of treating everything as a file. Want to write to a device? Treat it as a file. Want to write to a socket and send data over a network? Treat it as a file. Want to read from and write to a file? Well, obviously, treat it as a file.

So, when managing where the output and errors from a command goes, treat the destination as a file. Hence, when you open them to read and write to them, they all get file descriptors.
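As a small taste of where this leads (a sketch, not part of the original examples), you can open extra file descriptors of your own with exec and write to them just like 1 and 2:

exec 3> notes.txt    # open file descriptor 3 as a write channel to notes.txt
echo "hello" >&3     # send output through that channel
exec 3>&-            # close the channel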

This has interesting effects. You can, for example, pipe contents from one file descriptor to another:

find /etc -iname "*.service" 1> services.txt 2>&1

This pipes stderr to stdout and stdout is piped to a file, services.txt.

And there it is again: the &, signaling to Bash that 1 is the destination file descriptor.

Another thing about the standard file descriptors is that, when you pipe from one to another, the order in which you do this is a bit counterintuitive. Take the command above, for example. It looks like it has been written the wrong way around. You may be reading it like this: “pipe the output to a file and then pipe errors to the standard output.” It would seem the error output comes too late and is sent when 1 is already done.

But that is not how file descriptors work. A file descriptor is not a placeholder for the file, but for the input and/or output channel to the file. In this case, when you do 1> services.txt, you are saying “open a write channel to services.txt and leave it open”. 1 is the name of the channel you are going to use, and it remains open until the end of the line.

If you still think it is the wrong way around, try this:

find /etc -iname "*.service" 2>&1 1>services.txt

And notice how it doesn’t work; notice how errors get piped to the terminal and only the non-erroneous output (that is stdout) gets pushed to services.txt.

That is because Bash processes every result from find from left to right. Think about it like this: when Bash gets to 2>&1, stdout (1) is still a channel that points to the terminal. If the result that find feeds Bash contains an error, it is popped into 2, transferred to 1, and, away it goes, off to the terminal!

Then at the end of the command, Bash sees you want to open stdout as a channel to the services.txt file. If no error has occurred, the result goes through 1 into the file.

By contrast, in

find /etc -iname "*.service" 1>services.txt 2>&1

1 is pointing at services.txt right from the beginning, so anything that pops into 2 gets piped through 1, which is already pointing to the final resting place in services.txt, and that is why it works.

In any case, as mentioned above, &> is shorthand for “both standard output and standard error”, that is, 2>&1.

This is probably all a bit much, but don’t worry about it. Re-routing file descriptors here and there is commonplace in Bash command lines and scripts. And, you’ll be learning more about file descriptors as we progress through this series. See you next week!

Runc and CVE-2019-5736

This morning a container escape vulnerability in runc was announced. We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure.

What Is Runc?

Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, Containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems.

Kubernetes in turn sits on top of those tools, and so while no part of Kubernetes itself is vulnerable, most Kubernetes installations are using runc under the hood.

What Is The Vulnerability?

While full details are still embargoed to give people time to patch, the rough version is that when running a process as root (UID 0) inside a container, that process can exploit a bug in runc to gain root privileges on the host running the container. This then gives the attacker unlimited access to the server as well as any other containers on that server.

If the process inside the container is either trusted (something you know is not hostile) or is not running as UID 0, then the vulnerability does not apply. It can also be prevented by SELinux, if an appropriate policy has been applied. Red Hat Enterprise Linux and CentOS both include appropriate SELinux permissions with their packages and so are believed to be unaffected if SELinux is enabled.

The most common source of risk is attacker-controlled container images, such as unvetted images from public repositories.
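While you wait for patched packages, something like the following can help you see which runc build your nodes are using (output details vary by Docker and runc version, and on older Docker releases the binary is named docker-runc):

runc --version
docker info | grep -i runc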

Read more at Kubernetes blog

How to Use SSH to Proxy Through a Linux Jump Host

Secure Shell (SSH) includes a number of tricks up its sleeve. One particular trick you may not know about is the ability to use a jump host. A jump host is used as an intermediate hop between your source machine and your target destination. In other words, you can access X from Y using a gateway.

There are many reasons to use a jump server. For example, jump servers are often placed between a secure zone and a DMZ. These jump servers provide for the transparent management of devices within the DMZ, as well as a single point of entry. Regardless of why you might want to use a jump server, do know that it must be a hardened machine (so don’t just depend upon an unhardened Linux machine to serve this purpose). By using a machine that hasn’t been hardened, you’re just as insecure as if you weren’t using the jump at all.

But how can you set this up? I’m going to show you how to create a simple jump with the following details (your setup will be defined by your network):
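The full walkthrough is at the link below, but to give a flavour of what it looks like, here is a minimal sketch using OpenSSH’s ProxyJump support (requires OpenSSH 7.3 or newer; the hostnames and usernames are placeholders, not the article’s actual values):

ssh -J user@jump.example.com user@target.internal

Or, to make it permanent, add an entry like this to ~/.ssh/config:

Host target.internal
    ProxyJump user@jump.example.com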

Read more at Tech Republic

Assess USB Performance While Exploring Storage Caching

The team here at the Dragon Propulsion Laboratory has kept busy building multiple Linux clusters as of late [1]. Some of the designs rely on spinning disks or SSD drives, whereas others use low-cost USB storage or even SD cards as boot media. In the process, I was hastily reminded of the limits of external storage media: not all flash is created equal, and in some crucial ways external drives, SD cards, and USB keys can be fundamentally different.

Turtles All the Way Down

Mass storage performance lags that of working memory in the Von Neumann architecture [2], with the need to persist data leading to the rise of caches at multiple levels in the memory hierarchy. An access speed gap of three orders of magnitude between levels makes this design decision essentially inevitable where performance is at all a concern. (See Brendan Gregg’s table of computer speed in human time [3].) The operating system itself provides the most visible manifestation of this design in Linux: Any RAM not allocated to a running program is used by the kernel to cache the reads from and buffer the writes to the storage subsystem [4], leading to the often repeated quip that there is really no such thing as “free memory” in a Linux system.

An easy way to observe the operating system (OS) buffering a write operation is to write the right amount of data to a disk in a system with lots of RAM, as shown in Figure 1: a rather improbable half a gigabyte worth of zeros is written to a generic, low-cost USB key in half a second, but the system then takes roughly 30 seconds when forced to sync [5] to disk.
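The experiment is easy to reproduce. Here is a rough sketch (the mount point is a placeholder, and writing to the wrong path with dd can destroy data, so double-check it): the dd step returns almost immediately because the zeros land in the page cache, while the sync step is where the real wait happens.

time dd if=/dev/zero of=/mnt/usb/zero.bin bs=1M count=512
time sync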

Read more at ADMIN magazine