
Why You Need DevOps and the Cloud

DevOps and the cloud are not only two of today’s biggest tech trends, but are inextricably linked. Research from DORA’s 2018 Accelerate State of DevOps Report and Redgate’s 2019 State of Database DevOps Report outlines a clear correlation between cloud and DevOps adoption, with the two working together to contribute to greater business success.

For example, in the DORA report, those companies that met all essential cloud characteristics were 23 times more likely to be in the elite group when it came to DevOps performance. Similarly, Redgate found that 43 percent of organizations that have adopted DevOps have server estates that are all or mostly cloud-based. This compares to just 12 percent of organizations that have not yet adopted DevOps or have no plans to.

Looking into the link in more detail, research shows that there are four common factors that underpin DevOps and the cloud:

Cultural Openness to Transformation

Adopting DevOps is not just a technical or process change, but one that requires a certain type of culture to be in place. 

Read more at DevOps.com

Linux Desktop News: Solus 4 Released With New Budgie Goodness

After teasing fans for several months with the 3.9999 ISO refresh, the team at Solus has delivered “Fortitude,” a new release of the independent Linux desktop OS. And like elementary OS did with Juno, it seems to earn that major version number.

Perhaps the most notable upgrade is the appearance of Budgie 10.5, even before it lands on the slick desktop environment’s official Ubuntu flavor next month. I first experienced Budgie during my review of the InfinityCube from Tuxedo Computers, and I found a lot to love about it. … Read more at Forbes

Finding Files with mlocate

Learn how to locate files in this tutorial from our archives.

It’s not uncommon for a sysadmin to have to find needles buried deep inside haystacks. On a busy machine, there can be hundreds of thousands of files present on your filesystems. What do you do when you need to make sure one particular configuration file is up to date, but you can’t remember where it is located?

If you’ve used Unix-type machines for a while, then you’ve almost certainly come across the find command before. It is unquestionably sophisticated and highly functional. Here’s an example that just searches for links inside a directory, ignoring files:

# find . -lname "*"

You can do seemingly endless things with the find command; there’s no denying that. The find command is nice and succinct when it wants to be, but it can also get complex very quickly. This is not necessarily because of the find command itself, but because, coupled with xargs, you can pass it all sorts of options to tune your output, and indeed delete those files that you have found.
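For instance, here’s a quick sketch of the kind of find-plus-xargs pairing I mean (an illustrative example of my own, not from the original article); it hunts down stale .tmp files and removes them, with -print0 and -0 keeping filenames containing spaces intact:

# find /tmp -type f -name "*.tmp" -print0 | xargs -0 rm -f

To preview the matches before deleting anything, run the find portion on its own with -print instead.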

Location, location, frustration

There often comes a time when simplicity is the preferred route, however — especially when a testy boss is leaning over your shoulder, chatting away about how time is of the essence. And, imagine trying to vaguely guess the path of the file you’ve never seen but that your boss is certain lives somewhere on the busy /var partition.

Step forward mlocate. You may be aware of one of its close relatives: slocate, which securely (note the prepended s for secure) took note of pertinent file permissions to prevent unprivileged users from seeing privileged files. There is also the older, original locate command whence they both came.

The difference between mlocate and the other members of its family (according to mlocate at least) is that, when scanning your filesystems, mlocate doesn’t need to continually rescan them all. Instead, it merges its findings (note the prepended m for merge) with any existing file lists, making it much more performant and lighter on system caches.

In this series of articles, we’ll look more closely at the mlocate tool (and simply refer to it as “locate” due to its popularity) and examine how to quickly and easily tune it to your heart’s content.

Compact and Bijou

If you’re anything like me, then unless you reuse complex commands frequently, you ultimately forget them and need to look them up. The beauty of the locate command is that you can query entire filesystems very quickly, without worrying about top-level root paths, with one simple command.
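To show what I mean, here’s a one-liner sketch (assuming the database has already been built, which we’ll cover in a moment; the filename is just an illustrative choice) that hunts down an OpenSSH config file wherever it happens to live:

# locate sshd_config

Every path on the system containing that string comes back in an instant, with no directory traversal at query time.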

In the past, you might well have discovered that the find command can be very stubborn and cause you lots of unwelcome head-scratching. You know, a missing semicolon here or a special character not being escaped properly there. Let’s leave the complicated find command alone now, relax, and have a look into the clever little command that is locate.

You will most likely want to check that it’s on your system first by running these commands:

For Red Hat derivatives:

# yum install mlocate

For Debian derivatives:

# apt-get install mlocate

There shouldn’t be any differences between distributions, but there are almost certainly subtle differences between versions; beware.

Next, we’ll introduce a key component of the locate command, namely updatedb. As you can probably guess, this is the command which updates the locate command’s db. Hardly counterintuitive.

The db is the locate command’s file list, which I mentioned earlier. That list is held in a relatively simple and highly efficient database for performance. The updatedb command runs periodically, usually at quiet times of the day, scheduled via a cron job. In Listing 1, we can see the innards of the file /etc/cron.daily/mlocate.cron (both the file’s path and its contents may be distro and version dependent).

#!/bin/sh
# Gather the names of virtual (nodev) filesystems so updatedb can skip them.
nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')
# Drop to the lowest CPU and I/O priority to minimize the impact on the system.
renice +19 -p $$ >/dev/null 2>&1
ionice -c2 -n7 -p $$ >/dev/null 2>&1
/usr/bin/updatedb -f "$nodevs"

Listing 1: How the “updatedb” command is triggered every day.
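Incidentally, you don’t have to wait for the cron job; as the root user, you can rebuild the file list by hand at any time and query it immediately afterwards. A quick sketch:

# updatedb
# locate mlocate.cron

Bear in mind that updatedb may take a little while to walk a busy filesystem.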

As you can see, the mlocate.cron script makes careful use of the excellent nice family of commands (renice and ionice) in order to have as little impact as possible on system performance. Note that I haven’t stated that this command runs at a set time every day (although, if my addled memory serves, the original locate command was associated with a slow-down-your-computer run scheduled at midnight). That’s because, on some “cron” versions, delays are now introduced into overnight start times.

This is probably because of the so-called Thundering Herd of Hippos problem. Imagine lots of computers (or hungry animals) waking up at the same time to demand food (or resources) from a single or limited source. This can happen when all your hippos set their wristwatches using NTP (okay, this analogy is getting stretched too far, but bear with me). Imagine that, exactly every five minutes (just as a “cron job” might fire), they all demand access to whatever is being served.

If you don’t believe me, then have a quick look at the config for anacron, a variant of cron, in Listing 2, which shows the guts of the file /etc/anacrontab.

# /etc/anacrontab: configuration file for anacron
# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1       5       cron.daily              nice run-parts /etc/cron.daily
7       25      cron.weekly             nice run-parts /etc/cron.weekly
@monthly 45     cron.monthly            nice run-parts /etc/cron.monthly

Listing 2: How delays are introduced into the times at which “cron” jobs are run.

From Listing 2, you have hopefully spotted both “RANDOM_DELAY” and the “delay in minutes” column. If this aspect of cron is new to you, then you can find out more here:

# man anacrontab

Failing that, you can introduce a delay yourself if you’d like. An excellent web page (now more than a decade old) discusses this issue in a perfectly sensible way, suggesting the use of sleep to introduce a level of randomness, as seen in Listing 3.

#!/bin/sh

# Grab a random value between 0 and 240.
value=$RANDOM
while [ $value -gt 240 ] ; do
 value=$RANDOM
done

# Sleep for that time.
sleep $value

# Synchronize.
/usr/bin/rsync -aqzC --delete --delete-after masterhost::master /some/dir/

Listing 3: A shell script to introduce random delays before triggering an event, to avoid a Thundering Herd of Hippos.

The aim in mentioning these (potentially surprising) delays was to point you at the file /etc/crontab, or the root user’s own crontab file. If you want to change when the locate command runs, specifically because of disk access slowdowns, then it’s not too tricky. There may be a more graceful way of achieving this result, but you can also just move the file /etc/cron.daily/mlocate.cron somewhere else (I’ll use the /usr/local/etc directory) and then, as the root user, add an entry to the root user’s crontab with this command, pasting in the content shown below it:

# crontab -e

33 3 * * * /usr/local/etc/mlocate.cron

Rather than traipse through /var/log/cron and its older, rotated versions, you can quickly tell the last time your cron.daily jobs were fired (in the case of anacron, at least) with:

# ls -hal /var/spool/anacron

Next time, we’ll look at more ways to use locate, updatedb, and other tools for finding files.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).

Searchable List of Certified Open Hardware Projects

In this article, and hopefully more to come, I will share interesting examples of hardware that has been certified by the Open Source Hardware Association (OSHWA).

As an introduction to this series, I’ll start with a little background.

What is open source hardware?

Open source hardware is hardware that is, well, open source. The Open Source Hardware Association maintains a formal definition of open source hardware, but fundamentally, open source hardware is about two types of freedom. The first is freedom of information: Does a user have the information required to understand, replicate, and build upon the hardware? The second is freedom from legal barriers: Will legal barriers (such as intellectual property rights) prevent a user who is trying to understand, replicate, and build upon the hardware from doing so? True open source hardware is open to everyone to do with as they see fit.

Read more at OpenSource.com

The What and the Why of the Cluster API

Throughout the evolution of software tools there exists a tension between generalization and partial specialization. A tool’s broader adoption is a form of natural selection, where its evolution is predicated on filling a given need, or role, better than its competition. This premise is imbued in the central tenets of Unix philosophy:

  • Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
  • Expect the output of every program to become the input to another, as yet unknown, program.

The domain of configuration management tooling is rife with examples of not heeding this lesson (e.g., Terraform, Puppet, Chef, Ansible, Juju, Saltstack), where expansion in generality has given way to partial specialization of different tools, causing fragmentation of an ecosystem. This pattern has not gone unnoticed by those in the Kubernetes cluster lifecycle special interest group, or SIG, whose objective is to simplify the creation, configuration, upgrade, downgrade, and teardown of Kubernetes clusters and their components. Therefore, one of the primary design principles for any subproject that the SIG endorses is: Where possible, tools should be composable to solve a higher-order set of problems.

In this post, we will outline the history and motivations behind the creation of the Cluster API as a specialized toolset to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management in the Kubernetes ecosystem. Cluster API is not meant to supplant existing tools, but rather to serve as a partial specialization that can be used in a composable fashion with them.

Read more at VMware

Understanding GCC Warnings

Most of us appreciate it when our compiler lets us know we made a mistake. Finding coding errors early lets us correct them before they embarrass us in a code review or, worse, turn into bugs that impact our customers. Beyond the compulsory errors, many projects enable additional diagnostics by using the -Wall and -Wextra command-line options, and some even turn those warnings into errors via -Werror as their first line of defense. But not every instance of a warning necessarily means the code is buggy. Conversely, the absence of warnings for a piece of code is no guarantee that there are no bugs lurking in it.
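For readers who haven’t met these options, a typical invocation (an illustrative example of mine, not drawn from the article; myprog.c is a stand-in filename) looks like this:

gcc -Wall -Wextra -Werror -o myprog myprog.c

With -Werror in place, any diagnostic raised by -Wall or -Wextra stops the build rather than scrolling past unnoticed.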

In this article, I would like to shed more light on the trade-offs involved in GCC’s implementation choices. Besides illuminating the underlying issues for GCC contributors interested in implementing new warnings or improving existing ones, I hope it will help calibrate GCC users’ expectations about what kinds of problems can be expected to be detected, and with what efficacy. Having a better understanding of the challenges should also reduce the frustration that the limitations of the available solutions can sometimes cause. (See part 2 to learn more about middle-end warnings.)

Read more at Red Hat Developers

Mageia Linux Is a Modern Throwback to the Underdog Days

I’ve been using Linux long enough to remember Linux Mandrake. I recall, at one of my first-ever Linux conventions, hanging out with the MandrakeSoft crew and being starstruck to think that they were creating a Linux distribution that was sure to bring about world domination for the open source platform.

Well, that didn’t happen. In fact, Linux Mandrake didn’t even stand the test of time. It was renamed Mandriva and rebranded. Mandriva retained popularity but eventually came to a halt in 2011. The company disbanded, sending all those star developers to other projects. Of course, rising from the ashes of Mandrake Linux came the likes of OpenMandriva, as well as another distribution called Mageia Linux.

Like OpenMandriva, Mageia Linux is a fork of Mandriva. It was created (by a group of former Mandriva employees) in 2010 and first released in 2011, so there was next to no downtime between the end of Mandriva and the release of Mageia. Since then, Mageia has existed in the shadows of bigger, more popular flavors of Linux (e.g., Ubuntu, Mint, Fedora, Elementary OS, etc.), but it’s never faltered. As of this writing, Mageia is listed as number 26 on the Distrowatch Page Hit Ranking chart and is enjoying release number 6.1.

What Sets Mageia Apart?

This question has become quite important when looking at Linux distributions, considering just how many distros there are—many of which are hard to tell apart. If you’ve seen one KDE, GNOME, or Xfce distribution, you’ve seen them all, right? Anyone who’s used Linux enough knows this statement is not even remotely true. For many distributions, though, the differences lie in the subtleties. It’s not about what you do with the desktop; it’s how you put everything together to improve the user experience.

Mageia Linux defaults to the KDE desktop and does as good a job as any other distribution at presenting KDE to users. But before you start using KDE, you should note some differences between Mageia and other distributions. To start, the installation is quite simple, but it’s slightly askew from what you might expect. In similar fashion to most modern distributions, you boot up the live instance and click on the Install icon (Figure 1).

Figure 1: Installing Mageia from the Live instance.

Once you’ve launched the installation app, it’s fairly straightforward, although not quite as simple as some other versions of Linux. New users might hesitate when they are presented with the partition choice between Use free space or Custom disk partition (remember, I’m talking about new users here). This type of user might prefer slightly simpler verbiage. Consider this: What if you were presented (at the partition section) with two choices:

  • Basic Install

  • Custom Install

The Basic Install path would choose a fairly standard set of options (e.g., using the whole disk for installation and placing the bootloader in the proper/logical place). In contrast, the Custom Install path would allow the user to install in a non-default fashion (for dual boot, etc.) and choose where the bootloader would go and what options to apply.

The next possible confusing step (again, for new users) is the bootloader (Figure 2). For those who have installed Linux before, this option is a no-brainer. For new users, even understanding what a bootloader does can be a bit of an obstacle.

Figure 2: Configuring the Mageia bootloader.

The bootloader configuration screen also allows you to password protect GRUB2. Because of the layout of this screen, that field could be mistaken for the root user password. It’s not. If you don’t want to password protect GRUB2, leave this blank. In the final installation screen (Figure 3), you can set any bootloader options you might want. Once again, we find a window that could cause confusion for new users.

Figure 3: Advanced bootloader options can be configured here.

Click Finish and the installation will complete. You might have noticed the absence of user configuration or root user password options. With the first stage of the installation complete, you reboot the machine, remove the installer media, and (when the machine reboots) you’ll then be prompted to configure both the root user password and a standard user account (Figure 4).

Figure 4: Configuring your users.

And that’s all there is to the Mageia installation.

Welcome to Mageia

Once you log into Mageia, you’ll be greeted by something every Linux distribution should use—a welcome app (Figure 5).

Figure 5: The Mageia welcome app is a new user’s best friend.

From this welcome app, you can get information about the distribution, get help, and join communities. The importance of having such an approach to greet users at login cannot be overstated. When new users log into Linux for the first time, they want to know that help is available, should they need it. Mageia Linux has done an outstanding job with this feature. Granted, all this app does is serve as a means to point users to various websites, but it’s important information for users to have at the ready.

Beyond the welcome app, the Mageia Control Center (Figure 6) also helps Mageia stand out. This one-stop shop is where users can install and update software, configure media sources for installation, set the update frequency, manage and configure hardware, configure network devices (e.g., VPNs, proxies, and more), configure system services, view logs, open an administrator console, create network shares, and so much more. This is as close to the openSUSE YaST tool as you’ll find (without using either SUSE or openSUSE).

Figure 6: The Mageia Control Center is an outstanding system management tool.

Beyond those two tools, you’ll find everything else you need to work. Mageia Linux comes with the likes of LibreOffice, Firefox, KMail, GIMP, Clementine, VLC, and more. Out of the box, you’d be hard pressed to find another tool you need to install to get your work done. It’s that complete a distribution.

Target Audience

Figuring out the target audience for Mageia Linux is tough. If new users can get past the somewhat confusing installation (which isn’t really that challenging, just slightly misleading), using Mageia Linux is a dream.

The slick, barely modified KDE desktop, combined with the welcome app and control center make for a desktop Linux that will let users of all skill levels feel perfectly at home. If the developers could tighten up the verbiage on the installation, Mageia Linux could be one of the greatest new user Linux experiences available. Until then, new users should make sure they understand what they’re getting into with the installation portion of this take on the Linux platform.

Chasing Linux Kernel Archives

Kernel development is truly impossible to keep track of. The main mailing list alone is vast beyond belief. Then there are all the side lists and IRC channels, not to mention all the corporate mailing lists dedicated to kernel development that never see the light of day. In some ways, kernel development has become fundamentally mysterious.

Once in a while, some lunatic decides to try to reach back into the past and study as much of the corpus of kernel discussion as he or she can find. One such person is Joey Pabalinas, who recently wanted to gather everything together in Maildir format, so he could do searches, calculate statistics, generate pseudo-hacker AI bots and whatnot.

He couldn’t find any existing giant corpus, so he tried to create his own by piecing together mail archived on various sites. It turned out to be more than a million separate files, which was too much to host on either GitHub or GitLab.

Read more at Linux Journal

ONS Evolution: Cloud, Edge, and Technical Content for Carriers and Enterprise

The first Open Networking Summit was held in October 2011 at Stanford University and was described as “a premier event about OpenFlow and Software-Defined Networking (SDN).” Here we are seven and a half years later, and I’m constantly amazed at both how far we’ve come since then and how quickly a traditionally slow-moving industry like telecommunications is embracing change and innovation powered by open source. Coming out of the ONS Summit in Amsterdam last fall, Network World described open source networking as the “new norm,” and indeed, open platforms have become de facto standards in networking.

Like the technology, ONS as an event is constantly evolving to meet industry needs and is designed to help you take advantage of this revolution in networking. The theme of this year’s event is “Enabling Collaborative Development & Innovation” and we’re doing this by exploring collaborative development and innovation across the ecosystem for enterprises, service providers and cloud providers in key areas like SDN, NFV, VNF, CNF/Cloud Native Networking, Orchestration, Automation of Cloud, Core Network, Edge, Access, IoT services, and more.

A unique aspect of ONS is that it facilitates deep technical discussions in parallel with exciting keynotes, industry, and business discussions in an integrated program. The latest innovations from the networking project communities including LF Networking (ONAP, OpenDaylight, OPNFV, Tungsten Fabric) are well represented in the program, and in features and add-ons such as the LFN Unconference Track and LFN Networking Demos. A variety of event experiences ensure that attendees have ample opportunities to meet and engage with each other in sessions, the expo hall, and during social events.

New this year is a track structure built to cover the key topics in depth, meeting the needs of both CIOs/CTOs/architects and developers, sysadmins, NetOps, and DevOps teams.

The ONS Schedule is now live — find the sessions and tutorials that will help you learn how to participate in the open source communities and ecosystems that will make a difference in your networking career. And if you need help convincing your boss, this will help you make the case.

The standard price expires March 17th, so hurry up and register today! Be sure to check out the Day Passes and Hall Passes available as well.

I hope to see you there!

This article originally appeared at the Linux Foundation.

Tutorial: Tap the Hidden Power of Your Bash Command History

Last month I wrote about combining a series of Unix commands using pipes. But there are times when you don’t even need pipes to turn a carefully chosen series of commands into a powerful and convenient home-grown utility. …

The echo command, for example, repeats whatever text is entered after it. I’d just never found it particularly useful, since it always seemed to be more trouble than it was worth. Sure, echo was handy for adding decorations to output.

echo "--------------------------" ; date ; echo "--------------------------"
--------------------------
Thu Feb 28 01:25:46 UTC 2019
--------------------------

But if you have to type in all those decorations in the first place, you’re not really saving any time.
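One way around that (a minimal sketch of my own, not something from the original article) is to wrap the decoration in a tiny shell function, so the typing only happens once:

frame() {
  echo "--------------------------"
  "$@"
  echo "--------------------------"
}

frame date

Drop the function into your ~/.bashrc and any command’s output can be framed on demand.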

What I’d really wanted (instead of echo) was a command to drop me back into that one deep-down subdirectory where I was doing most of my work. Something that was shorter than

cd ~/subdirectory/subdirectory/subdirectory/subdirectory/subdirectory

Yes, there’s a command that lets you change back to your last-used directory:

cd -
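To illustrate (my own sketch, not from the article): bash remembers your previous working directory in $OLDPWD, and cd - simply jumps back to it, printing the path to confirm.

cd ~/subdirectory/subdirectory/subdirectory/subdirectory/subdirectory
cd /tmp
cd -   # back in the deep subdirectory again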

Read more at The New Stack