
Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA

We’re pleased to announce the delivery of Kubernetes 1.14, our first release of 2019!

Kubernetes 1.14 consists of 31 enhancements: 10 moving to stable, 12 in beta, and 7 net new. The main themes of this release are extensibility and supporting more workloads on Kubernetes with three major features moving to general availability, and an important security feature moving to beta.

More enhancements graduated to stable in this release than any prior Kubernetes release. This represents an important milestone for users and operators in terms of setting support expectations. In addition, there are notable Pod and RBAC enhancements in this release, which are discussed in the “additional notable features” section below.

Let’s dive into the key features of this release:

Production-level Support for Windows Nodes

Up until now Windows Node support in Kubernetes has been in beta, allowing many users to experiment and see the value of Kubernetes for Windows containers. Kubernetes now officially supports adding Windows nodes as worker nodes and scheduling Windows containers, enabling a vast ecosystem of Windows applications to leverage the power of our platform. Enterprises with investments in Windows-based applications and Linux-based applications don’t have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system.

Read more at Kubernetes.io

Can Better Task Stealing Make Linux Faster?

Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements.

Load balancing via scalable task stealing

The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing tens to hundreds of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused.

I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all.

Results

Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats:

  • %find – percent of time spent in the old and new functions that search for idle CPUs and tasks to steal, and in setting the overloaded-CPUs bitmap.
  • steal – number of times a task is stolen from another CPU.

Elapsed time improves by 8 to 36%, costing at most 0.4% more find time.

CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel:

Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate.

The code

As of this writing, this work is not yet upstream, but the latest patch series is at https://lkml.org/lkml/2018/12/6/1253. If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using:


  # grep -q STEAL /sys/kernel/debug/sched_features && echo Yes
  Yes

If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in https://lkml.org/lkml/2018/12/6/1250. However, I suspect this effect is specific to hackbench, and that stealing will help other workloads on many-node systems. To try it, reboot with the kernel parameter sched_steal_node_limit=8 (or larger).
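To make that parameter persist across reboots, one approach (a sketch, assuming a GRUB-based Debian/Ubuntu layout; sched_steal_node_limit only exists on kernels carrying the stealing patches) is to add it to the default kernel command line:

```shell
# Prepend sched_steal_node_limit=8 to the GRUB_CMDLINE_LINUX value
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&sched_steal_node_limit=8 /' /etc/default/grub
sudo update-grub   # on Fedora/Oracle Linux: grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```

The sed expression reuses the matched prefix (`&`) so the existing command-line options are preserved.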

Future work

After the basic stealing algorithm is pushed upstream, I am considering the following enhancements:

  • If stealing within the last-level cache does not find a candidate, steal across LLCs and NUMA nodes.
  • Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues.
  • Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported.
  • Maintain a bitmap to identify idle cores and idle CPUs, for push balancing.

This article originally appeared at Oracle Developers Blog.

Linux Release Roundup: Applications and Distros Released This Week

This is a continually updated article that lists various Linux distribution and Linux-related application releases of the week.

At It’s FOSS, we try to provide you with all the major happenings of the Linux and Open Source world. But it’s not always possible to cover all the news, especially the minor releases of a popular application or a distribution.

Hence, I have created this page, which I’ll be continually updating with the links and short snippets of the new releases of the current week. Eventually, I’ll remove releases older than 2 weeks from the page.

Read more at It’s FOSS

How to Install NTP Server and Client(s) on Ubuntu 18.04 LTS

NTP, or Network Time Protocol, is used to synchronize all system clocks in a network so that they share the same time. The term NTP refers both to the protocol itself and to the client and server programs running on networked computers. NTP belongs to the traditional TCP/IP protocol suite and can easily be classified as one of its oldest parts.

When you are initially setting the clock, it takes six exchanges within 5 to 10 minutes before the clock is synchronized. Once the clocks in a network are synchronized, the client(s) update their clocks with the server once every 10 minutes, usually through a single exchange of messages (a transaction). These transactions use port 123 of your system.

In this article, we will describe a step-by-step procedure on how to:

  • Install and configure the NTP server on an Ubuntu machine.
  • Configure the NTP Client to be time synced with the server.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.
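As a rough sketch of what that setup looks like in practice (package names as on stock Ubuntu 18.04; 192.168.1.10 below is a placeholder for your server's address, not a value from the article):

```shell
# --- On the server ---
sudo apt update && sudo apt install -y ntp   # installs and starts the ntpd service
sudo ufw allow 123/udp                       # NTP transactions use UDP port 123

# --- On each client ---
sudo apt install -y ntp
# Point the client at the local server (placeholder address):
echo "server 192.168.1.10 prefer iburst" | sudo tee -a /etc/ntp.conf
sudo systemctl restart ntp
ntpq -p   # after a few exchanges, the server should appear in the peer list
```

This is a setup/configuration sketch, not a substitute for the step-by-step article.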

Read more at Vitux

Linux Foundation Welcomes LVFS Project

The Linux Foundation welcomes the Linux Vendor Firmware Service (LVFS) as a new project. LVFS is a secure website that allows hardware vendors to upload firmware updates. It’s used by all major Linux distributions to provide metadata for clients, such as fwupdmgr, GNOME Software and KDE Discover.

To learn more about the project’s history and goals, we talked with Richard Hughes, upstream maintainer of LVFS and Principal Software Engineer at Red Hat.

Linux Foundation: Briefly, what is Linux Vendor Firmware Service (LVFS)? Can you give us a little background on the project?

Richard Hughes:  A long time ago I wanted to design and build an OpenHardware colorimeter (a device used to measure the exact colors on screen) as a weekend hobby. To update the devices, I also built a command line tool and later a GUI tool to update just the ColorHug firmware, downloading a list of versions as an XML file from my personal homepage. I got lots of good design advice from Lapo Calamandrei for the GUI (a designer from GNOME), but we concluded it was bad having to reinvent the wheel and build a new UI for each open hardware device.

A few months prior, Microsoft made UEFI UpdateCapsule a requirement for the “Windows 10 sticker.” This meant vendors had to start supporting system firmware updates via a standardized format that could be used from any OS. Peter Jones (a colleague at Red Hat) did the hard work of working out how to deploy these capsules on Linux successfully. The capsules themselves are just binary executables, so what was needed was the same type of metadata that I was generating for ColorHug, but in a generic format.

Some vendors like Dell were already generating some kinds of metadata and trying to support Linux. A lot of the tools for applying the firmware updates were OEM-specific, usually only available for Windows, and sometimes made dubious security choices. By using the same container file format as proposed by Microsoft (the reason we use a cabinet archive, rather than .tar or .zip) vendors could build one deliverable that worked on Windows and Linux.

Dell has been a supporter ever since the early website prototypes. Mario Limonciello (Senior Principal Software Development Engineer from Dell) has worked with me on both the lvfs-website project and fwupd in equal measure, and I consider him a co-maintainer of both projects. Now the LVFS supports firmware updates on 72 different devices, from about 30 vendors, and has supplied over 5 million firmware updates to Linux clients.

The fwupd project is still growing, supporting more hardware with every release. The LVFS continues to grow, adding important features like two-factor authentication, OAuth and various other tools designed to get high-quality metadata from the OEMs and integrate it into ODM pipelines. The LVFS is currently supported by donations, which fund the two server instances and some of the test hardware I use when helping vendors.

Hardware vendors upload redistributable firmware to the LVFS site packaged up in an industry-standard .cab archive along with a Linux-specific metadata file. The fwupd daemon allows session software to update device firmware on the local machine. Although fwupd and the LVFS were designed for desktops, both are also usable on phones, tablets, IoT devices and headless servers.

The LVFS and fwupd daemon are open source projects with contributions from dozens of people from many different companies. Plugins allow many different update protocols to be supported.

Linux Foundation: What are some of the goals of the LVFS project?

Richard Hughes: The short-term goal was to get 95% of updatable consumer hardware supported. With the recent addition of HP, that’s now a realistic target, although you have to qualify the 95% with “new consumer non-enterprise hardware sold this year,” as quite a few vendors will only support hardware no older than a few years at most, and most still charge for firmware updates for enterprise hardware. My long-term goal is for the LVFS to be seen as a boring, critical piece of infrastructure in Linux, much like you’d consider an NTP server for accurate time, or a PGP keyserver for trust.

With the recent Spectre and Meltdown issues hitting the industry, firmware updates are no longer seen as something that just adds support for new hardware or fixes the occasional hardware issue. Now that the EFI BIOS is a fully fledged operating system with networking capabilities, companies and government agencies are realizing that firmware updates are as important as kernel updates, and many are now writing in “must support LVFS” as part of any purchasing policy.

Linux Foundation: How can the community learn more and get involved?

Richard Hughes: The LVFS is actually just a Python Flask project, and it’s all free code. If there’s a requirement that you need supporting, either as an OEM, ODM, company, or end user, we’re really pleased to talk about things either privately in email, or as an issue or pull request on GitHub. If a vendor wants a custom flashing protocol added to fwupd, the same rules apply, and we’re happy to help.

Quite a few vendors are testing the LVFS and fwupd in private, and we agree to only make the public announcement when everything is working and the legal and PR teams give the thumbs-up. From a user point of view, we certainly need to tell hardware vendors to support fwupd and the LVFS before the devices are sitting on shelves.

We also have a low-volume LVFS announce mailing list, or a user fwupd mailing list for general questions. Quite a few people are helping to spread the word, by giving talks at local LUGs or conferences, or presenting information in meetings or elsewhere. I’m happy to help with that, too.

This article originally appeared at Linux Foundation

An Introduction to Linux Virtual Interfaces: Tunnels

Linux has supported many kinds of tunnels, but new users may be confused by their differences and unsure which one is best suited for a given use case. In this article, I will give a brief introduction to commonly used tunnel interfaces in the Linux kernel. There is no code analysis, only a brief introduction to the interfaces and their usage on Linux. Anyone with a networking background might be interested in this information. A list of tunnel interfaces, as well as help on specific tunnel configuration, can be obtained by issuing the iproute2 command ip link help.

This post covers a number of frequently used tunnel interfaces.

After reading this article, you will know what these interfaces are, the differences between them, when to use them, and how to create them.

Read more at Red Hat Developers 

The Central Security Project: Vulnerability Reporting for Open Source Java

When a security researcher finds a security bug, what do they do? Unfortunately, the answer sometimes is they search for the appropriate people to notify and, when they can’t be found, end up posting the vulnerability to public email lists, the GitHub project, or even Twitter.

This is the problem that security platform HackerOne and software supply chain management tool Sonatype have teamed up to solve with The Central Security Project, a new effort that “brings together the ethical hacker and open source communities to streamline the process for reporting and resolving vulnerabilities discovered in libraries housed in The Central Repository, the world’s largest collection of open source components,” according to a statement.

“We have a critical need to centralize security reporting in the open source industry especially given the proliferation of ecosystems like Github which encourage decentralization,” said Blevins. “The Central Security Project is a significant industry milestone that creates an open source reporting ecosystem that can function at GitHub scale.”

Read more at The New Stack

Kubernetes 1.14 Enhances Cloud-Native Platform With Windows Nodes

The first major update of the open-source Kubernetes cloud-native platform in 2019 was released on March 25, with the general availability of Kubernetes 1.14.

Kubernetes is a broadly deployed container orchestration system project that is hosted by the Cloud Native Computing Foundation (CNCF) and benefits from a diverse set of contributors and vendors that support and develop the project. With Kubernetes 1.14, the project is adding 10 new enhancements as stable features that provide new capabilities for users. Among the biggest enhancements is production-level support for Windows nodes.

“I’m proud of just the fact that in Kubernetes 1.14 there are more stable enhancements than in any previous Kubernetes release,” Aaron Crickenberger, Google test engineer and Kubernetes 1.14 release lead, told eWEEK. “The continued focus on stability speaks to this community’s commitment.”

One of the biggest changes overall in Kubernetes 1.14 wasn’t any one specific feature, but rather a new process for defining how and when enhancements are accepted and move through the Kubernetes development cycle. The Kubernetes Enhancement Proposal (KEP) approach was first implemented for Kubernetes 1.14 and helped Crickenberger and the broader community manage the enhancements process.

Read more at eWeek

Using Square Brackets in Bash: Part 1

After taking a look at how curly braces ({}) work on the command line, now it’s time to tackle brackets ([]) and see how they are used in different contexts.

Globbing

The first and easiest use of square brackets is in globbing. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type, say, when you wanted to list JPEGs but not PNGs:

ls *.jpg

Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.

In the example above, the asterisk means “zero or more characters”. There is another globbing wildcard, ?, which means “exactly one character”, so, while


ls d*k*

will list files called darkly and ducky (and dark and duck — remember * can also be zero characters),


ls d*k?

will not list darkly (or dark or duck), but it will list ducky.

Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, cd into it, and create a bunch of files like this:


touch file0{0..9}{0..9}

(If you don’t know why that works, take a look at the last installment that explains curly braces {}).

This will create files file000, file001, file002, etc., through file097, file098 and file099.

Then, to list the files in the 70s and 80s, you can do this:


ls file0[78]?

To list file022, file027, file028, file052, file057, file058, file092, file097, and file098 you can do this:


ls file0[259][278]

Of course, you can use globbing (and square brackets for sets) for more than just ls. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.
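A small aside worth sketching here (standard Bash globbing, not something specific to this article): inside the brackets you can also give a range with a dash, so the “70s and 80s” listing above could equally be written as a range over both digits:

```shell
touch file0{0..9}{0..9}    # same test files as above: file000 … file099

ls file0[78]?              # explicit set plus a single-character wildcard
ls file0[7-8][0-9]         # ranges: matches exactly the same 20 files
```

Both commands list file070 through file089.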

Let’s say you want to create duplicates of files file010 through file029 and call the copies archive010, archive011, archive012, etc.

You can’t do:


cp file0[12]? archive0[12]?

Because globbing is for matching against existing files and directories and the archive… files don’t exist yet.

Doing this:


cp file0[12]? archive0[1..2][0..9]

won’t work either, because cp doesn’t let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:


mkdir archive

cp file0[12]? archive

would work, but it would copy the files, using their same names, into a directory called archive/. This is not what you set out to do.

However, if you look back at the article on curly braces ({}), you will remember how you can use % to lop off the end of a string contained in a variable.

Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of %, you use #.

For practice, you can try this:


myvar="Hello World"

echo Goodbye Cruel ${myvar#Hello}

It prints “Goodbye Cruel World” because #Hello gets rid of the Hello part at the beginning of the string stored in myvar.
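To see # and % side by side (same myvar as above; note the leading and trailing spaces in the results):

```shell
myvar="Hello World"
echo "${myvar#Hello}"                 # "#" trims from the front: " World"
echo "${myvar%World}"                 # "%" trims from the end: "Hello "
echo "Goodbye Cruel${myvar#Hello}"    # prints "Goodbye Cruel World"
```

The space in the final output comes from the remainder of myvar, which is why "Goodbye Cruel" needs no trailing space of its own.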

You can use this feature alongside your globbing tools to make your archive duplicates:


for i in file0[12]?; \
do \
cp $i archive${i#file}; \
done

The first line tells the Bash interpreter that you want to loop through all the files that contain the string file0 followed by the digits 1 or 2, and then one other character, which can be anything. The second line do indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.

Line 3 is where the actual copying happens, and you use the contents of the loop variable i twice: first, straight out, as the first parameter of the cp command, and then you add archive to its contents, while at the same time cutting off file. So, if i contains, say, file019


"archive" + "file019" - "file" = "archive019"

the cp line is expanded to this:


cp file019 archive019

Finally, notice how you can use the backslash to split a chain of commands over several lines for clarity.

In part two, we’ll look at more ways to use square brackets. Stay tuned.

Why The CDF Launch From Linux Foundation Is Important For the DevOps And Cloud Native Ecosystem

Earlier this month, the Linux Foundation announced the formation of Continuous Delivery Foundation (CDF) – a new foundation that joins the likes of Cloud Native Computing Foundation (CNCF) and Open Container Initiative (OCI).

Continuous Integration and Continuous Delivery (CI/CD) has become an essential building block of modern application lifecycle management. This technique allows businesses to increase the velocity of delivering software to users. Through CI/CD, what was once confined to large, web-scale companies became available to early-stage startups and enterprises.

With the launch of CDF, the Linux Foundation has taken the first step in bringing the most popular CI/CD tools under the same roof. This would enable key contributors such as CloudBees, Netflix, Google, and other members to collaborate rather than duplicate efforts. The foundation would encourage members to contribute to critical areas of software delivery with an increased focus. This immensely helps the ecosystem in adopting best-of-breed tools and the best practices of implementing CI/CD pipelines.

Read more at Forbes