
Top 10 New Linux SBCs to Watch in 2019

A recent Global Market Insights report projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don’t need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them tailored for highly specific applications.

Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of community-backed, open-spec SBCs.

Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent Embedded World show in Nuremberg. (There was also some interesting Linux software news at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.

Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google’s i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.

The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.

UP Xtreme—The latest in Aaeon’s line of community-backed SBCs taps Intel’s 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around — and possibly the most expensive.

The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon’s new AI Core X modules, which offer Intel’s latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration.

Jetson Nano Dev Kit—Nvidia just announced a low-end Jetson Nano compute module that’s sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.

Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there’s a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.

Coral Dev Board—Google’s very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Google’s Edge TPU AI chip—a stripped-down version of Google’s TPU designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.

The Coral Dev Board combines the Edge TPU chip with NXP’s quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia’s Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.

SBC-C43—Seco’s commercial, industrial temperature SBC-C43 board is the first SBC based on NXP’s high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.

The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.

Nitrogen8M_Mini—This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP’s new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you’re limited to HD video resolution.

Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.

Pine H64 Model B—Pine64’s latest hacker board was teased in late January as part of an ambitious roll-out of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.

The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.

AI-ML Board—Arrow unveiled this i.MX8X based SBC early this month along with the i.MX8M based Thor96 SBC, which shares the same 96Boards CE Extended format. While there are plenty of i.MX8M boards these days, we’re more intrigued by the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we’ve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.

The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.

BeagleBone AI—The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI’s dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC’s dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt of the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.

Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.

Robotics RB3 Platform (DragonBoard 845c)—Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based DragonBoard 820c SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a DragonBoard 845c product page, and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.

The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board’s expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM depth cameras. The SBC runs Linux with ROS (Robot Operating System).

Avenger96—Like Arrow’s AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST’s recently announced STM32MP15x. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.

This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It’s unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There’s also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.

Why Unikernels Are Great for DevOps

Unikernels are single-purpose virtual machines (VMs). They only run one application—which is interesting when you think about it, because that’s precisely how a lot of DevOps practitioners provision their applications today. There is way too much software to provision nowadays—pools of app servers, clusters of databases, etc. Unikernels are fast to boot, fast to run, small in size, way more secure than Linux or containers and, if you have access to your own bare metal—either in your data center or through a provider—you can run thousands of them per box.

Some stats? Some can boot in less than five milliseconds—that’s slightly slower than calling fork (three milliseconds) but orders of magnitude faster than Linux. Some can run databases, including Cassandra and MySQL, 20 percent faster. Some people (yours truly included) have provisioned thousands of them on a single box with no performance degradation. Some are reporting them running as a VM faster than a comparable bare-metal Linux installation of that application. Clearly, this is worth investigating.

Read more at DevOps.com

Datapractices.org Joins the Linux Foundation to Advance Best Practices and Offers Open Courseware Across Data Ecosystem

The Linux Foundation has announced that datapractices.org, a vendor-neutral community working on the first-ever template for modern data teamwork, has joined as an official Linux Foundation project.

DataPractices.org was pioneered by data.world as a “Manifesto for Data Practices” of four values and 12 principles that illustrate the most effective, ethical, and modern approach to data teamwork. As a member of the foundation, datapractices.org will expand to offer open courseware and establish a collaborative approach to defining and refining data best practices. 

We talked with Patrick McGarry, head of data.world, to learn more about DataPractices.org.

LF: Can you briefly describe datapractices.org and tell us about its history?
Patrick:
The Data Practices movement originated back in 2017 at The Open Data Science Leadership Summit in San Francisco. This event gathered together leaders in data science, semantics, open source, visualization, and industry to discuss the current state of the data community. We discovered that there were many similarities between the then-current challenges around data and the previous difficulties felt in software development that Agile addressed.

The goal of the Data Practices movement was to start a similar “Agile for Data” movement that could help offer direction and improved data literacy across the ecosystem. While the first step was the “Manifesto for Data Practices” the intent was always to move past that and apply the values and principles to a series of free and open courseware that could benefit anyone who was interested.

Read more at Linux Foundation

Printing from the Linux Command Line

Printing from the Linux command line is easy. You use the lp command to request a print, and lpq to see what print jobs are in the queue, but things get a little more complicated when you want to print double-sided or use portrait mode. And there are lots of other things you might want to do — such as printing multiple copies of a document or canceling a print job. Let’s check out some options for getting your printouts to look just the way you want them to when you’re printing from the command line.
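
For instance, a few of the most common requests look like this (the file name and job ID are illustrative):

$ lp -n 2 -o sides=two-sided-long-edge report.pdf    # print two double-sided copies
$ lpq                                                # list the jobs in the queue
$ cancel 123                                         # cancel job number 123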

Displaying printer settings

To view your printer settings from the command line, use the lpoptions command. The output should look something like this:

$ lpoptions
copies=1 device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/
  finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50
  job-sheets=none,none marker-change-time=1553023232
  marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00 marker-levels=18,62,62,63
  marker-names='Black Cartridge HP CC530A,Cyan Cartridge HP CC531A,Magenta Cartridge HP CC533A,Yellow Cartridge HP CC532A'
  marker-types=toner,toner,toner,toner number-up=1 printer-commands=none
  printer-info='HP Color LaserJet CP2025dn (F47468)' printer-is-accepting-jobs=true
  printer-is-shared=true printer-is-temporary=false printer-location
  printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7'
  printer-state=3 printer-state-change-time=1553023232 printer-state-reasons=none
  printer-type=167964 printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn
  sides=one-sided

This output is likely to be a little more human-friendly if you turn its blanks into carriage returns. Notice how many settings are listed.

Read more at Network World

How to Set the PATH Variable in Linux

In Linux, your PATH is a list of directories that the shell will look in for executable files when you issue a command without a path. The PATH variable is usually populated with some default directories, but you can set the PATH variable to anything you like.

When a command name is specified by the user or an exec call is made from a program, the system searches through $PATH, examining each directory from left to right in the list, looking for a filename that matches the command name. Once found, the program is executed as a child process of the command shell or program that issued the command.  -Wikipedia

In this short tutorial, we will discuss how to add a directory to your PATH and how to make the changes permanent. Although there is no native way to delete a default directory from your path, we will discuss a workaround. We will end by creating a short script and placing it in our newly created PATH to demonstrate the benefits of the PATH variable.
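
As a preview of the approach, extending the PATH for the current session versus permanently looks something like this (~/bin is just an example directory):

$ mkdir -p ~/bin
$ export PATH="$PATH:$HOME/bin"                        # takes effect in the current shell only
$ echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc    # persists across future bash sessions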

Read more at Putorius

BPF: A Tour of Program Types

Notes on BPF (1) – A Tour of Program Types

Oracle Linux kernel developer Alan Maguire presents this six-part series on BPF, wherein he presents an in depth look at the kernel’s “Berkeley Packet Filter” — a useful and extensible kernel function for much more than packet filtering.

If you follow Linux kernel development discussions and blog posts, you’ve probably heard BPF mentioned a lot lately. It’s being used for high-performance load-balancing, DDoS mitigation and firewalling, safe instrumentation of kernel and user-space code and much more! BPF does this by supporting a safe, flexible programming environment in many different contexts: networking datapaths, kernel probes, perf events and more. Safety is key – in most environments, adding a kernel module introduces significant risk. BPF programs, however, are verified at program load time to ensure no out-of-bounds accesses occur, etc. In addition, BPF supports just-in-time compilation of its bytecode to the native instruction set, so BPF programs are also fast. If you’re interested in topics like fast packet processing and observability, learning BPF should definitely be on your to-do list.

Here we try to give a guide to BPF, covering a range of topics which will hopefully help developers trying to get to grips with writing BPF programs. This guide is based on Linux 4.14 (which is the kernel for Oracle Linux UEK5), so do bear that in mind as there have been a bunch of changes in BPF since, and some package names etc may differ for other distributions.

Because BPF in Linux is such a fast-moving target, I’m going to try and point you at relevant places in the kernel codebase that may help you get a sense for what the technology can do. The samples/bpf directory is a great place to look to see what others have done, but here we’ll also dig into the implementation as reference, as it may give you some ideas how to create new BPF programs. The aim here isn’t to give a deep dive into BPF internals, but rather to give a few pointers to areas in the code which reveal BPF functionality. The source tree I’m using for reference is our UEK5 release, based on Linux 4.14.35. See https://github.com/oracle/linux-uek/tree/uek5/master . Most of the functionality described can be found in any recent kernel. The bpf-next tree (where BPF kernel development happens) can be found at

https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git

An important caveat; again, what is below describes the state as per the 4.14 kernel. A lot has changed since; but hopefully with the pointers into the code, you’ll be better equipped to figure out what some of these changes are!

The aim here is to be able to get to the point of working on interesting problems with BPF. However before we get there, let’s look at the various pieces and how they fit together.

The first question to ask is what can we do with BPF? What kinds of programs can we write?

To get a sense for this, let’s examine the enumerated type definition from include/uapi/linux/bpf.h https://github.com/oracle/linux-uek/blob/uek5/master/include/uapi/linux/bpf.h#L117

enum bpf_prog_type {
    BPF_PROG_TYPE_UNSPEC,
    BPF_PROG_TYPE_SOCKET_FILTER,
    BPF_PROG_TYPE_KPROBE,
    BPF_PROG_TYPE_SCHED_CLS,
    BPF_PROG_TYPE_SCHED_ACT,
    BPF_PROG_TYPE_TRACEPOINT,
    BPF_PROG_TYPE_XDP,
    BPF_PROG_TYPE_PERF_EVENT,
    BPF_PROG_TYPE_CGROUP_SKB,
    BPF_PROG_TYPE_CGROUP_SOCK,
    BPF_PROG_TYPE_LWT_IN,
    BPF_PROG_TYPE_LWT_OUT,
    BPF_PROG_TYPE_LWT_XMIT,
    BPF_PROG_TYPE_SOCK_OPS,
    BPF_PROG_TYPE_SK_SKB,
};

What are all of these program types? To understand this, we will ask the same set of questions for each program type:

  • what do I do with this program type?
  • how do I attach my BPF program for this program type?
  • what context is provided to my program? By this we mean what argument(s) and data are provided for us to work with.
  • when does the attached program get run? It’s important to understand this, as it gives a sense of where for example in the network stack a filter is applied.

We won’t worry about how you create the programs for now; that side of things is relatively uniform across the various program types.

1. socket-related program types – SOCKET_FILTER, SK_SKB, SOCK_OPS

First, let’s consider the socket-related program types which allow us to filter, redirect socket data and monitor socket events. The filtering use case relates to the origins of BPF. When observing the network we want to see only a portion of network traffic, for example all traffic from a troublesome system. Filters are used to describe the traffic we want to see, and ideally we want it to be fast, and we want to give users an open-ended set of filtering options. But we have a problem; we want to throw away unneeded data as early as possible, and to do that we need to filter in kernel context. Consider the alternative to an in-kernel solution – incurring the cost of copying packets to user-space and filtering there. That would be very expensive, especially if we only want to see a portion of the network traffic and throw away the rest.

To achieve this, a safe mini-language was invented to translate high-level filters into a bytecode program that the kernel can use (termed classic BPF, cBPF). The aim of the language was to support a flexible set of filtering options while being fast and safe. Filters written in this assembly-like language could be pushed by userspace programs such as tcpdump to accomplish filtering in-kernel. See

https://www.tcpdump.org/papers/bpf-usenix93.pdf

…for the classic paper describing this work. Modern eBPF took these concepts, expanded the register and instruction set, added data structures called maps, hugely expanded the kinds of events we can attach to, and much more!

For socket filtering, the common case is to attach to a raw socket (SOCK_RAW), and in fact you’ll notice most programs that do socket filtering have a line like this:

    s = socket(AF_PACKET,SOCK_RAW,htons(ETH_P_ALL));

Creating such a socket, we specify the domain (AF_PACKET), socket type (SOCK_RAW) and protocol (all packet types). In the Linux kernel, receive of raw packets is implemented by the raw_local_deliver() function. It is called by ip_local_deliver_finish(), just prior to calling the relevant IP protocol’s handler, which is where the packet is passed to TCP, UDP, ICMP etc. So at this point the traffic has not been associated with a specific socket; that happens later, when the IP stack figures out the mapping from packet to layer 4 protocol, and then to the relevant socket (if any). You can see the cBPF bytecodes generated by tcpdump by using the -d option. Here I want to run tcpdump on the wlp4s0 interface, filtering TCP traffic only:

# tcpdump -i wlp4s0 -d 'tcp'
(000) ldh      [12]
(001) jeq      #0x86dd          jt 2    jf 7
(002) ldb      [20]
(003) jeq      #0x6             jt 10    jf 4
(004) jeq      #0x2c            jt 5    jf 11
(005) ldb      [54]
(006) jeq      #0x6             jt 10    jf 11
(007) jeq      #0x800           jt 8    jf 11
(008) ldb      [23]
(009) jeq      #0x6             jt 10    jf 11
(010) ret      #65535
(011) ret      #0

Without much deep knowledge we can get a feel for what’s happening here. On line 000 we load the half-word at offset 12 of the ether header: the ether header protocol type. On line 001, we jump to 002 if it matches ETH_P_IPV6 (0x86dd) (jt 2); otherwise we jump to 007 (jf 7) to handle the IPv4 case.

Let’s look at the IPv6 case first. On line 003 we jump to 010 – success – if the IPv6 protocol (offset + 20) is 6 (IPPROTO_TCP); line 010 returns 65535, the maximum length, so we’re accepting the packet. Otherwise we jump to 004. Here we compare to 0x2c, which indicates there’s an IPv6 fragment header. If that’s true we check whether the fragment header (offset 54) specifies a next protocol of IPPROTO_TCP, jumping to 010 (success) if so and 011 (failure) otherwise. Returning 0 means dropping the packet for filtering purposes.

Handling IPv4 is simpler; on 007 (arrived at via “jf” on 001), we check for ETH_P_IPV4 and, if found, we verify that the IPPROTO is TCP. And we’re done! Remember though this is cBPF; eBPF has an extended instruction/op set similar to x86_64 and additional registers.

One other thing to note – socket filtering is distinct from netfilter-based filtering. Netfilter defines its own set of hooks with NF_HOOK() definitions, which netfilter-based technologies such as iptables can use to filter traffic also. You might think – couldn’t we use eBPF there too? And you’d be right! bpfilter is intended to replace the iptables backend in more recent Linux kernels.

So with all that in mind, let’s return to examining the socket-related program types.

1.1 BPF_PROG_TYPE_SOCKET_FILTER

  • What do I do with it? The filtering actions include dropping packets (if the program returns 0) or trimming packets (if the program returns a length less than the original). See sk_filter_trim_cap() and its call to bpf_prog_run_save_cb(). Note that we’re not trimming or dropping the original packet, which would still reach the intended socket intact; we’re working with a copy of the packet metadata which raw sockets can access for observability. In addition to filtering packet flow to our socket, we can also do things that have side-effects; for example collecting statistics in BPF maps.

  • How do I attach my program? BPF programs can be attached to sockets via the SO_ATTACH_BPF setsockopt(), which passes in a file descriptor to the program.

  • What context is provided? A pointer to the struct __sk_buff containing packet metadata/data. This structure is defined in include/uapi/linux/bpf.h, and includes key fields from the real sk_buff. The bpf verifier converts access to valid __sk_buff fields into offsets into the “real” sk_buff; see https://lwn.net/Articles/636647/ for more details.

  • When does it run? Socket filters run for receive in sock_queue_rcv_skb() which is called by various protocols (TCP, UDP, ICMP, raw sockets etc) and can be used to filter inbound traffic.

To give a sense for what programs look like, here we will create a filter that trims packet data on the basis of protocol type: for IPv4 TCP, let’s grab the IPv4 + TCP header only, while for UDP, we’ll take the IPv4 and UDP header only. To keep this example simple we won’t deal with IPv4 options, so in all other cases we return 0 (drop packet).

#include <linux/bpf.h>
#include <linux/in.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include "bpf_helpers.h"

#ifndef offsetof
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
#endif

/*
 * We are only interested in TCP/UDP headers, so drop every other protocol
 * and trim packets after the TCP/UDP header by returning length of
 * ether header + IPv4 header + TCP/UDP header.
 */

SEC("socket")
int bpf_prog1(struct __sk_buff *skb)
{
        int proto = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
        int size = ETH_HLEN + sizeof(struct iphdr);

        switch (proto) {
        case IPPROTO_TCP:
             size += sizeof(struct tcphdr);
             break;
        case IPPROTO_UDP:
             size += sizeof(struct udphdr);
             break;
        default:
             size = 0;
             break;
        }
        return size;
}

char _license[] SEC("license") = "GPL";

This program can be compiled into BPF bytecode using LLVM/clang by specifying an arch of “bpf”, and once that is done it will contain an object with an ELF section called “socket”. That is our program. The next step is to use the BPF system call to assign a file descriptor to the program, then attach it to the socket. In samples/bpf, you can see that bpf_load.c scans the ELF sections, and sections with names prefixed by “socket” are recognized as BPF_PROG_TYPE_SOCKET_FILTER programs. If you’re adding a sample I’d recommend including bpf_load.h so you can just call load_bpf_file() on your BPF program. For example, in samples/bpf/sockex1_user.c we take the filename of our program (sockex1) and load the associated BPF program, sockex1_kern.o. Then we open a raw socket to loopback (lo) and attach the program there:

        snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
        if (load_bpf_file(filename)) {
                printf("%s", bpf_log_buf);
                return 1;
        }
        sock = open_raw_sock("lo");
        assert(setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, prog_fd,
                          sizeof(prog_fd[0])) == 0);
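
For reference, the compile step that produces an object like sockex1_kern.o typically looks something like the following; exact flags vary with the LLVM version and kernel include paths:

$ clang -O2 -target bpf -c sockex1_kern.c -o sockex1_kern.o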

1.2 BPF_PROG_TYPE_SOCK_OPS

  • What do I do with it? Attach a BPF program to catch socket operations such as connection establishment, retransmit timeout etc. Once caught, options can also be set via bpf_setsockopt(), so for example on passive establishment of a connection from a system not on the same subnet, we could lower the MTU so we won’t have to worry about intermediate routers fragmenting packets. Programs can return success (0) or failure (a negative value) and a reply value can be set to indicate the desired value for a socket option (e.g. TCP rwnd). See https://lwn.net/Articles/727189/ for full details, and look for tcp_call_bpf()’s inline definition in include/net/tcp.h to see how TCP handles execution of such programs. Another use case is for sockmap updates in combination with BPF_PROG_TYPE_SK_SKB programs; the bpf_sock_ops struct pointer passed into the BPF_PROG_TYPE_SOCK_OPS program is used to update the sockmap, associating a value for that socket. Later sk_skb programs can reference those values to specify which socket to redirect to via bpf_sk_redirect_map() calls. If this sounds confusing, I’d recommend taking a look at the code in samples/sockmap; a minimal sock_ops program is also sketched at the end of this subsection.

  • How do I attach my program? It is attached to a cgroup file descriptor using BPF_CGROUP_SOCK_OPS attach type.

  • What context is provided? The argument provided is the context, a struct bpf_sock_ops *. The op field specifies the operation: BPF_SOCK_OPS_RWND_INIT, BPF_SOCK_OPS_TCP_CONNECT_CB, etc. The reply field can be used to indicate to the caller a new value for a parameter set.

/* User bpf_sock_ops struct to access socket values and specify request ops
 * and their replies.
 * Some of this fields are in network (bigendian) byte order and may need
 * to be converted before use (bpf_ntohl() defined in samples/bpf/bpf_endian.h).
 * New fields can only be added at the end of this structure
 */
struct bpf_sock_ops {
    __u32 op;
    union {
        __u32 reply;
        __u32 replylong[4];
    };
    __u32 family;
    __u32 remote_ip4;    /* Stored in network byte order */
    __u32 local_ip4;    /* Stored in network byte order */
    __u32 remote_ip6[4];    /* Stored in network byte order */
    __u32 local_ip6[4];    /* Stored in network byte order */
    __u32 remote_port;    /* Stored in network byte order */
    __u32 local_port;    /* stored in host byte order */
};
  • When does it run? As per the above article, unlike other BPF program types that expect to be called at a particular place in the codebase, SOCK_OPS programs can be called at different places and use an “op” field to indicate the context. See include/uapi/linux/bpf.h for the enumerated BPF_SOCK_OPS_* definitions, but they include events like retransmit timeout, connection establishment etc.
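
To make this concrete, here is a minimal sketch of a sock_ops program, loosely modeled on the TCP examples in samples/bpf; it answers only the BPF_SOCK_OPS_RWND_INIT op, and the reply value chosen is purely illustrative:

#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("sockops")
int bpf_sockops_sample(struct bpf_sock_ops *skops)
{
        int rv = -1;    /* -1 tells the caller we have no opinion */

        if (skops->op == BPF_SOCK_OPS_RWND_INIT)
                rv = 40;        /* illustrative initial receive window value */

        skops->reply = rv;
        return 1;       /* returning 1 follows the samples/bpf convention for success */
}
char _license[] SEC("license") = "GPL";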

1.3 BPF_PROG_TYPE_SK_SKB

  • What do I do with it? Allows users to access skb and socket details such as port and IP address with a view to supporting redirect of skbs between sockets. See https://lwn.net/Articles/731133/ . This functionality is used in conjunction with a sockmap – a special-purpose BPF map that contains references to socket structures and associated values. sockmaps are used to support redirection. The program is attached and the bpf_sk_redirect_map() helper can be used to carry out the redirection. The general approach is that we catch socket creation events with sock_ops BPF programs, associate values with the sockmap for these, and then use data at the sk_skb instrumentation points to inform socket redirection – this is termed the verdict, and the program for this is attached to the sockmap via BPF_SK_SKB_STREAM_VERDICT. The verdict can be __SK_DROP, __SK_PASS, or __SK_REDIRECT (a bare-bones verdict program is sketched at the end of this subsection). Another use case for this program type is in the strparser framework (https://www.kernel.org/doc/Documentation/networking/strparser.txt). BPF programs can be used to parse message streams via callbacks for read operations, verdict and read completion. TLS and KCM use stream parsing.

  • How do I attach my program? A redirection program is attached to a sockmap as BPF_SK_SKB_STREAM_VERDICT; it should return the result of bpf_sk_redirect_map(). A strparser program is attached via BPF_SK_SKB_STREAM_PARSER and should return the length of data parsed.

  • What context is provided? A pointer to the struct __sk_buff containing packet metadata/data. However, more fields are accessible to the sk_skb program type. The extra set of fields available are documented in include/uapi/linux/bpf.h like so:

/* Accessed by BPF_PROG_TYPE_sk_skb types from here to ... */
__u32 family;
__u32 remote_ip4;   /* Stored in network byte order */
__u32 local_ip4;    /* Stored in network byte order */
__u32 remote_ip6[4];    /* Stored in network byte order */
__u32 local_ip6[4]; /* Stored in network byte order */
__u32 remote_port;  /* Stored in network byte order */
__u32 local_port;   /* stored in host byte order */
/* ... here. */

So from the above alone we can see we can gather information about the socket, since the above represents the key information that identifies the socket uniquely (protocol is already available in the globally-accessible portion of the struct __sk_buff).

  • When does it run? A stream parser can be attached to a socket via BPF_SK_SKB_STREAM_PARSER attachment to a sockmap, and the parser runs on socket receive via smap_parse_func_strparser() in kernel/bpf/sockmap.c . BPF_SK_SKB_STREAM_VERDICT also attaches to the sockmap, and is run via smap_verdict_func().
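
As a minimal sketch (and only that), a verdict program in the style of samples/sockmap might look like the following; note that the bpf_sk_redirect_map() helper signature has varied across kernel versions, and the form below follows the current bpf-helpers(7) man page:

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") sock_map = {
        .type = BPF_MAP_TYPE_SOCKMAP,
        .key_size = sizeof(int),
        .value_size = sizeof(int),
        .max_entries = 2,
};

SEC("sk_skb")
int bpf_verdict(struct __sk_buff *skb)
{
        /* Redirect everything to the socket stored at index 0 of the sockmap;
         * a real program would derive the key from skb/socket fields.
         */
        return bpf_sk_redirect_map(skb, &sock_map, 0, 0);
}
char _license[] SEC("license") = "GPL";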

2. tc (traffic control) subsystem programs

Next let’s examine the program type related to the TC kernel packet scheduling subsystem. See the tc(8) manpage for a general introduction, and tc-bpf(8) for BPF specifics.

2.1 tc_cls_act : qdisc classifier

  • What do I do with it? tc_cls_act allows us to use BPF programs as classifiers and actions in tc, the Linux QoS subsystem. What’s even better is the tc(8) command has eBPF support also, so we can directly load BPF programs as classifiers and actions for inbound (ingress) and outbound (egress) traffic. See http://man7.org/linux/man-pages/man8/tc-bpf.8.html for a description of how to use tc’s BPF functionality. tc programs can classify, modify, redirect or drop packets.

  • How do I attach my program? tc(8) can be used; see tc-bpf(8) for details. The basics: we create a “clsact” qdisc for a network device, and add ingress and egress classifiers/actions by specifying the BPF object and relevant ELF section (a matching minimal classifier is sketched at the end of this subsection). For example, to add an ingress classifier to eth0 in ELF section my_elf_sec from myprog_kernel.o (a bpf-bytecode-compiled object file):

# tc qdisc add dev eth0 clsact
# tc filter add dev eth0 ingress bpf da obj myprog_kernel.o sec my_elf_sec
  • What context is provided? A pointer to struct __sk_buff packet metadata/data.

  • When does it get run? As mentioned above, a classifier qdisc must be added, and once it is we can attach BPF programs to classify inbound and outbound traffic. Implementation-wise, act_bpf.c and cls_bpf.c implement action/classifier modules. On ingress/egress, sch_handle_ingress()/sch_handle_egress() call tcf_classify(). In the case of ingress, we do classification via the core network interface receive function, so we are getting the packet after the driver has processed it but before IP etc see it. On egress, the filtering is done prior to submitting to the device queue for transmit.
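
For illustration, a do-nothing classifier matching the my_elf_sec section used above could be as small as this sketch; the return codes are the TC_ACT_* values from linux/pkt_cls.h:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include "bpf_helpers.h"

SEC("my_elf_sec")
int cls_entry(struct __sk_buff *skb)
{
        /* TC_ACT_SHOT would drop the packet; TC_ACT_OK passes it along unchanged */
        return TC_ACT_OK;
}
char _license[] SEC("license") = "GPL";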

3. xdp: the eXpress Data Path

The key design goal for XDP is to introduce programmability in the network datapath. The aim is to provide the XDP hooks as close to the device as possible (before the OS has created sk_buff metadata) to maximize performance while supporting a common infrastructure across devices. To support XDP like this requires driver changes. For an example see drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c. A bpf net device op (ndo_bpf) is added. For bnxt it supports XDP_SETUP_PROG and XDP_QUERY_PROG actions; the former configures the device for XDP, reserving rings and setting the program as active. The latter returns the BPF program id. BPF-specific transmit and receive functions are provided and called by the real send/receive functions if needed.

3.1 BPF_PROG_TYPE_XDP

  • What do I do with it? XDP allows access to packet data as early as possible, before packet metadata (struct sk_buff) has been assigned. Thus it is a useful place to do DDoS mitigation or load balancing, since such activities can often avoid the expensive overhead of sk_buff allocation. XDP is all about supporting run-time programming of the kernel via BPF hooks, but by working in concert with the kernel itself; i.e. not a kernel bypass mechanism. Actions supported include XDP_PASS (pass into network processing as usual), XDP_DROP (drop), XDP_TX (transmit) and XDP_REDIRECT. See include/uapi/linux/bpf.h for the “enum xdp_action”. A minimal program sketch follows at the end of this subsection.

  • How do I attach my program? Via netlink socket message. A netlink socket – socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE) – is created and bound, and then we send a netlink message of type NLA_F_NESTED | 43 ; this specifies XDP message. The message contains the BPF fd, the interface index (ifindex). See samples/bpf/bpf_load.c for an example.

  • What context is provided? An xdp metadata pointer; struct xdp_md * . XDP metadata is deliberately lightweight; from include/uapi/linux/bpf.h:

/* user accessible metadata for XDP packet hook
 * new fields must be added to the end of this structure
 */
struct xdp_md {
        __u32 data;
        __u32 data_end;
};
  • When does it get run? “Real” XDP is implemented at the driver level, and transmit/receive ring resources are set aside for XDP usage. For cases where drivers do not support XDP, there is the option of using “generic” XDP, which is implemented in net/core/dev.c. The downside of this is we do not bypass skb allocation, it just allows us to use XDP for such devices also.
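
As a flavor of the programming model, here is a minimal sketch that drops runt frames and passes everything else; the verifier insists on the bounds check before any packet data is dereferenced:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include "bpf_helpers.h"

SEC("xdp")
int xdp_min_prog(struct xdp_md *ctx)
{
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Drop anything too short to hold an Ethernet header */
        if (data + sizeof(struct ethhdr) > data_end)
                return XDP_DROP;
        return XDP_PASS;
}
char _license[] SEC("license") = "GPL";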

4. kprobes, tracepoints and perf events

kprobes, tracepoints and perf events all provide kernel instrumentation. kprobes – https://www.kernel.org/doc/Documentation/kprobes.txt – allow instrumentation of specific functions – entry of a function can be monitored via a kprobe, along with most instructions within a function, or entry/return can be instrumented via a kretprobe. When one of these probes is enabled, the code at the enable point is saved, and replaced with a breakpoint instruction. When this breakpoint is hit a trap instruction is generated, registers are saved and we branch to the relevant instrumentation handler. For example, kprobes are handled by kprobe_dispatcher() which gets the address of the kprobe and register context as arguments. kretprobes are implemented via kprobes; a kprobe fires on entry and modifies the return address, saving the original and replacing it with the location of the instrumentation handler. Tracepoints – https://www.kernel.org/doc/Documentation/trace/tracepoints.rst – are similar, but rather than being enabled at particular instructions, they are explicitly marked at sites in code, and if enabled can be used to collect debugging information at those sites of interest. The same tracepoint can be declared in multiple places; for example trace_drv_return_int() is called in multiple places in net/mac80211/driver-ops.c .

Perf events – https://perf.wiki.kernel.org/index.php/Main_Page – are the basis for eBPF support for these program types. BPF essentially piggy-backs on the existing infrastructure for event sampling, allowing us to attach programs to perf events of interest, which include kprobes, uprobes, tracepoints etc as well has other software events, and indeed hardware events can be monitored too.

These instrumentation points are what gives BPF the capability to be a general-purpose tracing tool as well as a means for supporting the original networking-centric use cases like socket filtering.

4.1 BPF_PROG_TYPE_KPROBE

  • What do I do with it? Instrument code in any kernel function (bar a few exceptions) via kprobe, or instrument entry/return via kretprobe. k[ret]probe_perf_func() executes a BPF program attached to the probe point. Note that this program type can also be used to attach to u[ret]probes – see https://www.kernel.org/doc/Documentation/trace/uprobetracer.txt for details. An end-to-end sketch appears at the end of this subsection.

  • How do I attach my program? When the kprobe is created via sysfs, it has an id associated with it, stored in /sys/kernel/debug/tracing/events/kprobes/<probename>/id (uprobes and [uk]retprobes get ids in the same way). https://www.kernel.org/doc/Documentation/trace/kprobetrace.txt contains details on how to create a kprobe using sysfs. For example, to create a probe called “myprobe” on entry to tcp_retransmit_skb() and retrieve its id:

# echo 'p:myprobe tcp_retransmit_skb' > /sys/kernel/debug/tracing/kprobe_events
# cat /sys/kernel/debug/tracing/events/kprobes/myprobe/id
2266

We can use that probe id to open a perf event, enable it, and set the BPF program for that perf event to be our program. See samples/bpf/bpf_load.c in the load_and_attach() function for how this can be done for k[ret]probes. The code might look something like this:

        struct perf_event_attr attr;
        int eventfd, programfd;
        int probeid;

        /* Load BPF program and assign programfd to it; and get probeid of probe from sysfs */
        memset(&attr, 0, sizeof(attr));  /* zero the attribute struct so unused fields don't confuse the kernel */
        attr.type = PERF_TYPE_TRACEPOINT;
        attr.sample_type = PERF_SAMPLE_RAW;
        attr.sample_period = 1;
        attr.wakeup_events = 1;
        attr.config = probeid;
        eventfd = sys_perf_event_open(&attr, -1, 0, programfd, 0);
        if (eventfd < 0)
                return -errno;
        if (ioctl(eventfd, PERF_EVENT_IOC_ENABLE, 0)) {
                close(eventfd);
                return -errno;
        }
        if (ioctl(eventfd, PERF_EVENT_IOC_SET_BPF, programfd)) {
                close(eventfd);
                return -errno;
        }
  • What context is provided? A struct pt_regs *ctx, from which the registers can be accessed. Much of this is platform-specific, but some general-purpose functions exist such as regs_return_value(regs), which returns the value of the register that holds the function return value (regs->ax on x86).

  • When does it get run? When the probe is enabled and breakpoint is hit, k[ret]probe_perf_func() executes a BPF program attached to the probe point via trace_call_bpf(). Similar story for u[ret]probe_perf_func().
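
Pulling the pieces together, a kernel-side kprobe program in the samples/bpf style might look like this minimal sketch, which simply counts tcp_retransmit_skb() invocations in an array map:

#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <linux/version.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") retransmit_count = {
        .type = BPF_MAP_TYPE_ARRAY,
        .key_size = sizeof(__u32),
        .value_size = sizeof(__u64),
        .max_entries = 1,
};

SEC("kprobe/tcp_retransmit_skb")
int trace_retransmit(struct pt_regs *ctx)
{
        __u32 key = 0;
        __u64 *val = bpf_map_lookup_elem(&retransmit_count, &key);

        if (val)
                __sync_fetch_and_add(val, 1);   /* atomically bump the counter */
        return 0;
}
char _license[] SEC("license") = "GPL";
__u32 _version SEC("version") = LINUX_VERSION_CODE;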

4.2 BPF_PROG_TYPE_TRACEPOINT

  • What do I do with it? Instrument tracepoints in kernel code. Tracepoints can be enabled via sysfs as is the case with kprobes, and in a similar way. The list of trace events can be seen under /sys/kernel/debug/tracing/events.

  • How do I attach my program? Each tracepoint visible under /sys/kernel/debug/tracing/events has an id associated with it. We can use that id to open a perf event, enable it, and set the BPF program for that perf event to be our program. See samples/bpf/bpf_load.c in the load_and_attach() function for how this can be done for tracepoints; the above code snippet for kprobes works for tracepoints also. As an example, here we retrieve the id of the net/net_dev_xmit tracepoint:

# cat /sys/kernel/debug/tracing/events/net/net_dev_xmit/id
2270
  • What context is provided? The context provided by the specific tracepoint; arguments and data types are associated with the tracepoint definition.

  • When does it get run? When the tracepoint is enabled and hit, perf_trace_##call() (see the definition in include/trace/perf.h) calls perf_trace_run_bpf_submit() which will invoke the BPF program via trace_call_bpf().

4.3 BPF_PROG_TYPE_PERF_EVENT

  • What do I do with it? Instrument software and hardware perf events. These include events like syscalls, timer expiry, sampling of hardware events, etc. Hardware events include PMU events (processor monitoring unit) which tell us things like how many instructions completed etc. Perf event monitoring can be targeted at a specific process or group, processor, and a sample period can be specified for profiling.

  • How do I attach my program? A similar model as per the above; we call perf_event_open() with an attribute set, enable the perf event via the PERF_EVENT_IOC_ENABLE ioctl(), and set the BPF program via the PERF_EVENT_IOC_SET_BPF ioctl(). For a software perf event sampling example, see these snippets from samples/bpf/sampleip_user.c:

        ...
        struct perf_event_attr pe_sample_attr = {
                .type = PERF_TYPE_SOFTWARE,
                .freq = 1,
                .sample_period = freq,
                .config = PERF_COUNT_SW_CPU_CLOCK,
                .inherit = 1,
        };
                ...
        pmu_fd[i] = sys_perf_event_open(&pe_sample_attr, -1 /* pid */, i,
                                        -1 /* group_fd */, 0 /* flags */);
        if (pmu_fd[i] < 0) {
                fprintf(stderr, "ERROR: Initializing perf sampling\n");
                return 1;
        }

        assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_SET_BPF,
                     prog_fd[0]) == 0);
        assert(ioctl(pmu_fd[i], PERF_EVENT_IOC_ENABLE, 0) == 0);
        ...
  • What context is provided? A struct bpf_perf_event_data *, which looks like this:
struct bpf_perf_event_data {
    struct pt_regs regs;
     __u64 sample_period;
};
  • When does it get run? Depends on the perf event firing and the sample rate chosen, specified by freq and sample_period fields in the perf event attribute structure.

5. cgroups-related program types

CGroups are used to handle resource allocation, allowing or denying access to system resources such as CPU, network bandwidth etc for groups of processes. One key use case for cgroups is containers; a container’s resource access is limited via cgroups while its activities are isolated by the various classes of namespace (network namespace, process ID namespace etc). In the BPF context, we can create eBPF programs that allow or deny access. In include/linux/bpf-cgroup.h we can see definitions for execution of socket/skb programs, where __cgroup_bpf_run_filter_skb is called wrapped in a check that cgroup BPF is enabled:

#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)                  \
({                                                                 \
    int __ret = 0;                                                 \
    if (cgroup_bpf_enabled)                                        \
        __ret = __cgroup_bpf_run_filter_skb(sk, skb,               \
                            BPF_CGROUP_INET_INGRESS);              \
                                                                   \
    __ret;                                                         \
})

#define BPF_CGROUP_RUN_SK_PROG(sk, type)                           \
({                                                                 \
    int __ret = 0;                                                 \
    if (cgroup_bpf_enabled) {                                      \
        __ret = __cgroup_bpf_run_filter_sk(sk, type);              \
    }                                                              \
    __ret;                                                         \
})

If cgroups are enabled, we attach our program to the cgroup and it will be executed at the relevant hook points. To get an idea of the full list of hooks, consult include/uapi/linux/bpf.h and examine the enumerated type “bpf_attach_type” for BPF_CGROUP_* definitions.

5.1 BPF_PROG_TYPE_CGROUP_SKB

  • What do I do with it? Allow or deny network access on IP egress/ingress (BPF_CGROUP_INET_INGRESS/BPF_CGROUP_INET_EGRESS). BPF programs should return 1 to allow access. Any other value results in the function __cgroup_bpf_run_filter_skb() returning -EPERM, which will be propagated to the caller such that the packet is dropped. A bare-bones program of this type is sketched at the end of this subsection.

  • How do I attach my program? The program is attached to a specific cgroup’s file descriptor.

  • What context is provided? The relevant skb.

  • When does it get run? For inet ingress, sk_filter_trim_cap() (see above) contains a call to BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb); if a non-zero value is returned, the error is propagated to the caller (e.g. __sk_receive_skb()) and the packet is discarded and freed. A similar approach is taken on egress, but in ip[6]_finish_output().
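
A program of this type can be tiny; the sketch below allows all traffic and exists only to show the return-value convention (the section name and loader conventions vary by tooling):

#include <linux/bpf.h>
#include "bpf_helpers.h"

SEC("cgroup/skb")
int cg_skb_allow(struct __sk_buff *skb)
{
        /* Return 1 to allow; any other value becomes -EPERM for the caller */
        return 1;
}
char _license[] SEC("license") = "GPL";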

5.2 BPF_PROG_TYPE_CGROUP_SOCK

  • What do I do with it? Allow or deny network access at various socket-related events (BPF_CGROUP_INET_SOCK_CREATE, BPF_CGROUP_SOCK_OPS). As above, BPF programs should return 1 to allow access. Any other value results in the function __cgroup_bpf_run_filter_sk() returning -EPERM, which will be propagated to the caller such that the socket operation fails.

  • How do I attach my program? The program is attached to a specific cgroup’s file descriptor.

  • What context is provided? The relevant socket (sk).

  • When does it get run? At socket creation time, in inet_create() we call BPF_CGROUP_RUN_PROG_INET_SOCK() with the socket as argument, and if that function fails, the socket is released.

6. Lightweight tunnel program types.

Lightweight tunnels – https://lwn.net/Articles/650778/ – are a simple way to do tunneling by attaching encapsulation instructions to routes. The examples in the patch description make things clearer: iproute examples (without BPF):

  VXLAN:
  ip route add 40.1.1.1/32 encap vxlan id 10 dst 50.1.1.2 dev vxlan0

  MPLS:
  ip route add 10.1.1.0/30 encap mpls 200 via inet 10.1.1.1 dev swp1

So we’re telling Linux for example that for traffic to 40.1.1.1/32 addresses, we want to encapsulate with a VXLAN ID of 10 and destination IPv4 address of 50.1.1.2. BPF programs can do the encapsulation on outbound/transmit (inbound packets are read-only). See https://lwn.net/Articles/705609/ for more details. Similarly to tc, iproute eBPF support allows us to attach the eBPF program ELF section directly:

ip route add 192.168.253.2/32 \
     encap bpf out obj lwt_len_hist_kern.o section len_hist \
     dev veth0

6.1 BPF_PROG_TYPE_LWT_IN

  • What do I do with it? Examine inbound packets for lightweight tunnel de-encapsulation.

  • How do I attach my program? Via “ip route add”.

# ip route add <route+prefix> encap bpf in obj <bpf object file.o> section <ELF section> dev <device>
  • What context is provided? Pointer to the sk_buff.

  • When does it get run? Via lwtunnel_input() ; that function supports a number of encapsulation types including BPF. The BPF case runs bpf_input in net/core/lwt_bpf.c with redirection disallowed.

6.2 BPF_PROG_TYPE_LWT_OUT

  • What do I do with it? Examine outbound packets taking specific destination routes for lightweight tunnels. Encapsulation itself is forbidden at this hook; that is only possible at the LWT_XMIT hook below.

  • How do I attach my program? Via “ip route add”:

# ip route add <route+prefix> encap bpf out obj <bpf object file.o> section <ELF section> dev <device>
  • What context is provided? Pointer to the struct __sk_buff

  • When does it get run? Via lwtunnel_output().

6.3 BPF_PROG_TYPE_LWT_XMIT

  • What do I do with it? Implement encapsulation/redirection for lightweight tunnels on transmit.

  • How do I attach my program? Via “ip route add”

# ip route add <route+prefix> encap bpf xmit obj <bpf object file.o> section <ELF section> dev <device>
  • What context is provided? Pointer to the struct __sk_buff

  • When does it get run? Via lwtunnel_xmit().

Summary

So hopefully this roundup of program types was useful. We can see that BPF’s safe in-kernel programmable environment can be used in all sorts of interesting ways! The next thing we will talk about is what BPF helper functions are available within the various program types.

Learning more about BPF

Thanks for reading this installment of our series on BPF. We hope you found it educational and useful. Questions or comments? Use the comments field below!

Stay tuned for the next installment in this series, BPF Helper Functions.

This article originally appeared at Oracle Developers Blog

How Service Meshes Are a Missing Link for Microservices

It is now widely assumed that making the shift to microservices and cloud native environments will only lead to great things — and yet. As the rush to the cloud builds in momentum, many organizations are rudely waking up to more than a few difficulties along the way, such as how Kubernetes and serverless platforms on offer often remain a work in progress.

“We are coming to all those communities and basically pitching them to move, right? We tell them, ‘look, monolithic is very complicated — let’s move to microservices,’ so, they are working very, very hard to move but then they discover that the tooling is not that mature,” Idit Levine, founder and CEO of Solo.io, said. “And actually, there are many gaps in the tooling that they had or used it before and now they’re losing this functionality. For instance, like logging or like debugging microservices is a big problem and so on.” …

One of the first things organizations notice when migrating away from monolithic to microservices environments is how “suddenly you’re losing your observability for all of the applications,” Levine said. “That’s one thing that [service meshes] is really big in solving.”

Read more at The New Stack

Community Demos at ONS to Highlight LFN Project Harmonization and More

A little more than one year since LF Networking (LFN) came together, the project continues to demonstrate strategic growth, with Telstra coming on in November, and the umbrella project representing ~70% of the world’s mobile subscribers. Working side by side with service providers in the technical projects has been critical to ensure that work coming out of LFN is relevant and useful for meeting their requirements. A small sample of these integrations and innovations will be on display once again in the LF Networking Booth at the Open Networking Summit Event, April 3-5 in San Jose, CA. We’re excited to be back in the Bay Area this year and are offering Day Pass and Hall Pass Options. Register today!

Due to demand from the community, we’ve expanded the number of demo stations from 8 to 10, covering many areas in the open networking stack — with projects from within the LF Networking umbrella (FD.io, OpenDaylight, ONAP, OPNFV, and Tungsten Fabric), as well as adjacent projects such as Consul, Docker, Envoy, Istio, Ligato, Kubernetes, OpenStack, and SkyDive. We welcome you to come spend some time talking to and learning from these experts in the technical community.

The LFN promise of project harmonization will be on display from members iConectiv, Huawei, and Vodafone, who will highlight ONAP’s onboarding process for testing Virtual Network Functions (VNFs), integrating with OPNFV’s Dovetail project, supporting the expanding OPNFV Verification Program (OVP), and paving the way for future compliance and certification testing. Another example of project harmonization is the use of common infrastructure for testing across LFN projects, and the folks from UNH-IOL will be on hand to walk users through the Lab-as-a-Service offering that is facilitating development and testing by hosting hardware and providing access to the community.

Other focus areas include network slicing, service mesh, SDN performance, testing, integration, and analysis.

Listed below is the full networking demo lineup, and you can read detailed descriptions of each demo here.

  • Demonstrating Service Provider Specific Test & Verification Requirements (OPNFV, ONAP) – Proposed By Vodafone and Demonstrated by Vodafone/Huawei/iConectiv
  • Integration of ODL and Network Service Mesh (OpenDaylight, Network Service Mesh) – Presented by Lumina Networks
  • SDN Performance Comparison Between Multi Data-paths (Tungsten Fabric, FD.io) – Presented by ATS, and Sofioni Networks
  • ONAP Broadband Service Use Case (ONAP, OPNFV) – Presented by Swisscom, Huawei, and Nokia
  • OpenStack Rolling Upgrade with Zero Downtime to Application (OPNFV, OpenStack) – Presented By Nokia and Deutsche Telekom
  • Skydive & VPP Integration: A Topology Analyzer (FD.io, Ligato, Skydive, Docker) – Presented by PANTHEON.tech
  • VSPERF: Beyond Performance Metrics, Towards Causation Analysis (OPNFV) – Presented by Spirent
  • Network Slice with ONAP based Life Cycle Management (ONAP) – Presented by AT&T, and Ericsson
  • Service Mesh and SDN: At-Scale Cluster Load Balancing (Tungsten Fabric, Linux OS, Istio, Consul, Envoy, Kubernetes, HAProxy) – Presented by CloudOps, Juniper and the Tungsten Fabric Community
  • Lab as a Service 2.0 Demo (OPNFV) – Presented by UNH-IOL

We hope to see you at the show! Register today!

This article originally appeared at Linux Foundation

Kubernetes Setup Using Ansible and Vagrant

This blog post describes the steps required to set up a multi node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we require multi node cluster setup?

Multi node Kubernetes clusters offer a production-like environment which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn’t provide the opportunity to work with multi node clusters which can help solve problems or bugs that are related to application design and architecture. For instance, Ops can reproduce an issue in a multi node cluster environment, and Testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, which makes them more agile.

Why use Vagrant and Ansible?

Vagrant is a tool that will allow us to create a virtual environment easily and it eliminates pitfalls that cause the works-on-my-machine phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.

Prerequisites

  • Vagrant should be installed on your machine. Installation binaries can be found here.
  • Oracle VirtualBox can be used as a Vagrant provider, or you can make use of similar providers as described in Vagrant’s official documentation.
  • Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.

Read more at Kubernetes.io

A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

A system administrator’s routine tasks include configuring, maintaining, troubleshooting, and managing servers and networks within data centers. There are numerous tools and utilities in Linux designed for administrative purposes.

In this article, we will review some of the most used command-line tools and utilities for network management in Linux, under different categories. We will explain some common usage examples, which will make network management much easier in Linux.

Network Configuration, Troubleshooting and Debugging Tools

1. ifconfig Command

ifconfig is a command line interface tool for network interface configuration and also used to initialize an interface at system boot time. Once a server is up and running, it can be used to assign an IP Address to an interface and enable or disable the interface on demand.

It is also used to view the status, IP address, hardware/MAC address, and MTU (Maximum Transmission Unit) size of the currently active interfaces. ifconfig is thus useful for debugging or performing system tuning.

Here is an example to display the status of all active network interfaces.
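
$ ifconfig

And, to illustrate the configuration side mentioned above (the interface name and address here are illustrative):

$ ifconfig eth0                                           # status of a single interface
$ sudo ifconfig eth0 192.168.1.10 netmask 255.255.255.0   # assign an IP address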

Read more at Tecmint