
Let your Engineers Choose the License: A Guide

Each open source project and community is unique, and there are social aspects to these communities that may create preferences for particular licensing philosophies (e.g., copyleft or permissive). Engineers working in those communities understand all these issues and are best equipped to choose the proper license based on this knowledge. Mandating certain licenses for code contributions will often conflict with these community norms and result in a reduction in, or outright prohibition of, contributions.

For example, perhaps your organization believes that the latest GPL license (GPLv3) is the best for your company due to its updated provisions. If you mandated GPLv3 for all future contributions vs. GPLv2, you would be prohibited from contributing code to the Linux kernel, since that is a GPLv2 project and will likely remain that way for a very long time. Your engineers, being part of that open source community project, would know that and would automatically choose GPLv2 in the absence of such a mandate.

Bottom line: Enabling engineers to make these decisions is wise and efficient.

To the extent your organization may have to restrict the use of certain licenses (e.g., due to certain intellectual property concerns), this should naturally be part of your guidelines or policy.

Read more at OpenSource.com

Embedded Linux Software Highlights from Embedded World

In my day job at LinuxGizmos, I’ve been neck deep recently in embedded Linux hardware news from the Embedded World show in Nuremberg. There are plenty of new SBCs and compute modules — many based on NXP’s newly shipping i.MX8M Mini — as well as a new Qualcomm Robotics RB3 Platform, more IoT gateways, and Linux-ready chips like ST’s STM32MP1 and Octavo SiP version of the SoC.

Yet, Embedded World has produced some embedded Linux software news, as well. Here we take a brief look at some highlights, including Google open sourcing its Cloud IoT Device SDK, the Linux Foundation launching an ELISA project for open source safety-critical systems, and a new long-term kernel from the Civil Infrastructure Platform project.

In other news, Siemens has spun a Debian-based binary version of Mentor Embedded Linux (MEL), and AMD and Advantech are collaborating with Mentor to develop a machine-learning savvy implementation of MEL. Finally, Wind River announced a “Helix Platform” that combines Wind River Linux and VxWorks, and MontaVista has launched MontaVista Carrier Grade eXpress 2.6.

Google releases open source Device SDK

Google has released a Cloud IoT Device SDK under an open source license, designed to connect microcontroller devices and IoT-oriented Linux gizmos to its Google Cloud IoT platform. The SDK can be considered a lower-end, MCU endpoint-oriented counterpart to its Linux-focused Cloud IoT Edge stack for IoT gateways that integrate Google’s AI-accelerating Cloud TPU chips.

The Cloud IoT Device SDK comprises client libraries written in Embedded C to “enable developers to securely connect, provision, and manage devices with Cloud IoT Core,” says Google. Target devices range from handhelds to low-end smart home devices. OS support includes Zephyr, Mbed OS, FreeRTOS, and POSIX-compliant platforms like Linux. Early partners include Arm, Cypress, Nordic, Espressif, Microchip, and NXP.

The open source release presents an alternative strategy to Google’s proprietary, higher-end Android Things IoT platform. Google recently announced that Android Things would be limited to OEM partners developing smart speakers and displays with Google Assistant.

Linux Foundation launches ELISA safety-critical project

The Linux Foundation, which this week welcomed 34 new members including HP, also announced a project called Enabling Linux in Safety Applications (ELISA) to develop open source tools and processes that help companies build and certify Linux-based safety-critical applications and systems. Targeted applications include robotics, medical, smart factories, transportation, and autonomous cars.

ELISA is building on work done by the SIL2LinuxMP project from the Open Source Automation Development Lab (OSADL), as well as the Linux Foundation’s Real-Time Linux project. Founding ELISA members include Arm, BMW Car IT GmbH, Linutronix, and Toyota, which is a leading member of the LF’s Automotive Grade Linux project. The roster also includes new LF member and robotics manufacturer KUKA.

ELISA project goals include working with certification authorities and standardization bodies “to establish how Linux can be used as a component in safety-critical systems.” The project will develop safety-related reference documentation and use cases, educate and collaborate with the open source community, provide members with incident and hazard monitoring of critical components, and encourage best practices.

CIP releases first SLTS kernel

ELISA is related to the LF’s Civil Infrastructure Platform (CIP) project, which this week announced the release of its promised Super Long Term Support (SLTS) Linux kernel with 64-bit Arm support. The key enhancement of the SLTS kernel is its unprecedented 10-plus years of support. The kernel is also designed for the higher safety, security, and reliability requirements of large infrastructure and industrial applications.

The CIP project also announced two new working groups. The first is a Software Update Working Group led by Toshiba. The second is a Security Working Group led by Renesas, whose new RZ/G2 SoCs are the first to support the SLTS kernel.

Mentor Embedded Linux goes binary

Like Wind River Linux and MontaVista, Mentor Graphics’ Mentor Embedded Linux (MEL) has been one of the leading commercial embedded Linux distros. It is also similarly based on Yocto Project code. Now, almost two years after Siemens acquired Mentor, Siemens PLM Software has announced a new version of MEL that ditches the Yocto foundation for Debian. The distro, which melds MEL with an in-house Debian stack designed for Siemens automation equipment, is available as an “enterprise-class” binary.

Because it can load as a simple binary, the new Siemens enterprise version of MEL is easier to install and use than the Yocto-based version, claims Siemens. (The Yocto version will continue to be available.)

Siemens partner Xilinx is also sold on the binary approach: “By combining the capabilities of an embedded Linux distribution with those from the Debian binary desktop Linux distribution, today’s developers — many of whom have honed their skills in the Linux desktop development — can easily extend those same skills into fully featured embedded systems,” stated Simon George, director of system software and SoC Solution Marketing, Xilinx.

The new Linux solution provides a stable kernel, a robust toolchain, broad community support, secure field updates, and application isolation, says Siemens. It offers up-to-date cloud support and familiar MEL features such as Sourcery Analyzer tools. Improved multi-core support enables heterogeneous systems that also run Mentor’s Nucleus RTOS.

AMD and Advantech collaborate on ML-focused MEL version

In other MEL news, AMD, Advantech, and Mentor announced a customized version of MEL that runs on Advantech’s SOM-5871 compute module based on AMD’s Ryzen Embedded V1000 SoC. The solution will “make it easier for customers to implement machine vision applications within their IoT or edge compute ecosystem, helping to improve efficiency and accuracy of machine vision solutions,” says AMD. The chipmaker hints that the platform will align with the LF’s EdgeX Foundry project for edge computing.

Wind River goes cross-platform with Helix Platform

Wind River, which is no longer owned by Intel, has unveiled the Wind River Helix Virtualization Platform, an umbrella framework that integrates both Wind River Linux and the company’s VxWorks RTOS. The Helix Platform provides an integrated edge compute platform for applications ranging from industrial infrastructure to autonomous driving.

Helix Platform uses the Wind River Hypervisor to enable time and space partitioning that leverages RTOS and virtualization technology, safety functionality, and COTS certification. Linux, VxWorks, and even third-party OSes such as Windows and Android can coexist on multi-processor and multi-core systems, all orchestrated by the common Helix Cloud platform.

MontaVista unveils CGX 2.6

Finally, MontaVista has announced version 2.6 of its MontaVista Carrier Grade eXpress (CGX), the 12th generation of its Carrier Grade Linux certified distribution. Like Wind River Linux and the original MEL, CGX is a commercial embedded distro based on Yocto Project code and aimed at industrial and networking customers.

Due for release in mid-2019 with BSPs for x86 and ARMv8, MontaVista CGX 2.6 is based on Yocto 2.6, Linux kernel 4.19, and a GCC 8.2 toolchain. Highlights include improved security features such as OpenSSL FIPS, OP-TEE/TrustZone, Secure Boot, and SWUpdate.

CGX 2.6 provides protocol support for BLE, 4G/LTE, Zigbee, LoRa, CANbus, Modbus, and Profibus. Cloud support has been updated with APIs for the latest Amazon AWS IoT, Microsoft Azure IoT, Google Cloud IoT, and Arm Mbed Client. Naturally, Kubernetes is also supported.

MontaVista was instrumental in the early development of embedded Linux. It was owned by networking chipmaker Cavium for several years before being spun back out as an independent company when Marvell acquired Cavium. Like its old rival Wind River, MontaVista is once again unhitched and ready for action.

Kubernetes Warms Up to IPv6

There’s a finite number of public IPv4 addresses, and the IPv6 address space was specified to solve this problem some 20 years ago, long before Kubernetes was conceived. But because Kubernetes was originally developed inside Google, and it’s only relatively recently that cloud services like Google Cloud and AWS have started to support IPv6 at all, it started out with only IPv4 support.

As of 2017, there were only 38 million new IPv4 addresses available to be allocated by registrars worldwide (none of those are in the US, so anyone needing more IPv4 addresses has to find someone willing to sell ones they’re not using).

That means even enterprises that are slower to move off IPv4, because they can deal with the address shortage using technologies like NAT, will run into problems, Tim Hockin, principal software engineer at Google Cloud, told The New Stack. “Kubernetes makes very liberal use of IP addresses (one IP per Pod), which simplifies the system and makes it easier to use and comprehend. For very large installations this can be difficult with IPv4.”

Read more at The New Stack

New Elisa Project Focuses on Linux In Safety-Critical Systems

The project is called Elisa, for “Enabling Linux in Safety Applications,” and its aim is to create a shared set of tools and processes for building Linux-based systems that will operate without surprises in situations where failure could cause injury, loss of life, or significant property or environmental damage.

These days computers are being used to perform a long and growing list of tasks that can have serious consequences if something goes wrong. This includes light rail systems where the trains often drive themselves, robotic devices, medical devices, and smart factories where potentially dangerous tasks are directed by single-board computers spitting out X’s and O’s.

Read more at Data Center Knowledge

MariaDB Readies New Enterprise Server

MariaDB Corp announced it’s releasing a new version of its MySQL-compatible database management system (DBMS): MariaDB Enterprise Server 10.4. This new business server comes with more powerful and fine-grained auditing; faster, highly reliable backups for large databases; and end-to-end encryption for all data at rest in MariaDB clusters. This is the MariaDB for demanding companies that want the best possible DBMS.

MariaDB Enterprise Server, which will be released in 2019’s second quarter, remains fully open source. Going forward, it will be the default version for MariaDB Platform customers, whether on-prem or in the cloud. MariaDB CEO Michael Howard wants to make sure that MariaDB users and developers know that MariaDB Community Server is not becoming a second-class citizen.

Read more at ZDNet

All about {Curly Braces} in Bash

At this stage of our Bash basics series, it would be hard not to see some crossover between topics. For example, you have already seen a lot of brackets in the examples we have shown over the past several weeks, but the focus has been elsewhere.

For the next phase of the series, we’ll take a closer look at brackets (curly, curvy, or straight), how to use them, and what they do depending on where you use them. We will also tackle other ways of enclosing things, like when to use quotes, double quotes, and backquotes.

This week, we’re looking at curly brackets or braces: {}.

Array Builder

You have already encountered curly brackets before in The Meaning of Dot. There, the focus was on the use of the dot/period (.), but using braces to build a sequence was equally important.

As we saw then:


echo {0..10}

prints out the numbers from 0 to 10. Using:


echo {10..0}

prints out the same numbers, but in reverse order. And,


echo {10..0..2}

prints every second number, starting with 10 and making its way backwards to 0.

Then,


echo {z..a..2}

prints every second letter, starting with z and working its way backwards until a.

And so on and so forth.

Another thing you can do is combine two or more sequences:


echo {a..z}{a..z}

This prints out all the two letter combinations of the alphabet, from aa to zz.

Is this useful? Well, actually it is. You see, arrays in Bash are defined by putting elements between parentheses () and separating each element using a space, like this:


month=("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec")

To access an element within the array, you use its index within brackets []:


$ echo ${month[3]} # Array indexes start at [0], so [3] points to the fourth item

Apr

You can accept all those brackets, parentheses, and braces on faith for a moment. We’ll talk about them presently.

Notice that, all things being equal, you can create an array with something like this:


letter_combos=({a..z}{a..z})

and letter_combos points to an array that contains all the 2-letter combinations of the entire alphabet.

You can also do this:


dec2bin=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})

This last one is particularly interesting because dec2bin now contains all the binary numbers for an 8-bit register, in ascending order, starting with 00000000, 00000001, 00000010, etc., until reaching 11111111. You can use this to build yourself an 8-bit decimal-to-binary converter. Say you want to know what 25 is in binary. You can do this:


$ echo ${dec2bin[25]}

00011001

Yes, there are better ways of converting decimal to binary as we saw in the article where we discussed & as a logical operator, but it is still interesting, right?

Parameter expansion

Getting back to


echo ${month[3]}

Here the braces {} are not being used as part of a sequence builder, but as a way of triggering parameter expansion. Parameter expansion does what it says on the box: it takes the variable or expression within the braces and expands it to whatever it represents.

In this case, month is the array we defined earlier, that is:


month=("Jan" "Feb" "Mar" "Apr" "May" "Jun" "Jul" "Aug" "Sep" "Oct" "Nov" "Dec")

And, item 3 within the array points to "Apr" (remember: the first index in an array in Bash is [0]). That means that echo ${month[3]}, after the expansion, translates to echo "Apr".

Interpreting a variable as its value is one way of expanding it, but there are a few more you can leverage. You can use parameter expansion to manipulate what you read from a variable, say, by cutting a chunk off the end.

Suppose you have a variable like:


a="Too longgg"

The command:


echo ${a%gg}

chops off the last two gs and prints “Too long”.

Breaking this down,

  • ${...} tells the shell to expand whatever is inside it
  • a is the variable you are working with
  • % tells the shell you want to chop something off the end of the expanded variable (“Too longgg”)
  • and gg is what you want to chop off.

This can be useful for converting files from one format to another. Allow me to explain with a slight digression:

ImageMagick is a set of command-line tools that lets you manipulate and modify images. One of the most useful tools it comes with is convert. In its simplest form, convert allows you to take an image in a certain format and make a copy of it in another format.

The following command takes a JPEG image called image.jpg and creates a PNG copy called image.png:


convert image.jpg image.png

ImageMagick comes pre-installed on most Linux distros. If you can’t find it, look for it in your distro’s software manager.

Okay, end of digression. On to the example:

With variable expansion, you can do the same as shown above like this:


i=image.jpg

convert $i ${i%jpg}png

What you are doing here is chopping off the extension jpg from i and then adding png, making the command convert image.jpg image.png.

You may be wondering how this is more useful than just writing in the name of the file. Well, when you have a directory containing hundreds of JPEG images that you need to convert to PNG, run the following in it:


for i in *.jpg; do convert $i ${i%jpg}png; done

… and, hey presto! All the pictures get converted automatically.

If you need to chop off a chunk from the beginning of a variable, instead of %, use #:


$ a="Hello World!"

$ echo Goodbye${a#Hello}

Goodbye World!
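Incidentally, both % and # remove the shortest match they can find; doubling them up (%% and ##) removes the longest match instead. A quick illustration with a made-up file name:


$ f="archive.tar.gz"

$ echo ${f%.*}    # shortest match from the end: strips ".gz"

archive.tar

$ echo ${f%%.*}   # longest match from the end: strips ".tar.gz"

archive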

There’s quite a bit more to parameter expansion, but a lot of it makes sense only when you are writing scripts. We’ll explore more on that topic later in this series.
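To whet your appetite, though, here are two expansions you will see all the time (both work in any modern Bash): ${#...} gives you the length of a variable’s value, and ${.../.../...} substitutes one string for another:


$ a="Hello World!"

$ echo ${#a}           # length of the string in a

12

$ echo ${a/World/Bash} # replace the first match of "World" with "Bash"

Hello Bash!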

Output Grouping

Meanwhile, let’s finish up with something simple: you can also use { ... } to group the output from several commands into one big blob. The command:


echo "I found all these PNGs:"; find . -iname "*.png"; echo "Within this bunch of files:"; ls > PNGs.txt

will execute all the commands but will only copy into the PNGs.txt file the output from the last ls command in the list. However, doing


{ echo "I found all these PNGs:"; find . -iname "*.png"; echo "Within this bunch of files:"; ls; } > PNGs.txt

creates the file PNGs.txt with everything, starting with the line “I found all these PNGs:“, then the list of PNG files returned by find, then the line “Within this bunch of files:” and finishing up with the complete list of files and directories within the current directory.

Notice that there is a space between the braces and the commands enclosed within them. That’s because { and } are reserved words here: commands built into the shell. They would roughly translate to “group the outputs of all these commands together” in plain English.

Also notice that the list of commands has to end with a semicolon (;) or the whole thing will bork.
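By the way, grouping is not limited to redirecting into a file; you can just as easily pipe the group’s combined output into another command:


$ { echo "First line"; echo "Second line"; } | wc -l

2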

Next Time

In our next installment, we’ll be looking at more things that enclose other things, but of different shapes. Until then, have fun!

Read more:

And, Ampersand, and & in Linux

Ampersands and File Descriptors in Bash

Logical & in Bash

Open Source Maintainers Want to Reduce Application Security Risk

According to Snyk’s “State of Open Source Security Report 2019,” which surveyed over 500 open source users and maintainers, 30 percent of developers who maintain open source (OS) projects are highly confident in their security knowledge, up from 17 percent the year before. In addition, the percentage of OS maintainers that run security audits on their projects has risen twenty percentage points to 74 percent compared to last year’s survey. Yet only 42 percent of maintainers are auditing their code at least once a quarter. This is a problem because the goals for development velocity are so much higher than just a few years ago.

The New Stack and Linux Foundation’s survey of open source leaders found that at more than two-thirds of companies, the average development team was releasing code into production frequently. Other studies are less optimistic and indicate that only about a quarter of companies have reached that level of speed.

Read more at The New Stack

Linux Security: Cmd Provides Visibility, Control Over User Activity

There’s a new Linux security tool you should be aware of — Cmd (pronounced “see em dee”) dramatically modifies the kind of control that can be exercised over Linux users. It reaches way beyond the traditional configuration of user privileges and takes an active role in monitoring and controlling the commands that users are able to run on Linux systems.

Provided by a company of the same name, Cmd focuses on cloud usage. Given the increasing number of applications being migrated into cloud environments that rely on Linux, gaps in the available tools make it difficult to adequately enforce required security. However, Cmd can also be used to manage and protect on-premises systems.

Read more at Network World

Introduction to YAML: Creating a Kubernetes Deployment

There’s an easier and more useful way to use Kubernetes to spin up resources outside of the command line: creating configuration files using YAML. In this article, we’ll look at how YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.

YAML Basics

It’s difficult to escape YAML if you’re doing anything related to many software fields — particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language or YAML Ain’t Markup Language (depending on who you ask), is a human-readable, text-based format for specifying configuration-type information. For example, in this article, we’ll pick apart the YAML definitions for creating first a Pod, and then a Deployment.

Using YAML for K8s definitions gives you a number of advantages, including:

  • Convenience: You’ll no longer have to add all of your parameters to the command line
  • Maintenance: YAML files can be added to source control, so you can track changes
  • Flexibility: You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s), you’re all set. On the other hand, that’s not very likely, unfortunately. Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.
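To make this concrete before we dive in, here is a minimal sketch of what a Pod definition can look like, fed straight to kubectl through a Bash here-document (the pod name, label, and image below are illustrative, not prescribed by Kubernetes):


# Define a single-container Pod inline and hand it to the cluster
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rss-site        # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx      # any public image works here
      ports:
        - containerPort: 80
EOF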

Read more at CNCF

Protecting Code Integrity with PGP — Part 1: Basic Concepts and Tools

Learn PGP basics and best practices in this series of tutorials from our archives. 

In this article series, we take an in-depth look at using PGP to ensure the integrity of software. These articles will provide practical guidelines aimed at developers working on free software projects and will cover the following topics:

  1. PGP basics and best practices

  2. How to use PGP with Git

  3. How to protect your developer accounts

We use the term “Free” as in “Freedom,” but the guidelines set out in this series can also be used for any other kind of software that relies on contributions from a distributed team of developers. If you write code that goes into public source repositories, you can benefit from getting acquainted with and following this guide.

Structure

Each section is split into two areas:

  • The checklist that can be adapted to your project’s needs

  • Free-form list of considerations that explain what dictated these decisions, together with configuration instructions

Checklist priority levels

The items in each checklist include the priority level, which we hope will help guide your decision:

  • (ESSENTIAL) items should definitely be high on the consideration list. If not implemented, they will introduce high risks to the code that gets committed to the open-source project.

  • (NICE) to have items will improve the overall security, but will affect how you interact with your work environment, and probably require learning new habits or unlearning old ones.

Remember, these are only guidelines. If you feel these priority levels do not reflect your project’s commitment to security, you should adjust them as you see fit.

Basic PGP concepts and tools

Checklist

  1. Understand the role of PGP in Free Software Development (ESSENTIAL)

  2. Understand the basics of Public Key Cryptography (ESSENTIAL)

  3. Understand PGP Encryption vs. Signatures (ESSENTIAL)

  4. Understand PGP key identities (ESSENTIAL)

  5. Understand PGP key validity (ESSENTIAL)

  6. Install GnuPG utilities (version 2.x) (ESSENTIAL)

Considerations

The Free Software community has long relied on PGP for assuring the authenticity and integrity of software products it produced. You may not be aware of it, but whether you are a Linux, Mac or Windows user, you have previously relied on PGP to ensure the integrity of your computing environment:

  • Linux distributions rely on PGP to ensure that binary or source packages have not been altered between when they were produced and when they are installed by the end user.

  • Free Software projects usually provide detached PGP signatures to accompany released software archives, so that downstream projects can verify the integrity of downloaded releases before integrating them into their own distributed downloads.

  • Free Software projects routinely rely on PGP signatures within the code itself in order to track provenance and verify integrity of code committed by project developers.

This is very similar to developer certificates/code signing mechanisms used by programmers working on proprietary platforms. In fact, the core concepts behind these two technologies are very much the same — they differ mostly in the technical aspects of the implementation and the way they delegate trust. PGP does not rely on centralized Certification Authorities, but instead lets each user assign their own trust to each certificate.

Our goal is to get your project on board using PGP for code provenance and integrity tracking, following best practices and observing basic security precautions.

Extremely Basic Overview of PGP operations

You do not need to know the exact details of how PGP works — understanding the core concepts is enough to be able to use it successfully for our purposes. PGP relies on Public Key Cryptography to convert plain text into encrypted text. This process requires two distinct keys:

  • A public key that is known to everyone

  • A private key that is only known to the owner

Encryption

For encryption, PGP uses the public key of the owner to create a message that is only decryptable using the owner’s private key:

  1. The sender generates a random encryption key (“session key”)

  2. The sender encrypts the contents using that session key (using a symmetric cipher)

  3. The sender encrypts the session key using the recipient’s public PGP key

  4. The sender sends both the encrypted contents and the encrypted session key to the recipient

To decrypt:

  1. The recipient decrypts the session key using their private PGP key

  2. The recipient uses the session key to decrypt the contents of the message
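In practice, gpg takes care of all the session-key mechanics behind a single command. A minimal sketch, using a hypothetical recipient address:

$ gpg --encrypt --recipient alice.engineer@example.com message.txt   # writes message.txt.gpg

$ gpg --decrypt message.txt.gpg   # run by the recipient, prints the plain text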

Signatures

For creating signatures, the private/public PGP keys are used the opposite way:

  1. The signer generates the checksum hash of the contents

  2. The signer uses their own private PGP key to encrypt that checksum

  3. The signer provides the encrypted checksum alongside the contents

To verify the signature:

  1. The verifier generates their own checksum hash of the contents

  2. The verifier uses the signer’s public PGP key to decrypt the provided checksum

  3. If the checksums match, the integrity of the contents is verified
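This, too, collapses into a pair of commands in everyday use. A sketch of signing a release archive with a detached, ASCII-armored signature and then verifying it (the file names are illustrative):

$ gpg --armor --detach-sign release.tar.gz   # produces release.tar.gz.asc

$ gpg --verify release.tar.gz.asc release.tar.gz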

Combined usage

Frequently, encrypted messages are also signed with the sender’s own PGP key. This should be the default whenever using encrypted messaging, as encryption without authentication is not very meaningful (unless you are a whistleblower or a secret agent and need plausible deniability).
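With gpg, signing and encrypting in one pass is just a combination of the flags shown above:

$ gpg --sign --encrypt --recipient alice.engineer@example.com message.txt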

Understanding Key Identities

Each PGP key must have one or multiple Identities associated with it. Usually, an “Identity” is the person’s full name and email address in the following format:

Alice Engineer <alice.engineer@example.com>

Sometimes it will also contain a comment in brackets, to tell the end-user more about that particular key:

Bob Designer (obsolete 1024-bit key) <bob.designer@example.com>

Since people can be associated with multiple professional and personal entities, they can have multiple identities on the same key:

Alice Engineer <alice.engineer@example.com>
Alice Engineer <aengineer@personalmail.example.org>
Alice Engineer <webmaster@girlswhocode.example.net>

When multiple identities are used, one of them is marked as the “primary identity” to make searching easier.
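A plain key listing will show each identity as a uid line, and in GnuPG 2.2 you can attach a new identity with a single command. A minimal sketch (the key is located by an existing identity; the new address is, of course, just an example):

$ gpg --list-keys alice.engineer@example.com

$ gpg --quick-add-uid alice.engineer@example.com 'Alice Engineer <aengineer@personalmail.example.org>'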

Understanding Key Validity

To be able to use someone else’s public key for encryption or verification, you need to be sure that it actually belongs to the right person (Alice) and not to an impostor (Eve). In PGP, this certainty is called “key validity:”

  • Validity: full — means we are pretty sure this key belongs to Alice

  • Validity: marginal — means we are somewhat sure this key belongs to Alice

  • Validity: unknown — means there is no assurance at all that this key belongs to Alice

Web of Trust (WOT) vs. Trust on First Use (TOFU)

PGP incorporates a trust delegation mechanism known as the “Web of Trust.” At its core, this is an attempt to replace the need for centralized Certification Authorities of the HTTPS/TLS world. Instead of various software makers dictating who should be your trusted certifying entity, PGP leaves this responsibility to each user.

Unfortunately, very few people understand how the Web of Trust works, and even fewer bother to keep it going. It remains an important aspect of the OpenPGP specification, but recent versions of GnuPG (2.2 and above) have implemented an alternative mechanism called “Trust on First Use” (TOFU).

You can think of TOFU as “the SSH-like approach to trust.” With SSH, the first time you connect to a remote system, its key fingerprint is recorded and remembered. If the key changes in the future, the SSH client will alert you and refuse to connect, forcing you to make a decision on whether you choose to trust the changed key or not.

Similarly, the first time you import someone’s PGP key, it is assumed to be trusted. If at any point in the future GnuPG comes across another key with the same identity, both the previously imported key and the new key will be marked as invalid and you will need to manually figure out which one to keep.

In this guide, we will be using the TOFU trust model.
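Note that TOFU is not GnuPG’s default, so if you want to follow along, you will need to turn it on. The tofu+pgp model (which combines TOFU with the classic Web of Trust) can be enabled with one line in your configuration:

$ echo "trust-model tofu+pgp" >> ~/.gnupg/gpg.conf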

Installing OpenPGP software

First, it is important to understand the distinction between PGP, OpenPGP, GnuPG and gpg:

  • PGP (“Pretty Good Privacy”) is the name of the original commercial software

  • OpenPGP is the IETF standard compatible with the original PGP tool

  • GnuPG (“Gnu Privacy Guard”) is free software that implements the OpenPGP standard

  • The command-line tool for GnuPG is called “gpg”

Today, the term “PGP” is almost universally used to mean “the OpenPGP standard,” not the original commercial software, and therefore “PGP” and “OpenPGP” are interchangeable. The terms “GnuPG” and “gpg” should only be used when referring to the tools, not to the output they produce or OpenPGP features they implement. For example:

  • PGP (not GnuPG or GPG) key

  • PGP (not GnuPG or GPG) signature

  • PGP (not GnuPG or GPG) keyserver

Understanding this should protect you from an inevitable pedantic “actually” from other PGP users you come across.

Installing GnuPG

If you are using Linux, you should already have GnuPG installed. On a Mac, you should install GPG-Suite or you can use brew install gnupg2. On a Windows PC, you should install GPG4Win, and you will probably need to adjust some of the commands in the guide to work for you, unless you have a Unix-like environment set up. For all other platforms, you’ll need to do your own research to find the correct places to download and install GnuPG.

GnuPG 1 vs. 2

Both GnuPG v.1 and GnuPG v.2 implement the same standard, but they provide incompatible libraries and command-line tools, so many distributions ship both the legacy version 1 and the latest version 2. You need to make sure you are always using GnuPG v.2.

First, run:

$ gpg --version | head -n1

If you see gpg (GnuPG) 1.4.x, then you are using GnuPG v.1. Try the gpg2 command:

$ gpg2 --version | head -n1

If you see gpg (GnuPG) 2.x.x, then you are good to go. This guide will assume you have version 2.2 of GnuPG (or later). If you are using version 2.0 of GnuPG, some of the commands in this guide will not work, and you should consider installing the latest 2.2 version of GnuPG.

Making sure you always use GnuPG v.2

If you have both gpg and gpg2 commands, you should make sure you are always using GnuPG v2, not the legacy version. You can make sure of this by setting the alias:

$ alias gpg=gpg2

You can put that in your .bashrc to make sure it’s always loaded whenever you use the gpg commands. 

In part 2 of this series, we will explain the basic steps for generating and protecting your master PGP key. 

Read more:

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Part 7: Protecting Online Accounts

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.