
Protecting Code Integrity with PGP — Part 1: Basic Concepts and Tools

Learn PGP basics and best practices in this series of tutorials from our archives. 

In this article series, we take an in-depth look at using PGP to ensure the integrity of software. These articles will provide practical guidelines aimed at developers working on free software projects and will cover the following topics:

  1. PGP basics and best practices

  2. How to use PGP with Git

  3. How to protect your developer accounts

We use the term “Free” as in “Freedom,” but the guidelines set out in this series can also be used for any other kind of software that relies on contributions from a distributed team of developers. If you write code that goes into public source repositories, you can benefit from getting acquainted with and following this guide.

Structure

Each section is split into two areas:

  • The checklist that can be adapted to your project’s needs

  • Free-form list of considerations that explain what dictated these decisions, together with configuration instructions

Checklist priority levels

The items in each checklist include the priority level, which we hope will help guide your decision:

  • (ESSENTIAL) items should definitely be high on the consideration list. If not implemented, they will introduce high risks to the code that gets committed to the open-source project.

  • (NICE) items are nice to have; they will improve the overall security, but will affect how you interact with your work environment and will probably require learning new habits or unlearning old ones.

Remember, these are only guidelines. If you feel these priority levels do not reflect your project’s commitment to security, you should adjust them as you see fit.

Basic PGP concepts and tools

Checklist

  1. Understand the role of PGP in Free Software Development (ESSENTIAL)

  2. Understand the basics of Public Key Cryptography (ESSENTIAL)

  3. Understand PGP Encryption vs. Signatures (ESSENTIAL)

  4. Understand PGP key identities (ESSENTIAL)

  5. Understand PGP key validity (ESSENTIAL)

  6. Install GnuPG utilities (version 2.x) (ESSENTIAL)

Considerations

The Free Software community has long relied on PGP for ensuring the authenticity and integrity of the software products it produces. You may not be aware of it, but whether you are a Linux, Mac or Windows user, you have previously relied on PGP to ensure the integrity of your computing environment:

  • Linux distributions rely on PGP to ensure that binary or source packages have not been altered between the time they were produced and the time they are installed by the end user.

  • Free Software projects usually provide detached PGP signatures to accompany released software archives, so that downstream projects can verify the integrity of downloaded releases before integrating them into their own distributed downloads.

  • Free Software projects routinely rely on PGP signatures within the code itself in order to track provenance and verify integrity of code committed by project developers.

This is very similar to developer certificates/code signing mechanisms used by programmers working on proprietary platforms. In fact, the core concepts behind these two technologies are very much the same — they differ mostly in the technical aspects of the implementation and the way they delegate trust. PGP does not rely on centralized Certification Authorities, but instead lets each user assign their own trust to each certificate.

Our goal is to get your project on board using PGP for code provenance and integrity tracking, following best practices and observing basic security precautions.

Extremely Basic Overview of PGP operations

You do not need to know the exact details of how PGP works — understanding the core concepts is enough to be able to use it successfully for our purposes. PGP relies on Public Key Cryptography to convert plain text into encrypted text. This process requires two distinct keys:

  • A public key that is known to everyone

  • A private key that is only known to the owner

Encryption

For encryption, PGP uses the recipient’s public key to create a message that can only be decrypted using the recipient’s private key:

  1. The sender generates a random encryption key (“session key”)

  2. The sender encrypts the contents using that session key (using a symmetric cipher)

  3. The sender encrypts the session key using the recipient’s public PGP key

  4. The sender sends both the encrypted contents and the encrypted session key to the recipient

To decrypt:

  1. The recipient decrypts the session key using their private PGP key

  2. The recipient uses the session key to decrypt the contents of the message
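
As a concrete illustration, here is roughly how the above looks with the gpg command-line tool. This is only a minimal sketch; the file name and recipient address are hypothetical, and the recipient’s public key is assumed to already be in your keyring:

$ gpg --encrypt --recipient alice.engineer@example.com report.txt    # sender: writes report.txt.gpg
$ gpg --decrypt report.txt.gpg > report.txt                          # recipient: recovers the plain text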

Signatures

For creating signatures, the private/public PGP keys are used the opposite way:

  1. The signer generates the checksum hash of the contents

  2. The signer uses their own private PGP key to encrypt that checksum

  3. The signer provides the encrypted checksum alongside the contents

To verify the signature:

  1. The verifier generates their own checksum hash of the contents

  2. The verifier uses the signer’s public PGP key to decrypt the provided checksum

  3. If the checksums match, the integrity of the contents is verified
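
In practice, a detached PGP signature is typically created and checked with gpg roughly like this (a minimal sketch; the archive name is hypothetical):

$ gpg --armor --detach-sign release.tar.gz        # signer: writes release.tar.gz.asc
$ gpg --verify release.tar.gz.asc release.tar.gz  # verifier: checks the signature against the contents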

Combined usage

Frequently, encrypted messages are also signed with the sender’s own PGP key. This should be the default whenever using encrypted messaging, as encryption without authentication is not very meaningful (unless you are a whistleblower or a secret agent and need plausible deniability).
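
With gpg, signing and encrypting can be done in a single pass. Again, this is only a sketch with a hypothetical recipient and file name:

$ gpg --sign --encrypt --recipient alice.engineer@example.com message.txt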

Understanding Key Identities

Each PGP key must have one or multiple Identities associated with it. Usually, an “Identity” is the person’s full name and email address in the following format:

Alice Engineer <alice.engineer@example.com>

Sometimes it will also contain a comment in brackets, to tell the end-user more about that particular key:

Bob Designer (obsolete 1024-bit key) <bob.designer@example.com>

Since people can be associated with multiple professional and personal entities, they can have multiple identities on the same key:

Alice Engineer <alice.engineer@example.com>
Alice Engineer <aengineer@personalmail.example.org>
Alice Engineer <webmaster@girlswhocode.example.net>

When multiple identities are used, one of them would be marked as the “primary identity” to make searching easier.
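
Once such a key is in your keyring, you can look it up by any of its identities and gpg will list all the identities attached to it. A hedged example using the sample address above:

$ gpg --list-keys alice.engineer@example.com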

Understanding Key Validity

To be able to use someone else’s public key for encryption or verification, you need to be sure that it actually belongs to the right person (Alice) and not to an impostor (Eve). In PGP, this certainty is called “key validity:”

  • Validity: full — means we are pretty sure this key belongs to Alice

  • Validity: marginal — means we are somewhat sure this key belongs to Alice

  • Validity: unknown — means there is no assurance at all that this key belongs to Alice

Web of Trust (WOT) vs. Trust on First Use (TOFU)

PGP incorporates a trust delegation mechanism known as the “Web of Trust.” At its core, this is an attempt to replace the need for centralized Certification Authorities of the HTTPS/TLS world. Instead of various software makers dictating who should be your trusted certifying entity, PGP leaves this responsibility to each user.

Unfortunately, very few people understand how the Web of Trust works, and even fewer bother to keep it going. It remains an important aspect of the OpenPGP specification, but recent versions of GnuPG (2.2 and above) have implemented an alternative mechanism called “Trust on First Use” (TOFU).

You can think of TOFU as “the SSH-like approach to trust.” With SSH, the first time you connect to a remote system, its key fingerprint is recorded and remembered. If the key changes in the future, the SSH client will alert you and refuse to connect, forcing you to make a decision on whether you choose to trust the changed key or not.

Similarly, the first time you import someone’s PGP key, it is assumed to be trusted. If at any point in the future GnuPG comes across another key with the same identity, both the previously imported key and the new key will be marked as invalid and you will need to manually figure out which one to keep.

In this guide, we will be using the TOFU trust model.
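
If you are running GnuPG 2.2 or later, you can make TOFU (combined with the classic Web of Trust) your default trust model by adding one line to your configuration file. This is a sketch that assumes the default ~/.gnupg location:

$ echo "trust-model tofu+pgp" >> ~/.gnupg/gpg.conf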

Installing OpenPGP software

First, it is important to understand the distinction between PGP, OpenPGP, GnuPG and gpg:

  • PGP (“Pretty Good Privacy”) is the name of the original commercial software

  • OpenPGP is the IETF standard compatible with the original PGP tool

  • GnuPG (“Gnu Privacy Guard”) is free software that implements the OpenPGP standard

  • The command-line tool for GnuPG is called “gpg”

Today, the term “PGP” is almost universally used to mean “the OpenPGP standard,” not the original commercial software, and therefore “PGP” and “OpenPGP” are interchangeable. The terms “GnuPG” and “gpg” should only be used when referring to the tools, not to the output they produce or OpenPGP features they implement. For example:

  • PGP (not GnuPG or GPG) key

  • PGP (not GnuPG or GPG) signature

  • PGP (not GnuPG or GPG) keyserver

Understanding this should protect you from an inevitable pedantic “actually” from other PGP users you come across.

Installing GnuPG

If you are using Linux, you should already have GnuPG installed. On a Mac, you should install GPG-Suite or you can use brew install gnupg2. On a Windows PC, you should install GPG4Win, and you will probably need to adjust some of the commands in the guide to work for you, unless you have a unix-like environment set up. For all other platforms, you’ll need to do your own research to find the correct places to download and install GnuPG.
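
For reference, typical installation commands look like the following (hedged examples; exact package names may vary by distribution and version):

$ sudo apt install gnupg2     # Debian, Ubuntu, and derivatives
$ sudo dnf install gnupg2     # Fedora
$ brew install gnupg2         # macOS with Homebrew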

GnuPG 1 vs. 2

Both GnuPG v.1 and GnuPG v.2 implement the same standard, but they provide incompatible libraries and command-line tools, so many distributions ship both the legacy version 1 and the latest version 2. You need to make sure you are always using GnuPG v.2.

First, run:

$ gpg --version | head -n1

If you see gpg (GnuPG) 1.4.x, then you are using GnuPG v.1. Try the gpg2 command:

$ gpg2 --version | head -n1

If you see gpg (GnuPG) 2.x.x, then you are good to go. This guide assumes you have GnuPG version 2.2 or later. If you are using version 2.0 of GnuPG, some of the commands in this guide will not work, and you should consider installing the latest 2.2 version of GnuPG.

Making sure you always use GnuPG v.2

If you have both gpg and gpg2 commands, you should make sure you are always using GnuPG v2, not the legacy version. You can make sure of this by setting the alias:

$ alias gpg=gpg2

You can put that alias in your .bashrc so it is loaded in every new shell session and the gpg command always invokes GnuPG v.2.
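
For example, assuming bash is your shell:

$ echo "alias gpg=gpg2" >> ~/.bashrc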

In part 2 of this series, we will explain the basic steps for generating and protecting your master PGP key. 

Read more:

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Part 5: Moving Subkeys to a Hardware Device

Part 6: Using PGP with Git

Part 7: Protecting Online Accounts

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Linux 5.0-rc8

This may be totally unnecessary, but we actually had more patches come in this last week than we had for rc7, which just didn’t make me feel the warm and fuzzies. And while none of the patches looked all that scary, some of them were to pretty core files, so it wasn’t all just random rare drivers (although those kinds also existed).

So I agonized about it a bit, and then decided to just say “no hurry” and make an rc8. And after I had tagged the rc, I noticed a patch in my inbox that I had missed that was a regression from one of the very patches this last week, so that made me feel like rc8 was the right decision.

Read more at the Linux Kernel Mailing List

Core Technologies and Tools for AI, Big Data, and Cloud Computing

In this post, I’ll describe some of the core technologies and tools companies are beginning to evaluate and build. Many companies are just beginning to address the interplay between their suite of AI, big data, and cloud technologies. I’ll also highlight some interesting use cases and applications of data, analytics, and machine learning. The resource examples I’ll cite will be drawn from the upcoming Strata Data conference in San Francisco, where leading companies and speakers will share their learnings on the topics covered in this post.

AI and machine learning in the enterprise

When asked what holds back the adoption of machine learning and AI, survey respondents for our upcoming report, “Evolving Data Infrastructure,” cited “company culture” and “difficulties in identifying appropriate business use cases” among the leading reasons. Attendees of the Strata Business Summit will have the opportunity to explore these issues through training sessions, tutorials, briefings, and real-world case studies from practitioners and companies. Recent improvements in tools and technologies have meant that techniques like deep learning are now being used to solve common problems, including forecasting, text mining and language understanding, and personalization. We’ve assembled sessions from leading companies, many of which will share case studies of applications of machine learning methods, including multiple presentations involving deep learning:

Read more at O’Reilly

Happy Little Accidents – Debugging JavaScript

Last year I gave a talk in HelsinkiJS and Turku ❤️ Frontend meetups titled Happy Little Accidents – The Art of Debugging (slides).

This week I was spending a lot of time debugging weird timezone issues and the talk popped back into my memory. So I wanted to write a more detailed, JavaScript-focused post about the different options.

Print to console

All of the examples below are ones you can copy-paste into your developer console and start playing around with.

console.log

One of the most underrated yet definitely powerful tools is console.log and its friends. It’s also usually the first and easiest step in inspecting what might be the issue. …

Debugger

JavaScript’s debugger keyword is a magical creature. It pauses execution right at that spot and gives you full access to the local and global scope. Let’s take a look at a hypothetical example with a React Component that gets some props passed down to it.

Read more at Dev.to

Job Hunt Etiquette: New Best Practices for 2019

Although a lot has changed about the job interview process over the years, basic interview etiquette rules still apply. Be polite. Don’t lie about your experience. Send a thank you note. Follow up with hiring managers to stay top of mind. Avoid wearing a Darth Vader costume to your interview. (The last one should go without saying, but based on CareerBuilder’s annual survey on interview mistakes, at least one person could have used this tip before their interview.)

Classic advice like this holds true today, but in the digital era, there are nuances that job seekers should keep in mind. For instance, candidates no longer have to snail mail their thank you letters – email is instantaneous. But how soon is too soon – the next day? When they are leaving the building? Is texting OK?

We tapped experts to answer these and other pressing questions about job hunting rules and follow-up etiquette for 2019. Here’s their advice and updated best practices for job seekers.

Read more at Enterprisers Project

5 Linux GUI Cloud Backup Tools

We have reached a point in time where almost every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users is when it’s employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools.

Let’s take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry than many of the CLI tools. I’ll also be focusing on various consumer-grade cloud services (e.g., Google Drive, Dropbox, Wasabi, and pCloud). And I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions.

Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let’s see what’s available.

Insync

I must confess, Insync has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool.

The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which also allows you to easily add folders outside of the configured Drive sync destination).

Figure 1: The Insync app window on Elementary OS.

You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the Insync download page. Once you’ve installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS):

  1. Open a terminal window and issue the command sudo nano /etc/apt/sources.list.d/insync.list.

  2. Paste the following into the new file: deb http://apt.insynchq.com/ubuntu precise non-free contrib.

  3. Save and close the file.

  4. Update apt with the command sudo apt-get update.

  5. Install the necessary package with the command sudo apt-get install insync-nautilus.

Allow the installation to complete. Once finished, restart Nautilus with the command nautilus -q (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2).

Figure 2: Insync/Nautilus integration in action.
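
For convenience, the same steps can be condensed into a few non-interactive commands (a sketch that writes the repository line shown above with tee instead of opening an editor):

$ echo "deb http://apt.insynchq.com/ubuntu precise non-free contrib" | sudo tee /etc/apt/sources.list.d/insync.list
$ sudo apt-get update
$ sudo apt-get install insync-nautilus
$ nautilus -q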

Dropbox

Although Dropbox drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great deal of Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you’re good to go.

The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found here).

The Linux Dropbox desktop sync tool works exactly as you’d expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quit Dropbox.

Figure 3: The Dropbox system tray drop-down on Elementary OS.

The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to be added, and select Dropbox > Move to Dropbox (Figure 4).

Figure 4: Dropbox/Nautilus integration.

The only caveat to the Dropbox/Nautilus integration is that the only action offered is to move a folder to Dropbox. For some users, moving the folder might not be acceptable. The developers of this package would be wise to instead have the action create a link (instead of actually moving the folder).

Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go.

pCloud

pCloud might well be one of the finest cloud backup solutions you’ve never heard of. This take on cloud storage/backup includes features like:

  • Encryption (subscription service required for this feature);

  • Mobile apps for Android and iOS;

  • Linux, Mac, and Windows desktop clients;

  • Easy file/folder sharing;

  • Built-in audio/video players;

  • No file size limitation;

  • Sync any folder from the desktop;

  • Panel integration for most desktops; and

  • Automatic file manager integration.

pCloud offers both Linux desktop and CLI tools that function quite well. pCloud offers a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee).

The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and general settings.

Figure 5: The pCloud desktop client is incredibly easy to use.

The one caveat to pCloud is there’s no file manager integration for Linux. That’s overcome by the Sync folder in the pCloud client.

CloudBerry

The primary focus for CloudBerry is Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range for the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free.

What makes CloudBerry different from the other tools is that it’s not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of:

  • AWS

  • Microsoft Azure

  • Google Cloud

  • BackBlaze

  • OpenStack

  • Wasabi

  • Local storage

  • External drives

  • Network Attached Storage

  • Network Shares

  • And more

In other words, you use CloudBerry as the interface between the files/folders you want to share and the destination to which you want to send them. This also means you must have an account with one of the many supported solutions.

Once you’ve installed CloudBerry, you create a new Backup plan for the target storage solution. For that configuration, you’ll need such information as:

  • Access Key

  • Secret Key

  • Bucket

What you’ll need for the configuration will depend on the account you’re connecting to (Figure 6).

Figure 6: Setting up a CloudBerry backup for Wasabi.

The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service.

Duplicati

Duplicati is another option that allows you to sync your local directories with either locally attached drives, network attached storage, or a number of cloud services. The options supported include:

  • Local folders

  • Attached drives

  • FTP/SFTP

  • OpenStack

  • WebDAV

  • Amazon Cloud Drive

  • Amazon S3

  • Azure Blob

  • Box.com

  • Dropbox

  • Google Cloud Storage

  • Google Drive

  • Microsoft OneDrive

  • And many more

Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or RedHat from the Duplicati downloads page), click on the entry in your desktop menu, which will open a web page to the tool (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more.

Figure 7: Duplicati web page.

To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration.

Figure 8: Creating a new Duplicati backup for Google Drive.

For example, in order to create a backup to Google Drive, you’ll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you’ll be directed to select the Google Account to associate with the backup. Once you’ve allowed Duplicati access to the account, the AuthID will fill in and you’re ready to continue. Click Test connection and you’ll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup.

More Where That Came From

These five cloud backup tools aren’t the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs.

7 Key Considerations for Kubernetes in Production

In this post, we share seven fundamental capabilities large enterprises need to instrument around their Kubernetes investments in order to be able to effectively implement it and utilize it to drive their business.

Typically, when developers begin to experiment with Kubernetes, they end up deploying Kubernetes on a set of servers. This is only a proof of concept (POC) deployment, and what we see is that this basic deployment is not something you can take into production for long-standing applications, since it is missing critical components to ensure smooth operations of mission-critical Kubernetes-based apps. While deploying a local Kubernetes environment can be a simple procedure that’s completed within days, an enterprise-grade deployment is quite another challenge.

A complete Kubernetes infrastructure needs proper DNS, load balancing, Ingress, and Kubernetes role-based access control (RBAC), alongside a slew of additional components that make the deployment process quite daunting for IT. Once Kubernetes is deployed comes the addition of monitoring and all the associated operations playbooks to fix problems as they occur — such as when running out of capacity, ensuring HA, backups, and more. Finally, the cycle repeats again whenever there’s a new version of Kubernetes released by the community, and your production clusters need to be upgraded without risking any application downtime.

Read more at The New Stack

Basics of Object-Oriented Programming

In programming, an object is simply a ‘thing’. I know, I know…how can you define something as a ‘thing’? Well, let’s think about it – What do ‘things’ have? Attributes, right? – A dog has four legs, a color, a name, an owner, and a breed. Though there are millions of dogs with countless names, owners, etc., the one thing that ties them all together is the very fact that every single one can be described as a Dog.

Although this may seem like a not-very-informative explanation, these types of examples are what ultimately made me understand object-oriented programming. The set of activities that an object can perform is an object’s behavior.

Let’s look at a common element in programming: a simple string. After the string is defined, I’m able to call different ‘methods’ or functions on the string I created. Ruby has several built-in methods on common objects (i.e., strings, integers, arrays, and hashes).

Read more at Dev.to

The URLephant in the Room

Check out this presentation by Emily Stark from the Usenix Enigma 2019 conference.

In a security professional’s ideal world, every web user would carefully inspect their browser’s URL bar on every page they visit, verifying that they are accessing the site they intend to be accessing. In reality, many users rarely notice the URL bar and don’t know how to interpret the URL to verify a website’s identity. An evil URL may even be carefully designed to be indistinguishable from a legitimate one, such that even an expert couldn’t tell the difference!

In this talk, I’ll discuss the URLephant in the room: the fact that the web security model rests on users noticing and understanding URLs as indicators of website identities, but they don’t actually work very well for that purpose. I’ll discuss how the Chrome usable security team measures whether an indicator of website identity is working, and when the security community should consider breaking some rules of usable security in search of better solutions. 

Watch the presentation at Usenix.

ST Spins Up Linux-Powered Cortex-A SoC

STMicroelectronics has announced a new Linux- and Android-driven Cortex-A SoC. The STM32MP1 SoC intends to ease the transition for developers moving from its STM32 microcontroller unit (MCU) family to more complex embedded systems. Development boards based on the SoC will be available in April.

Aimed at industrial, consumer, smart home, health, and wellness applications, the STM32MP1 features dual 650MHz Cortex-A7 cores running a new “mainlined, open-sourced” OpenSTLinux distro with Yocto Project and OpenEmbedded underpinnings. There’s also a 209MHz Cortex-M4 core with an FPU, MPU, and DSP instructions. The Cortex-M4 is supported by an enhanced version of ST’s STM32Cube development tools that support the Cortex-A7 cores in addition to the M4 (see below).

Like most of NXP’s recent embedded SoCs, including the single- or dual-core Cortex-A7 i.MX7 and its newer Cortex-A53 i.MX8M and i.MX8M Mini, the STM32MP1 is a hybrid Cortex-A/M design intended in ST’s words to “perform fast processing and real-time tasks on a single chip.” Hybrid Cortex-A7/M4 SoCs are also available from Renesas, Marvell, and MediaTek, which has developed a custom-built MT3620 SoC as the hardware foundation for Microsoft’s Azure Sphere IoT framework.

As the market leader in Cortex-M MCUs, ST has made a larger leap from its comfort zone than these other semiconductor vendors. NXP is also a leading MCU vendor, but it’s been crafting Linux-powered Cortex-A SoCs since long before it changed its name from Freescale. The STM32MP1 launch continues a trend of MCU technology companies reaching out to the Linux community, such as Arm’s new Mbed Linux distro and Pelion IoT Platform, which orchestrates Cortex-M and Cortex-A devices under a single IoT platform.

Inside the STM32MP1

The STM32MP1 is equipped with 32KB instruction and data caches, as well as a 256KB L2 cache. ST also supplies an optional Vivante 3D GPU with support for OpenGL ES 2.0 and 24-bit parallel RGB displays at up to WXGA (1280×800) at 60fps.

The SoC supports a 2-lane MIPI-DSI interface running at 1Gbps and offers native support for Linux and application frameworks such as Android, Qt, and Crank Software’s Storyboard GUI. While the GPU is pretty run-of-the-mill for Cortex-A7 SoCs, it’s a giant leap from the perspective of MCU developers trying to bring up modern HMI displays.

Three SoC models are available: one with 3D GPU, MIPI-DSI, and 2x CAN FD interfaces, as well as one with 2x CAN FD only and one without the GPU and CAN I/O.

The STM32MP1 is touted for its rolling 10-year longevity support and heterogeneous architecture, which lets developers halt the Cortex-A7 and run only on the Cortex-M4 to reduce power consumption by 25 percent. From this mode, “going to Standby further cuts power by 2.5k times — while still supporting the resumption of Linux execution in 1 to 3 seconds, depending on the application,” says ST. The SoC includes a PMIC and other power circuitry such as buck and boost converters.

For security, the SoC provides Arm TrustZone, cryptography, hash, secure boot, anti-tamper pins, and a real-time clock. RAM support includes 32/16-bit, 533MHz DDR3, DDR3L, LPDDR2, LPDDR3. Flash support includes SD, eMMC, NAND, and NOR.

Peripherals include Cortex-A7 linked GbE, 3x USB 2.0, I2C, and multiple UART and SPI links. Analog I/O connected to the Cortex-M4 include 2x 16-bit ADCs, 2x 12-bit DACs, 29x timers, 3x watchdogs, LDOs, and up to 176 GPIOs.

OpenSTLinux, STM32Cube, and starter kits

The new OpenSTLinux distribution “has already been reviewed and accepted by the Linux community: Linux Foundation, Yocto project, and Linaro,” says ST. The Linux BSP includes mainline kernel, drivers, boot chain, and Linaro’s OP-TEE (Trusted Execution Environment) security stack. It also includes Wayland/Weston, Gstreamer, and ALSA libraries.

Three Linux software development packages are available: a quick Starter package with STM32CubeMP1 samples; a Dev package with a Yocto Project SDK that lets you add your own Linux code; and an OpenEmbedded based Distrib package that also lets you create your own OpenSTLinux-based Linux distro. ST has collaborated with Timesys on the Linux BSPs and with Witekio to port Android to STM32MP1. 

STM32 developers can “easily find their marks” by using the familiar STM32Cube toolset to control both the Cortex-M4 and Cortex-A7 chips. The toolset includes GCC-based STM32CubeProgrammer and STM32CubeMX IDEs, which “include the DRAM interface tuning tool for easy configuration of the DRAM sub-system,” says ST.

Finally, ST is supporting its chip with four development boards: the entry-level STM32MP157A-DK1 and STM32MP157C-DK2 and the higher-end STM32MP157A-EV1 and STM32MP157C-EV1. All the boards offer GPIO connectors for the Raspberry Pi and Arduino Uno V3.

The DK1/DK2 boards are equipped with 4GB DDR3L, as well as USB Type-C, USB Type-A OTG, HDMI, and MIPI-DSI. You also get GbE and WiFi/Bluetooth, and a 4-inch, VGA capacitive touch panel, among other features.

The more advanced A-EV1 and C-EV1 boards support up to 8GB DDR3L, 32GB eMMC v5.0, a microSD slot, and SPI and NAND flash. They provide most of the features of the DK boards, as well as CAN, camera support, SAI, SPDIF, digital mics, analog audio, and much more. They also supply 4x USB host ports and a micro-USB port. A 5.5-inch 720×1280 touchscreen is available.