This may be totally unnecessary, but we actually had more patches come in this last week than we had for rc7, which just didn’t make me feel the warm and fuzzies. And while none of the patches looked all that scary, some of them were to pretty core files, so it wasn’t all just random rare drivers (although those kinds also existed).
So I agonized about it a bit, and then decided to just say “no hurry” and make an rc8. And after I had tagged the rc, I noticed a patch in my inbox that I had missed that fixed a regression from one of the very patches this last week, so that made me feel like rc8 was the right decision.
In this post, I’ll describe some of the core technologies and tools companies are beginning to evaluate and build. Many companies are just beginning to address the interplay between their suite of AI, big data, and cloud technologies. I’ll also highlight some interesting use cases and applications of data, analytics, and machine learning. The resource examples I’ll cite will be drawn from the upcoming Strata Data conference in San Francisco, where leading companies and speakers will share their learnings on the topics covered in this post.
AI and machine learning in the enterprise
When asked what holds back the adoption of machine learning and AI, survey respondents for our upcoming report, “Evolving Data Infrastructure,” cited “company culture” and “difficulties in identifying appropriate business use cases” among the leading reasons. Attendees of the Strata Business Summit will have the opportunity to explore these issues through training sessions, tutorials, briefings, and real-world case studies from practitioners and companies. Recent improvements in tools and technologies have meant that techniques like deep learning are now being used to solve common problems, including forecasting, text mining and language understanding, and personalization. We’ve assembled sessions from leading companies, many of which will share case studies of applications of machine learning methods, including multiple presentations involving deep learning:
Last year I gave a talk at the HelsinkiJS and Turku ❤️ Frontend meetups titled Happy Little Accidents – The Art of Debugging (slides).
This week I spent a lot of time debugging weird timezone issues, and the talk popped back into my mind. So I wanted to write a more detailed, JavaScript-focused post about the different options.
Print to console
All of the examples below are ones you can copy-paste into your developer console and start playing around with.
console.log
One of the most underrated, but definitely powerful, tools is console.log and its friends. It’s also usually the first and easiest step in inspecting what might be the issue. …
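For instance, here are a few of console.log’s friends in action, ready to paste into the console (the object and values are my own illustrations, not ones from the original post):

```javascript
const user = { name: "Ada", role: "admin", lastLogin: "2019-02-20" };

// Plain logging; labeling the value keeps console output readable
console.log("user:", user);

// console.table renders an array of objects as a readable table
console.table([user, { name: "Grace", role: "editor", lastLogin: "2019-02-19" }]);

// Severity variants that stand out visually in the console
console.warn("session about to expire");
console.error("failed to refresh token");
```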
Debugger
JavaScript’s debugger keyword is a magical creature. It pauses execution right at the spot where it’s written, with full access to the local and global scope. Let’s take a look at a hypothetical example with a React component that gets some props passed down to it.
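The post’s original snippet isn’t reproduced here; as a minimal sketch of the idea (the component and prop names are made up, and it’s stripped down to a plain function so it can be pasted straight into the console):

```javascript
function Greeting(props) {
  // With dev tools open, execution pauses on the next line,
  // and `props` plus everything else in scope can be inspected.
  debugger;
  return `Hello, ${props.name}!`;
}

Greeting({ name: "world" });
```

While paused, you can evaluate props.name in the console, step forward line by line, or resume execution.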
Although a lot has changed about the job interview process over the years, basic interview etiquette rules still apply. Be polite. Don’t lie about your experience. Send a thank you note. Follow up with hiring managers to stay top of mind. Avoid wearing a Darth Vader costume to your interview. (The last one should go without saying, but based on CareerBuilder’s annual survey on interview mistakes, at least one person could have used this tip before their interview.)
Classic advice like this holds true today, but in the digital era, there are nuances that job seekers should keep in mind. For instance, candidates no longer have to snail mail their thank you letters – email is instantaneous. But how soon is too soon – the next day? When they are leaving the building? Is texting OK?
We tapped experts to answer these and other pressing questions about job hunting rules and follow-up etiquette for 2019. Here’s their advice and updated best practices for job seekers.
We have reached a point in time where almost every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users is when it’s employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools.
Let’s take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry than many of the CLI tools. I’ll also be focusing on various consumer-grade cloud services (e.g., Google Drive, Dropbox, Wasabi, and pCloud). And I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions.
Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let’s see what’s available.
Insync
I must confess, Insync has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool.
The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which allows you to easily add folders outside of the configured Drive sync destination).
You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the Insync download page. Once you’ve installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS):
Open a terminal window and issue the command sudo nano /etc/apt/sources.list.d/insync.list.
Paste the following into the new file: deb http://apt.insynchq.com/ubuntu precise non-free contrib.
Save and close the file.
Update apt with the command sudo apt-get update.
Install the necessary package with the command sudo apt-get install insync-nautilus.
Allow the installation to complete. Once finished, restart Nautilus with the command nautilus -q (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2).
Dropbox
Although Dropbox drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great many Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you’re good to go.
The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found here).
The Linux Dropbox desktop sync tool works exactly as you’d expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quit Dropbox.
The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to be added, and select Dropbox > Move to Dropbox (Figure 4).
The only caveat to the Dropbox/Nautilus integration is that its sole action is to move a folder to Dropbox. For some users, moving (rather than copying or linking) won’t be acceptable. The developers of this package would be wise to instead have the action create a link (instead of actually moving the folder).
Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go.
pCloud
pCloud might well be one of the finest cloud backup solutions you’ve never heard of. This take on cloud storage/backup includes features like:
Encryption (subscription service required for this feature);
Mobile apps for Android and iOS;
Linux, Mac, and Windows desktop clients;
Easy file/folder sharing;
Built-in audio/video players;
No file size limitation;
Sync any folder from the desktop;
Panel integration for most desktops; and
Automatic file manager integration.
pCloud offers both Linux desktop and CLI tools that function quite well. There’s a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee).
The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), as well as the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and adjust general settings.
The one caveat to pCloud is there’s no file manager integration for Linux. That’s overcome by the Sync folder in the pCloud client.
CloudBerry
The primary focus for CloudBerry is Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range for the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free.
What makes CloudBerry different from the other tools is that it’s not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of:
AWS
Microsoft Azure
Google Cloud
BackBlaze
OpenStack
Wasabi
Local storage
External drives
Network Attached Storage
Network Shares
And more
In other words, you use CloudBerry as the interface between the files/folders you want to back up and the destination to which you want to send them. This also means you must have an account with one of the many supported solutions. Once you’ve installed CloudBerry, you create a new backup plan for the target storage solution. For that configuration, you’ll need such information as:
Access Key
Secret Key
Bucket
What you’ll need for the configuration will depend on the account you’re connecting to (Figure 6).
The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service.
Duplicati
Duplicati is another option that allows you to sync your local directories with locally attached drives, network-attached storage, or a number of cloud services. The options supported include:
Local folders
Attached drives
FTP/SFTP
OpenStack
WebDAV
Amazon Cloud Drive
Amazon S3
Azure Blob
Box.com
Dropbox
Google Cloud Storage
Google Drive
Microsoft OneDrive
And many more
Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or RedHat from the Duplicati downloads page), click on the entry in your desktop menu, which will open the tool’s web-based interface (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more.
To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration.
For example, in order to create a backup to Google Drive, you’ll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you’ll be directed to select the Google Account to associate with the backup. Once you’ve allowed Duplicati access to the account, the AuthID will fill in and you’re ready to continue. Click Test connection and you’ll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup.
More Where That Came From
These five cloud backup tools aren’t the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs.
In this post, we share seven fundamental capabilities large enterprises need to build around their Kubernetes investments in order to implement Kubernetes effectively and use it to drive their business.
Typically, when developers begin to experiment with Kubernetes, they end up deploying it on a set of servers. This is only a proof-of-concept (POC) deployment, and what we see is that such a basic deployment is not something you can take into production for long-running applications, since it is missing the critical components that ensure smooth operation of mission-critical Kubernetes-based apps. While deploying a local Kubernetes environment can be a simple procedure that’s completed within days, an enterprise-grade deployment is quite another challenge.
A complete Kubernetes infrastructure needs proper DNS, load balancing, Ingress, and Kubernetes role-based access control (RBAC), alongside a slew of additional components that make the deployment process quite daunting for IT. Once Kubernetes is deployed comes the addition of monitoring and all the associated operations playbooks to fix problems as they occur, such as running out of capacity, ensuring HA, and handling backups. Finally, the cycle repeats whenever the community releases a new version of Kubernetes, and your production clusters need to be upgraded without risking any application downtime.
In programming, an object is simply a ‘thing’. I know, I know… how can you define something as a ‘thing’? Well, let’s think about it: what do ‘things’ have? Attributes, right? A dog has four legs, a color, a name, an owner, and a breed. Though there are millions of dogs with countless names, owners, etc., the one thing that ties them all together is the very fact that every single one can be described as a Dog.
Although this may seem like a not-very-informative explanation, these types of examples are what ultimately made me understand object-oriented programming. The set of activities that an object can perform is an object’s behavior.
Let’s look at a common element in programming, a simple string. As the sketch below shows, once a string is defined, I’m able to call different ‘methods’, or functions, on the string I created. Ruby has several built-in methods on common objects (i.e., strings, integers, arrays, and hashes).
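The post’s original snippet isn’t reproduced here; a minimal Ruby sketch of the idea (the variable name and values are made up) could be:

```ruby
dog_name = "bruno"

# The string object responds to a number of built-in methods:
puts dog_name.upcase   # => "BRUNO"
puts dog_name.reverse  # => "onurb"
puts dog_name.length   # => 5
```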
Check out this presentation by Emily Stark from the USENIX Enigma 2019 conference.
In a security professional’s ideal world, every web user would carefully inspect their browser’s URL bar on every page they visit, verifying that they are accessing the site they intend to be accessing. In reality, many users rarely notice the URL bar and don’t know how to interpret the URL to verify a website’s identity. An evil URL may even be carefully designed to be indistinguishable from a legitimate one, such that even an expert couldn’t tell the difference!
In this talk, I’ll discuss the URLephant in the room: the fact that the web security model rests on users noticing and understanding URLs as indicators of website identities, but they don’t actually work very well for that purpose. I’ll discuss how the Chrome usable security team measures whether an indicator of website identity is working, and when the security community should consider breaking some rules of usable security in search of better solutions.
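As a concrete illustration of how convincing a look-alike URL can be (my example, not one from the talk), the WHATWG URL API available in modern browsers and Node.js exposes the difference:

```javascript
// Both hostnames render almost identically, but the second begins with
// the Cyrillic letter "а" (U+0430) rather than the Latin "a".
const real = new URL("https://apple.com");
const fake = new URL("https://\u0430pple.com");

console.log(real.hostname); // "apple.com"
console.log(fake.hostname); // "xn--pple-43d.com" -- the punycode form exposes the trick
```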
STMicroelectronics has announced a new Cortex-A SoC and Linux- and Android-driven processor. The STM32MP1 SoC intends to ease the transition for developers moving from its STM32 microcontroller unit (MCU) family to more complex embedded systems. Development boards based on the SoC will be available in April.
Aimed at industrial, consumer, smart home, health, and wellness applications, the STM32MP1 features dual 650MHz Cortex-A7 cores running a new “mainlined, open-sourced” OpenSTLinux distro with Yocto Project and OpenEmbedded underpinnings. There’s also a 209MHz Cortex-M4 core with an FPU, MPU, and DSP instructions. The Cortex-M4 is supported by an enhanced version of ST’s STM32Cube development tools that supports the Cortex-A7 cores in addition to the M4 (see below).
Like most of NXP’s recent embedded SoCs, including the single- or dual-core Cortex-A7 i.MX7 and its newer, Cortex-A53-based i.MX8M and i.MX8M Mini, the STM32MP1 is a hybrid Cortex-A/M design intended, in ST’s words, to “perform fast processing and real-time tasks on a single chip.” Hybrid Cortex-A7/M4 SoCs are also available from Renesas, Marvell, and MediaTek, which has developed a custom-built MT3620 SoC as the hardware foundation for Microsoft’s Azure Sphere IoT framework.
As the market leader in Cortex-M MCUs, ST has made a larger leap from its comfort zone than these other semiconductor vendors. NXP is also a leading MCU vendor, but it’s been crafting Linux-powered Cortex-A SoCs since long before it changed its name from Freescale. The STM32MP1 launch continues a trend of MCU technology companies reaching out to the Linux community, such as Arm’s new Mbed Linux distro and Pelion IoT Platform, which orchestrates Cortex-M and Cortex-A devices under a single IoT platform.
Inside the STM32MP1
The STM32MP1 is equipped with 32KB instruction and data caches, as well as a 256KB L2 cache. ST also supplies an optional Vivante 3D GPU with support for OpenGL ES 2.0 and 24-bit parallel RGB displays at up to WXGA (1280×800) at 60fps.
The SoC supports a 2-lane MIPI-DSI interface running at 1Gbps and offers native support for Linux and application frameworks such as Android, Qt, and Crank Software’s Storyboard GUI. While the GPU is pretty run-of-the-mill for Cortex-A7 SoCs, it’s a giant leap from the perspective of MCU developers trying to bring up modern HMI displays.
Three SoC models are available: one with the 3D GPU, MIPI-DSI, and 2x CAN FD interfaces; one with 2x CAN FD only; and one with neither the GPU nor CAN I/O.
The STM32MP1 is touted for its rolling 10-year longevity support and heterogeneous architecture, which lets developers halt the Cortex-A7 and run only on the Cortex-M4 to reduce power consumption by 25 percent. From this mode, “going to Standby further cuts power by 2.5k times — while still supporting the resumption of Linux execution in 1 to 3 seconds, depending on the application,” says ST. The SoC includes a PMIC and other power circuitry such as buck and boost converters.
For security, the SoC provides Arm TrustZone, cryptography, hash, secure boot, anti-tamper pins, and a real-time clock. RAM support includes 32/16-bit, 533MHz DDR3, DDR3L, LPDDR2, LPDDR3. Flash support includes SD, eMMC, NAND, and NOR.
Peripherals include Cortex-A7-linked GbE, 3x USB 2.0, I2C, and multiple UART and SPI links. I/O connected to the Cortex-M4 includes 2x 16-bit ADCs, 2x 12-bit DACs, 29x timers, 3x watchdogs, LDOs, and up to 176 GPIOs.
OpenSTLinux, STM32Cube, and starter kits
The new OpenSTLinux distribution “has already been reviewed and accepted by the Linux community: Linux Foundation, Yocto Project, and Linaro,” says ST. The Linux BSP includes a mainline kernel, drivers, boot chain, and Linaro’s OP-TEE (Trusted Execution Environment) security stack. It also includes Wayland/Weston, GStreamer, and ALSA libraries.
Three Linux software development packages are available: a quick Starter package with STM32CubeMP1 samples; a Dev package with a Yocto Project SDK that lets you add your own Linux code; and an OpenEmbedded-based Distrib package that also lets you create your own OpenSTLinux-based distro. ST has collaborated with Timesys on the Linux BSPs and with Witekio to port Android to the STM32MP1.
STM32 developers can “easily find their marks” by using the familiar STM32Cube toolset to control both the Cortex-M4 and Cortex-A7 chips. The toolset includes GCC-based STM32CubeProgrammer and STM32CubeMX IDEs, which “include the DRAM interface tuning tool for easy configuration of the DRAM sub-system,” says ST.
Finally, ST is supporting its chip with four development boards: the entry-level STM32MP157A-DK1 and STM32MP157C-DK2, and the higher-end STM32MP157A-EV1 and STM32MP157C-EV1. All the boards offer GPIO connectors for the Raspberry Pi and Arduino Uno V3.
The DK1/DK2 boards are equipped with 4GB DDR3L, as well as USB Type-C, USB Type-A OTG, HDMI, and MIPI-DSI. You also get GbE and WiFi/Bluetooth, and a 4-inch, VGA capacitive touch panel, among other features.
The more advanced A-EV1 and C-EV1 boards support up to 8GB DDR3L, 32GB eMMC v5.0, a microSD slot, and SPI and NAND flash. They provide most of the features of the DK boards, as well as CAN, camera support, SAI, SPDIF, digital mics, analog audio, and much more. They also supply 4x USB host ports and a micro-USB port. A 5.5-inch 720×1280 touchscreen is available.
Linux Foundation Training has announced a new course designed to provide network engineers with the skills necessary to start applying DevOps practices and leverage their expertise in a DevOps environment.
In the new DevOps for Network Engineers course, you’ll learn how to navigate your role in the CI/CD cycle, find common ground, and use key tools to contribute effectively in areas like connectivity, network performance tuning, security, and other aspects of network management within a DevOps environment.
Network automation is becoming the standard in data centers, with major implications for network engineers. This online, self-paced course will help you become familiar with the tools needed to integrate your skills into the DevOps/Agile process.
Course highlights include:
How to integrate into a DevOps/Agile environment
Commonly used DevOps tools
How DevOps teams collaborate on projects
How to confidently work with software and configuration files in version control
How to confidently apply Agile principles in an organization