Calculating a running total/rolling sum in SQL is a useful skill to have.
It can often come in handy for reporting and even when developing applications. Sometimes your users might want to see a running total of the points they have gained or perhaps the money they have earned. Like many problems in SQL, there are multiple ways you can solve this problem.
You can use analytic functions, self-joins, or an aggregate table that tracks the running sum. Here are a few examples. (Also, skip down to the bottom if you just want to watch these explanations in video form.)
Using An Analytic Function
Using an analytic function is the easiest way to calculate a running total. An analytic function lets you partition data by a specific field. For instance, in this case, we can break down the rolling sum by driver_id and month. This gives us a running total per driver and month, so every month the total starts again at 0.
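A minimal sketch of that approach, using Python's sqlite3 module for a self-contained demo (the table name, column names, and amounts are invented for illustration; window functions require SQLite 3.25 or later):

```python
import sqlite3

# In-memory demo table; schema and values are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (driver_id INT, month INT, amount INT)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [(1, 1, 10), (1, 1, 20), (1, 2, 5), (2, 1, 7), (2, 1, 3)],
)

# PARTITION BY driver_id, month restarts the running total for each
# driver/month pair; ORDER BY rowid accumulates in insertion order.
query = """
SELECT driver_id, month, amount,
       SUM(amount) OVER (
           PARTITION BY driver_id, month
           ORDER BY rowid
       ) AS running_total
FROM payments
ORDER BY driver_id, month, rowid
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
```

Each row carries the sum of all amounts seen so far within its partition, which is exactly the "restart at 0 each month" behavior described above.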
Even if you’re already familiar with the printf command, if you got your information via “man printf” you may be missing a couple of useful features that are provided by bash’s built-in version of the standard printf(1) command.
If you didn’t know bash had its own version of printf, then you didn’t heed the note in the man page for the printf(1) command:
NOTE: your shell may have its own version of printf, which usually supersedes the version described here. Please refer to your shell’s documentation for details about the options it supports.
You did read the man page, didn’t you? I must confess, I’d used printf for quite a while before I realized bash had its own.
To find the documentation for the built-in version of printf, just search for “printf” in the bash man page.
In case you’re completely unfamiliar with the printf command, and similar functions in other languages, a couple quick examples should get you up to speed:
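In that spirit, a couple of quick examples (the values are arbitrary). The bash-only extras are noted in comments rather than executed, so the snippet itself stays portable:

```shell
# The format string is reused until every argument is consumed,
# so this prints apples=3 and then oranges=5 on separate lines.
printf '%s=%d\n' apples 3 oranges 5

# Width, precision, and zero-padding work as in C's printf(3).
printf '%05.2f\n' 3.14159

# bash's builtin adds, among other things:
#   printf -v var FORMAT ARGS...   store the result in a variable
#   %q                             shell-quote the argument for safe reuse
# See the printf entry in the bash man page for the full list.
```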
Modern software teams face no shortage of edge cases and variations across the service categories and tiers of their ever-evolving architectures. In the midst of leading a team through the day-to-day firefighting, it can be difficult to see the forest for the trees. But as managers, we know our teams face similar trials: defects and regressions, capacity problems, operational debt and dangerous workloads affect all of us.
And then there is the complexity of scale, something we know about first hand. The New Relic platform includes more than 300 unique services and petabytes of SSD storage that handle at least 40 million HTTP requests, write 1.5 billion new data points, and process trillions of events … every minute. The platform is maintained by more than 50 agile teams performing multiple production releases a week. To cope with serious scale like this, engineering teams must be nimble and fast moving. Their managers must also ensure that their teams adhere to reliability processes that support this kind of complexity and scale.
Customize your Linux installation and gain working knowledge of your system at the same time.
Most Linux users are content with a standard installation of their distribution of choice. However, many prefer a custom installation. They may simply prefer to do things their way without dozens of post-install tweaks. Others may want to know exactly what they are installing as a requirement for security. Still others may want a consistent installation for multiple machines or to learn more about their operating system step by step. Linux offers tools for all these purposes.
Admittedly, most of these tools are for major distributions. A survey of these tools shows that many are for time-tested distros like Debian or openSUSE. If you want a custom install of, say, KDE neon or Puppy Linux, you may not find a ready-made solution. But among the major distributions, you are likely to find multiple solutions. Read on for some of the main options.
Roll Your Own Desktops
Traditionally, many distributions install with a default desktop. For instance, Fedora and Ubuntu default to GNOME, and Mageia to KDE Plasma. Users who prefer another desktop can choose from a wide range after installation, although they should often think twice, because such distros typically ship with a range of utilities designed for their default desktop.
One of the most popular tasks undertaken on Linux is development. With good reason: Businesses rely on Linux. Without Linux, technology simply wouldn’t meet the demands of today’s ever-evolving world. Because of that, developers are constantly working to improve the environments with which they work. One way to manage such improvements is to have the right platform to start with. Thankfully, this is Linux, so you always have a plethora of choices.
But sometimes, too many choices can be a problem in and of itself. Which distribution is right for your development needs? That, of course, depends on what you’re developing, but certain distributions just make sense to use as a foundation for your task. I’ll highlight five distributions I consider the best for developers in 2019.
Let’s not mince words here. Although the Linux Mint faithful are an incredibly loyal group (with good reason; their distro of choice is fantastic), Ubuntu Linux gets the nod here. Why? Because, thanks to the likes of AWS, Ubuntu is one of the most widely deployed server operating systems. That means developing on an Ubuntu desktop distribution translates much more easily to Ubuntu Server. And because Ubuntu makes it incredibly easy to develop for, work with, and deploy containers, it makes perfect sense that you’d want to work with this platform. Couple that with Ubuntu’s inclusion of Snap packages, and Canonical’s operating system gets yet another boost in popularity.
But it’s not just about what you can do with Ubuntu, it’s how easily you can do it. For nearly every task, Ubuntu is an incredibly easy distribution to use. And because Ubuntu is so popular, chances are every tool and IDE you want to work with can be easily installed from the Ubuntu Software GUI (Figure 1).
Figure 1: Developer tools found in the Ubuntu Software tool.
If you’re looking for ease of use, simplicity of migration, and plenty of available tools, you cannot go wrong with Ubuntu as a development platform.
There’s a very specific reason why I add openSUSE to this list. Not only is it an outstanding desktop distribution, it’s also one of the best rolling releases you’ll find on the market. So if you’re wanting to develop with and release for the most recent software available, openSUSE Tumbleweed should be one of your top choices. If you want to leverage the latest releases of your favorite IDEs, if you always want to make sure you’re developing with the most recent libraries and toolkits, Tumbleweed is your platform.
But openSUSE doesn’t just offer a rolling release distribution. If you’d rather make use of a standard release platform, openSUSE Leap is what you want.
Of course, it’s not just about standard or rolling releases. The openSUSE platform also has a Kubernetes-specific release, called Kubic, which is based on Kubernetes atop openSUSE MicroOS. But even if you aren’t developing for Kubernetes, you’ll find plenty of software and tools to work with.
And openSUSE also offers the ability to select your desktop environment, or (should you choose) a generic desktop or server (Figure 2).
Figure 2: The openSUSE Tumbleweed installation in action.
Using Fedora as a development platform just makes sense. Why? The distribution itself seems geared toward developers. With a regular, six-month release cycle, developers can be sure they won’t be working with out-of-date software for long. This can be important when you need the most recent tools and libraries. And if you’re developing for enterprise-level businesses, Fedora makes for an ideal platform, as it is the upstream for Red Hat Enterprise Linux. What that means is the transition to RHEL should be painless. That’s important, especially if you hope to bring your project to a much larger market (one with deeper pockets than a desktop-centric target).
Fedora also offers one of the best GNOME experiences you’ll come across (Figure 3). This translates to a very stable and fast desktop.
Figure 3: The GNOME desktop on Fedora.
But if GNOME isn’t your jam, you can opt to install one of the Fedora spins (which includes KDE, XFCE, LXQT, Mate-Compiz, Cinnamon, LXDE, and SOAS).
I’d be remiss if I didn’t include System76’s platform, customized specifically for their hardware (although it does work fine on other hardware). Why would I include such a distribution, especially one that doesn’t really venture far from the Ubuntu platform on which it is based? Primarily because this is the distribution you want if you plan on purchasing a desktop or laptop from System76. But why would you do that (especially given that Linux works on nearly all off-the-shelf hardware)? Because System76 sells outstanding hardware. With the release of their Thelio desktop, you have available one of the most powerful desktop computers on the market. If you’re developing seriously large applications (especially ones that lean heavily on very large databases or require a lot of processing power for compilation), why not go for the best? And since Pop!_OS is perfectly tuned for System76 hardware, this is a no-brainer. Since Pop!_OS is based on Ubuntu, you’ll have all the tools available to the base platform at your fingertips (Figure 4).
Figure 4: The Anjuta IDE running on Pop!_OS.
Pop!_OS also defaults to encrypted drives, so you can trust your work will be safe from prying eyes (should your hardware fall into the wrong hands).
For anyone who likes the idea of developing on Arch Linux but doesn’t want to jump through all the hoops of installing and working with it, there’s Manjaro. Manjaro makes it easy to get an Arch Linux-based distribution up and running (as easily as installing and using, say, Ubuntu).
But what makes Manjaro developer-friendly (besides enjoying that Arch-y goodness at the base) is how many different flavors you’ll find available for download. From the Manjaro download page, you can grab the following flavors:
GNOME
XFCE
KDE
OpenBox
Cinnamon
i3
Awesome
Budgie
Mate
Xfce Developer Preview
KDE Developer Preview
GNOME Developer Preview
Architect
Deepin
Of note are the developer editions (which are geared toward testers and developers), the Architect edition (which is for users who want to build Manjaro from the ground up), and the Awesome edition (Figure 5 – which is for developers dealing with everyday tasks). The one caveat to using Manjaro is that, like any rolling release, the code you develop today may not work tomorrow. Because of this, you need to think with a certain level of agility. Of course, if you’re not developing for Manjaro (or Arch), and you’re doing more generic (or web) development, that will only affect you if the tools you use are updated and no longer work for you. Chances of that happening, however, are slim. And like with most Linux distributions, you’ll find a ton of developer tools available for Manjaro.
Figure 5: The Manjaro Awesome Edition is great for developers.
Manjaro also supports the Arch User Repository (a community-driven repository for Arch users), which includes cutting-edge software and libraries, as well as proprietary applications like Unity Editor or yEd. A word of warning, however, about the Arch User Repository: It was discovered that the AUR contained software considered to be malicious. So, if you opt to work with that repository, do so carefully and at your own risk.
Any Linux Will Do
Truth be told, if you’re a developer, just about any Linux distribution will work. This is especially true if you do most of your development from the command line. But if you prefer a good GUI running on top of a reliable desktop, give one of these distributions a try; they will not disappoint.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
It is important to understand some basic concepts of AI, ML, and deep learning to get a better sense of what they do and how they can be useful.
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
Machine learning (ML) is an approach to achieving artificial intelligence. ML gives computers the ability to learn without being explicitly programmed. At its most basic, it is the practice of using algorithms to parse data, learn from it, and then make predictions about something in the world.
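That parse-data, learn, predict loop can be shown with a toy example (the data points below are invented): fit a one-parameter line to some observations, then apply the learned parameter to unseen input.

```python
# "Parse data": a handful of (x, y) observations, made up for illustration.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

# "Learn": closed-form least-squares estimate of w for the model y = w * x.
w = sum(x * y for x, y in data) / sum(x * x for x, y in data)

# "Predict": use the learned parameter on an input we never saw.
prediction = w * 5
print(f"learned w = {w:.2f}, prediction for x=5: {prediction:.2f}")
```

The "learning" here is a single closed-form formula; real ML algorithms do the same thing at much larger scale, with many parameters fit iteratively.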
Deep learning (DL) is a technique for implementing machine learning. DL applies artificial neural networks that contain more than one hidden layer to learning tasks.
Artificial neural networks are computing systems inspired by the biological neural networks that constitute animal brains. An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. In a typical diagram, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another.
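Hidden layers are what let a network compute functions that no single neuron can. As a minimal sketch (weights wired by hand rather than learned), two hidden neurons are enough to compute XOR:

```python
# A hand-wired two-layer network: two hidden neurons feed one output
# neuron. The weights and thresholds are fixed by hand, not trained.
def step(x):
    # Simple threshold activation: fire (1) if the weighted input is positive.
    return 1 if x > 0 else 0

def forward(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden neuron acting like OR
    h2 = step(x1 + x2 - 1.5)    # hidden neuron acting like AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", forward(a, b))
```

A single neuron cannot separate XOR's inputs, which is the classic motivation for hidden layers; deep learning simply stacks many such layers and learns the weights from data.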
Artificial intelligence (AI) is not sci-fi anymore; machines have made their way into our lives with ever-increasing importance. Today, humans are teaching machines and machines already affect the way we live, make choices, and get entertained.
There are many ways we already use AI in our everyday lives:
* We ask our devices to perform simple searching tasks, play music, or send messages without touching them.
* We are overwhelmed with sometimes creepy suggestions of things we “want” to buy or lists of movies we will enjoy watching according to some smart algorithms.
* We’re already used to the idea of self-driving cars.
* And we can’t ignore the convenience of the new auto-fill and follow-up Gmail features.
Machine Learning on Code
As AI technology matures and the number of use cases grows, you would think that developers would already be using machine learning to automate some aspects of the software development lifecycle. However, Machine Learning on Code is actually a field of research that is just starting to materialize into enterprise products. One of the pioneers of this movement is a company called source{d}, which is building a series of open source projects turning code into actionable data and training machine learning models to help developers respect technical guidelines.
With every company quickly becoming a software company, intangible assets such as code represent a larger share of their market value. Therefore companies should strive to understand their codebase through meaningful analytic reports to inform engineering decisions and develop a competitive advantage for the business.
On one hand, managers can use tools like the open source source{d} engine to easily retrieve and analyze all their Git repositories via a friendly SQL API. They can run it from any Unix system, and it will automatically parse their companies’ source code in a language-agnostic way to identify trends and measure progress made on key digital transformation initiatives.
For example, as an engineering manager, you can track the evolution of your software portfolio. You can easily see which programming languages and which open source or proprietary frameworks are becoming more popular as part of your development process. With that extra visibility, it becomes a whole lot easier to decide who to hire and to develop a set of company-wide best practices.
On the other hand, developers can save an incredible chunk of time by training bots to review their code as they submit pull requests (PRs). Once enabled across a large set of repositories, this could automate part of the code review process and enable developers to ship secure, high-quality code faster than ever before.
At the moment, it checks for common mistakes, makes sure the style and format of each commit is consistent with the existing code base, and highlights hotspots that might need closer attention. That’s huge already and clearly can benefit not only developers but companies as well. Imagine how much time and resources you could save by delegating your code review to a bot capable of working 24/7.
Assisted or automated code review is not the only Machine Learning on Code use case. In the coming years, machine learning will be used to automate quality assurance and testing, as well as bug prediction or hardware performance. For now, you can try source{d} Lookout and install it on your repository. It will listen for PRs, run analyzers and comment results directly on GitHub.
This article was produced in partnership with Holberton School.
Long ago, the Linux kernel started using 00-Index files to list the contents of each documentation directory. This was intended to explain what each of those files documented. Henrik Austad recently pointed out that those files have been out of date for a very long time and were probably not used by anyone anymore.
He posted a patch to rip them all unceremoniously out of the kernel.
Jonathan Corbet was more reserved. He felt Henrik should distribute the patch among a wider audience and see if it got any resistance. He added:
I’ve not yet decided whether I think this is a good idea or not. We certainly don’t need those files for stuff that’s in the RST doctree, that’s what the index.rst files are for. But I suspect some people might complain about losing them for the rest of the content. I do get patches from people updating them, so some folks do indeed look at them.
With open source experience becoming more and more a requirement in job postings, job seekers are finding it critical to become part of this community. Many employers would rather see a GitHub profile than a CV.
But currently, only about 10 percent of open source contributors are women.
For this episode of The New Stack Makers podcast, Dr. Anita Sarma, associate professor of computer science in the Department of Electrical Engineering and Computer Science at Oregon State University, joins us to talk about her recent research on how to increase gender inclusivity in open source.
Her recent research focuses on five problem-solving facets in which men and women statistically differ.
Do you want to do machine learning using R, but you’re having trouble getting started?
In this post you will complete your first machine learning project using R.
In this step-by-step tutorial you will:
Download and install R and get the most useful package for machine learning in R.
Load a dataset and understand its structure using statistical summaries and data visualization.
Create 5 machine learning models, pick the best and build confidence that the accuracy is reliable.
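As a taste of what those steps look like in practice, here is a minimal sketch, assuming the caret package is installed (the seed value and the choice of "lda" as the algorithm are arbitrary; it is one of several candidate model types you would compare):

```r
# Load caret and the classic iris dataset bundled with R.
library(caret)
data(iris)

# Hold back 20% of the rows as a validation set.
set.seed(7)
index <- createDataPartition(iris$Species, p = 0.80, list = FALSE)
dataset <- iris[index, ]
validation <- iris[-index, ]

# Train one candidate model (linear discriminant analysis) with
# 10-fold cross-validation, then check accuracy on the held-back rows.
control <- trainControl(method = "cv", number = 10)
fit.lda <- train(Species ~ ., data = dataset, method = "lda",
                 trControl = control, metric = "Accuracy")
predictions <- predict(fit.lda, validation)
confusionMatrix(predictions, validation$Species)
```

In the full project you would train several such models (swapping the `method` argument), compare their cross-validated accuracy, and keep the best one.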
If you are a machine learning beginner and looking to finally get started using R, this tutorial was designed for you.
Let’s get started!
The best way to learn machine learning is by designing and completing small projects.
R Can Be Intimidating When Getting Started
R provides a scripting language with an odd syntax. There are also hundreds of packages and thousands of functions to choose from, providing multiple ways to do each task. It can feel overwhelming.
The best way to get started using R for machine learning is to complete a project.
It will force you to install and start R (at the very least).
It will give you a bird’s-eye view of how to step through a small project.
It will give you confidence, maybe to go on to your own small projects.