A friend recently said to me, “We can’t do DevOps, we use a SQL database.” I nearly fell off my chair. Such a statement is wrong on many levels.
“But you don’t understand our situation!” he rebuffed. “DevOps means we’ll be deploying new releases of our software more frequently! We can barely handle deployments now and we only do it a few times a year!”
I asked him about his current deployment process. …
Let me start by clearing up a number of misconceptions. Then let’s talk about some techniques for making those deployments much, much easier.
First, DevOps is not a technology; it is a methodology.
DevOps doesn’t require or forbid any particular database technology—or any technology, for that matter. Saying you can or cannot “do DevOps” because you use a particular technology is like saying you can’t apply agile to a project that uses a particular language. SQL may be a common “excuse of the month,” but it is a weak excuse.
I understand how DevOps and the lack of SQL databases could become inextricably linked in some people’s minds. In the 2000s and early 2010s, the companies that were inventing and popularizing DevOps were frequently big websites that were, by coincidence, also popularizing NoSQL (key/value store) databases. Linking the two, however, is confusing correlation with causation. Those same companies were also popularizing providing gourmet lunches to employees at no charge. We can all agree that is not a prerequisite for DevOps.
Mainframes are, and will continue to be, a bedrock for industries and organizations that run mission-critical applications. In one way or another, all of us are mainframe users. Every time you make an online transaction or make a reservation, for example, you are using a mainframe.
According to IBM, corporations use mainframes for applications that depend on scalability and reliability. They rely on mainframes in order to:
Perform large-scale transaction processing (thousands of transactions per second)
Support thousands of users and application programs concurrently accessing numerous resources
Manage terabytes of information in databases
Handle large-bandwidth communication
Often when people hear the word mainframe, though, they think of dinosaurs. It’s true mainframes have aged, and one challenge the mainframe community faces is attracting fresh developers who want to use the latest and shiniest technologies.
Zowe milestones
Zowe, a Linux Foundation project under the umbrella of the Open Mainframe Project, is changing all that. Through this project, industry heavyweights including IBM, Rocket Software, and Broadcom came together to modernize mainframes running z/OS.
Let’s start with an uncontroversial point: Software developers and system operators love Kubernetes as a way to deploy and manage applications in Linux containers. Linux containers provide the foundation for reproducible builds and deployments, but Kubernetes and its ecosystem provide essential features that make containers great for running real applications, like:
Continuous integration and deployment, so you can go from a Git commit to a passing test suite to new code running in production
Ubiquitous monitoring, which makes it easy to track the performance and other metrics about any component of a system and visualize them in meaningful ways
Declarative deployments, which allow you to rely on Kubernetes to recreate your production environment in a staging environment
Flexible service routing, which means you can scale services out or gradually roll updates out to production (and roll them back if necessary), as sketched below
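As a rough sketch of what these features look like from the command line (the deployment name my-app and the manifest file deployment.yaml are hypothetical), the kubectl CLI covers the declarative and routing pieces directly:

kubectl apply -f deployment.yaml
kubectl scale deployment my-app --replicas=5
kubectl rollout undo deployment my-app

The first command declares the desired state from a manifest, the second scales the service out, and the third rolls a bad update back.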
I’ve played with Linux on several of my own machines, but I recently unboxed my first custom-built Linux PC courtesy of Tuxedo Computers. It’s called the InfinityCube v9, and it’s left me very impressed. In fact, I’ve been leaning on it more than the beefy AMD Ryzen Threadripper 1950X rig I built because it’s silent and super stable. Tuxedo Computers just launched the InfinityCube on their web shop, so let’s take a quick look at this new desktop along with some initial benchmarks… Read more at Forbes
The telecom industry is at the heart of the fourth industrial revolution. Whether it’s connected IoT devices or mobile entertainment, the modern economy runs on the Internet.
However, the backbone of networking has been running on legacy technologies. Some telecom companies are centuries old, and they have a massive infrastructure that needs to be modernized.
The great news is that this industry is already at the forefront of emerging technologies. Companies such as AT&T, Verizon, China Mobile, DTK, and others have embraced open source technologies to move faster into the future. And LF Networking is at the heart of this transformation.
“2018 has been a fantastic year,” said Arpit Joshipura, General Manager of Networking at The Linux Foundation, speaking at Open Source Summit in Vancouver last fall. “We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking.”
The Linux cat and zcat commands are more useful than you may realize.
Cat is a fairly simple tool designed to concatenate and write file(s) to your screen, which is known as standard output (stdout). It is part of the GNU Core Utils released under the GPLv3+ license. You can expect to find it in just about any Linux distribution or other Unix operating environment, such as FreeBSD or Solaris. The simplest use of cat is to show the contents of a file. Here is an example with a file named hello.world:
$ ls
hello.world
$ cat hello.world
Hello World!
$
The most common way I use the cat command is for viewing configuration files, such as those in the /etc directory.
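Its companion zcat does the same job for gzip-compressed files, writing the decompressed contents to standard output. A quick sketch (the rotated log file name here is hypothetical):

$ zcat syslog.2.gz

This makes it easy to read compressed, rotated log files without first decompressing them to disk.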
I LOVE computer networking (it’s what I spent a big chunk of the last few years at work doing), but getting started with all the tools was originally a little tricky! For example – what if you have the IP address of a server and you want to make an https connection to it and check that it has a valid certificate? But you haven’t changed DNS to resolve to that server yet (because you don’t know if it works!) so you need to use the IP address? If you do curl https://1.2.3.4/, curl will tell you that the certificate isn’t valid (because it’s not valid for 1.2.3.4). So you need to know to do curl https://jvns.ca --resolve jvns.ca:443:104.198.14.52.
I know how to use curl --resolve because my coworker told me how. And I learned that to find out when a cert expires you can do openssl x509 -in YOURCERT.pem -text -noout the same way. So the goal with this zine is basically to be “your very helpful coworker who gives you tips about how to use networking tools” in case you don’t have that person.
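To keep both tricks in one place, here they are as standalone commands (the hostname and IP come from the example above; the certificate path is a placeholder):

curl https://jvns.ca --resolve jvns.ca:443:104.198.14.52
openssl x509 -in YOURCERT.pem -text -noout

The first tests TLS against a specific IP before you change DNS; the second prints a certificate’s details, including its validity dates.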
Learn how to use Cockpit for Linux administration tasks in this tutorial from our archives.
If you administer a Linux server, you’ve probably been in search of a solid administration tool. That quest has probably taken you to such software as Webmin and cPanel. But if you’re looking for an easy way to manage a Linux server that also includes Docker, one tool stands above the rest for that particular purpose: Cockpit.
Why Cockpit? Because it includes the ability to handle administrative tasks such as:
Connect and manage multiple machines
Manage containers via Docker
Interact with Kubernetes or OpenShift clusters
Modify network settings
Manage user accounts
Access a web-based shell
View system performance information by way of helpful graphs
View system services and log files
Cockpit can be installed on Debian, Red Hat, CentOS, Arch Linux, and Ubuntu. Here, I will focus on installing the system on an Ubuntu 16.04 server that already includes Docker.
Out of the list of features, the one that stands out is container management. Why? Because it makes installing and managing containers incredibly simple. In fact, you might be hard-pressed to find a better container management solution. With that said, let’s install this solution and see just how easy it is to use.
Installation
As I mentioned earlier, I will be installing Cockpit on an instance of Ubuntu 16.04, with Docker already running. The steps for installation are quite simple. The first thing you must do is log into your Ubuntu server. Next, you must add the necessary repository; at the time of writing, that meant adding the Cockpit PPA with the command:
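sudo add-apt-repository ppa:cockpit-project/cockpit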
When prompted, hit the Enter key on your keyboard and wait for the prompt to return. Once you are back at your bash prompt, update apt with the command:
sudo apt-get update
Install Cockpit by issuing the command:
sudo apt-get -y install cockpit cockpit-docker
After the installation completes, it is necessary to start the Cockpit service and then enable it so it auto-starts at boot. Cockpit is socket activated, so the unit to manage is cockpit.socket; issue the following two commands:
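sudo systemctl start cockpit.socket
sudo systemctl enable cockpit.socket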
To gain access to the Cockpit web interface, point a browser (that happens to be on the same network as the Cockpit server) to http://IP_OF_SERVER:9090, and you will be presented with a login screen (Figure 1).
Figure 1: The Cockpit login screen.
A word of warning about using Cockpit on Ubuntu: many of the tasks that can be undertaken with Cockpit require administrative access. If you log in as a standard user, you won’t be able to work with some of the tools, such as Docker. To get around that, you can enable the root user on Ubuntu. This isn’t always a good idea; by enabling the root account, you are bypassing a security safeguard that has been in place for years. However, for the purpose of this article, I will enable the root user with the following two commands:
sudo passwd root
sudo passwd -u root
NOTE: Make sure you give the root account a very challenging password.
Should you want to revert this change, you only need issue the command:
sudo passwd -l root
With other distributions, such as CentOS and Red Hat, you will be able to log into Cockpit with the username root and the root password, without having to jump through the extra hoops described above. If you’re hesitant to enable the root user, you can always pull down images from the server terminal (using the command docker pull IMAGE_NAME, where IMAGE_NAME is the image you want to pull). That adds the image to your Docker server, where it can then be managed by a regular user. The only caveat is that the regular user must be added to the Docker group with the command:
sudo usermod -aG docker USER
Where USER is the actual username to be added to the group. Once you’ve done that, log out, log back in, and then restart Docker with the command:
sudo service docker restart
Now the regular user can start and stop the added Docker images/containers without having to enable the root user. The only caveat is that this user will not be able to add new images via the Cockpit interface.
Using Cockpit
Once you’ve logged in, you will be treated to the Cockpit main window (Figure 2).
Figure 2: The Cockpit main window.
You can go through each of the sections to check on the status of the server, work with users, etc., but we want to go right to the containers. Click on the Containers section to display the currently running containers as well as the available images (Figure 3).
Figure 3: Managing containers is incredibly simple with Cockpit.
To start an image, simply locate the image and click the associated start button. From the resulting popup window (Figure 4), you can check all the information about the image (and adjust as needed), before clicking the Run button.
Figure 4: Running a Docker image with the help of Cockpit.
Once the image is running, you can check its status by clicking the entry under the Containers section and then Stop, Restart, or Delete the instance. You can also click Change resource limits to adjust the memory limit and/or CPU priority.
Adding new images
If you have logged on as the root user, you can add new images with the help of the Cockpit GUI. From the Containers section, click the Get new image button and then, in the resulting window, search for the image you want to add. Say you want to add the latest official build of CentOS. Type centos in the search field and then, once the search results populate, select the official listing and click Download (Figure 5).
Figure 5: Adding the latest build of the official CentOS image to Docker, via Cockpit.
Once the image has downloaded, it will be available to Docker and can be run via Cockpit.
As simple as it gets
Managing Docker doesn’t get any easier. Yes, there is a caveat when working with Cockpit on Ubuntu, but if it’s your only option, there are ways to make it work. With the help of Cockpit, you can not only easily manage Docker images but also do so from any web browser that has access to your Linux server. Enjoy your newfound Docker ease.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
Despite often being introduced early on in most ventures into programming, the concept of recursion can seem strange and potentially off-putting upon first encountering it. It seems almost paradoxical: how can we find a solution to a problem using the solution to the same problem?
Believe it or not, once we get to grips with it, some problems are easier to solve using recursion than they are to solve using iteration. Sometimes recursion is more efficient, and sometimes it is more readable; sometimes recursion is neither faster nor more readable, but quicker to implement. There are data structures, such as trees, that are well-suited to recursive algorithms. There are even some programming languages with no concept of a loop; purely functional languages such as Haskell depend entirely on recursion for iterative problem solving. The point is simple: You don’t have to understand recursion to be a programmer, but you do have to understand recursion to start to become a good programmer. In fact, I’d go as far as to say that understanding recursion is part of being a good problem solver, all programming aside!
The Essence of Recursion
In general, with recursion we try to break down a more complex problem into a simple step towards the solution and a remainder that is an easier version of the same problem. We can then repeat this process, taking the same step towards the solution each time, until we reach a version of our problem with a very simple solution (referred to as a base case). The simple solution to our base case, aggregated with the steps we took to get there, then forms a solution to our original problem.
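To make that concrete, here is a minimal sketch of recursion in Bash (the factorial example and the function name are purely illustrative). The base case is n <= 1, and every other call takes one simple step (multiply by n) while handing the easier problem (n - 1) back to itself:

factorial() {
    if [ "$1" -le 1 ]; then
        # Base case: a version of the problem we can answer directly
        echo 1
    else
        # Recursive step: solve the easier problem, then take one step
        local prev
        prev=$(factorial $(( $1 - 1 )))
        echo $(( $1 * prev ))
    fi
}

factorial 5    # prints 120

Each call contributes one multiplication, and the base case stops the descent; the aggregated steps are exactly the solution described above.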
The original and simplest version of the Monty Hall problem is this: You are in a prize contest, and in front of you there are three doors (A, B, and C). Behind one of the doors is a prize (a car), while behind the others is a loss (a goat). You first choose a door (let’s say door A). The contest host then opens another door with a goat behind it (let’s say door B), and asks whether you want to stay with your original choice or switch to the remaining door. The question is: which is the better strategy?
The basis of the answer lies in related and unrelated events. The most common answer is that it doesn’t matter which strategy you choose because the chance is 50/50, but it is not. The 50/50 assumption is based on the idea that the first choice (one of three doors) and the second choice (stay or switch) are unrelated events, like flipping a coin twice. In reality, they are related events, and the second event depends on the first.
In the first step, when you choose one of three doors, the probability that you picked the right door is 1/3; in other words, there is a 2/3 chance that you are standing at the wrong door. Being offered a choice between your door and the other remaining one in the second step doesn’t change the fact that you most likely started at the wrong door. Since the host always reveals a goat, switching wins exactly when your first pick was wrong, which happens 2/3 of the time. Therefore, it is better to switch doors in the second step.
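If you’d rather trust an experiment than an argument, here is a small Bash simulation (a sketch; the variable names are arbitrary). Because the host always opens a goat door, switching wins exactly when the first pick misses the prize:

#!/bin/bash
# Monte Carlo check of the two Monty Hall strategies
trials=100000
stay=0
switch=0
for (( i = 0; i < trials; i++ )); do
    prize=$(( RANDOM % 3 ))   # door hiding the car
    pick=$(( RANDOM % 3 ))    # contestant's first choice
    if (( pick == prize )); then
        stay=$(( stay + 1 ))      # staying wins only when the first pick was right
    else
        switch=$(( switch + 1 ))  # otherwise switching lands on the car
    fi
done
echo "Stay wins:   $stay / $trials (about 1/3)"
echo "Switch wins: $switch / $trials (about 2/3)"

Run it a few times: the stay count hovers around a third of the trials, and the switch count around two thirds.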