On October 30, 2019, we officially open sourced Amundsen, our solution to metadata catalog and data discovery challenges. Ten months later, Amundsen joined the Linux Foundation's LF AI Foundation as an incubation project.
In almost every modern data-driven company, each interaction with the platform is powered by data. As data resources grow, it becomes increasingly difficult to understand what data resources exist, how to access them, and what information is available in those sources without tribal knowledge. Poor understanding of data leads to bad data quality, low productivity, duplication of work, and, most importantly, a lack of trust in the data. The complexity of managing a fragmented data landscape is not a problem unique to Lyft, but a common one that exists throughout the industry.
In a nutshell, Amundsen is a data discovery and metadata platform for improving the productivity of data analysts, data scientists, and engineers when interacting with data. By indexing data resources (tables, dashboards, users, etc.) and powering a PageRank-style search based on usage patterns (e.g., highly queried tables show up earlier than less-queried tables), it lets these users address their data needs faster.
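The usage-weighted ranking idea is easy to picture in code. Here is a minimal, hypothetical sketch in Python (not Amundsen's actual implementation); the table names, descriptions, and query counts are invented for illustration:

from dataclasses import dataclass

@dataclass
class TableDoc:
    name: str
    description: str
    query_count: int  # how often the table was queried recently (the usage signal)

def search(catalog: list[TableDoc], term: str) -> list[TableDoc]:
    """Return tables matching `term`, with highly queried tables ranked first."""
    term = term.lower()
    matches = [t for t in catalog
               if term in t.name.lower() or term in t.description.lower()]
    return sorted(matches, key=lambda t: t.query_count, reverse=True)

catalog = [
    TableDoc("rides_daily", "Aggregated daily ride metrics", query_count=5400),
    TableDoc("rides_raw", "Raw ride events, rarely queried directly", query_count=120),
]
print([t.name for t in search(catalog, "rides")])  # ['rides_daily', 'rides_raw']

A production system layers many more signals on top (lineage, ownership, dashboard usage), but the core intuition is the same: usage drives rank.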
This article covers how to migrate an NFS server from kernel space to user space using GlusterFS and NFS-Ganesha.
Click to Read More at Oracle Linux Kernel Development
Linux Foundation Networking (LFN) organized its first virtual event last week and we sat down with Arpit Joshipura, the General Manager of Networking, IoT and Edge at the Linux Foundation, to talk about the key points of the event and how LFN is leading the adoption of open source within the telco space.
Swapnil Bhartiya: Today, we have with us Arpit Joshipura, General Manager of Networking, IoT and Edge, at the Linux Foundation. Arpit, what were some of the highlights of this event? Some big announcements that you can talk about?
Arpit Joshipura: This was a global event with more than 80 sessions and attendees from over 75 countries. The sessions were very diverse. A lot of the sessions were end-user driven and operator driven, as well as from our vendors and partners. If you take LF Networking and LF Edge as the two umbrellas leading the Networking and Edge implementations here, we had some very significant announcements. I would group them into 5 main things:
Number one, we released a white paper at the Linux Foundation level where we had a bunch of vertical industries transformed using open source. These are over 100-year-old industries like telecom, automotive, finance, energy, healthcare, etc. So, that’s kind of one big announcement where vertical industries have taken advantage of open source.
The second announcement was easy enough: Google Cloud joins Linux Foundation Networking as a partner. That announcement comes on the basis of the telecom market and the cloud market converging together and building on each other.
The third major announcement was a project under LF Networking. If you remember, two years ago, a project collaboration with GSMA was started. It was called CNTT, which really defined and narrowed the scope of interoperability and compliance. And we have OPNFV under LFN. What we announced at Open Networking & Edge Summit is that the two projects are going to come together. This would be fantastic for a global community of operators who are simplifying the deployment and interoperability of NFVI, VNFs, and CNFs.
The next announcement was around a research study that we released on open source code that was created by Linux Foundation Networking, using LFN analytics and COCOMO estimation. We’re talking $7.2 billion worth of IP investment, right? This is the power of shared technology.
And finally, we released a survey of the Edge community asking them, “Why are you contributing to open source?” And the answers were fascinating. They were all around innovation, speed to deployment, and market creation. Yes, cost was important, but not initially.
So those were the 5 big highlights of the show from an LFN and LF Edge perspective.
Swapnil Bhartiya: There are two things that I’m interested in. One is the consolidation that you talked about, and the second is the survey. The fact is that everybody is using open source. There is no doubt about it. But since everybody’s using it, there still seems to be a gap in the awareness of how to be a good open source citizen as well. What have you seen in the telco space?
Arpit Joshipura: First of all, 5 years ago, they were all using black-box and proprietary technologies. Then, we launched a project called OpenDaylight. And of course, OpenDaylight announced its 13th release today, which is around its 6-year anniversary. From being proprietary then to today, in one of the more active projects, ONAP, the telcos are 4 of the top 10 contributors of source code and open source, right? Who would have imagined that AT&T, Verizon, Amdocs, DT, Vodafone, China Mobile, China Telecom, you name it, are all actively contributing code? So that’s a paradigm shift in terms of not only consuming it, but also contributing towards it.
Swapnil Bhartiya: And since you mentioned ONAP, if I’m not wrong, I think AT&T released its own work as ECOMP. And then the projects within the Foundation were merged to create ONAP. And you also mentioned CNTT. So, what I want to understand from you is how many projects are there that you see within the Foundation? The Linux Foundation and all those other foundations are open, so they are a very good place for those projects to come in. It’s obvious that there will be some projects that will overlap. So what is the situation right now? Where do you see some overlap happening and, at the same time, are there still gaps that you need to fill?
Arpit Joshipura: So that’s a question of the philosophies of a foundation, right? I’ll start off with the loosest situation, which is GitHub. There are millions and millions of projects on GitHub. Any PhD student can throw their code on GitHub and say that’s open source, but at the end of the day, if there’s no community around it, that project is dead. Okay. That’s the most extreme scenario. Then, there are foundations like CNCF that have a process of accepting projects that could have competing solutions. May the best project win.
From an LF Networking and LFH perspective, the process is a little bit more restrictive: there is a formal project life cycle document and a process available on the Wiki that looks at the complementary nature of the project, that looks at the ecosystem, that looks at how it will enable and foster innovation. Then based on that, the governing board and the neutral governance that we have set up under the Linux Foundation, they would approve it.
Overall, it depends on the philosophy for LFN and LF Edge. We have 8 projects under each umbrella, and most of these projects are quite complementary when it comes to solving different use cases in different parts of the network.
Swapnil Bhartiya: Awesome. Now, I want to talk about 5G a bit. I did not hear any announcements, but can you talk a bit about what work is going on to help the further deployment of 5G technologies?
Arpit Joshipura: Yeah. I’m happy and sad to say that 5G is old news, right? The reality is all of the infrastructure work on 5G was already released earlier this year. The ONAP Frankfurt release, for example, has a blueprint on 5G slicing, right? All the work has been done, lots of blueprints in Akraino using 5G and MEC. So, that work is done. The cities are getting lit up by the carriers. You see announcements from global carriers on 5G deployments. I think there are 2 missing pieces of work remaining for 5G.
One is obviously the O-RAN support, right? The O-RAN Software Community, which we host at the Linux Foundation, is also coming up with its second release. And all the support for 5G is in there.
The second part of 5G is really the compliance and verification testing. A lot of work is going into CNTT and OPNFV. Remember that merged project we talked about, where 5G is in the context of not just OpenStack, but also Kubernetes? So the cloud-native aspects of 5G are all being worked on this year. I think we’ll see a lot more cloud-native 5G deployments next year, primarily because projects like ONAP are cloud native and integrate with platforms like Anthos or Azure Stack and things like that.
Swapnil Bhartiya: What are some of the biggest challenges that the telco industry is facing? I mean, technical problems like network virtualization were there, but the foundations have solved them. What rough edges are still there that you’re trying to resolve for them?
Arpit Joshipura: Yeah. I think the recent pandemic caused a significant change in the telcos’ thinking, right? Fortunately, because they had already started on a virtualization and open-source route, you heard from Android, and you heard from Deutsche Telekom, and you heard from Achronix, all of the operators were able to handle the change in network traffic, the change in traffic direction, SLAs, workloads, etc., right? All because of the softwarization, as we call it, of the network.
Given the pandemic, I think the first challenge for them was, can the network hold up? And the answer is yes, right? All the work-from-home traffic, all the video calls we now have over the web, the network held up. That was number one.
Number two is it’s good to hold up the network, but did I end up spending millions and millions of dollars for operational expenditures? And the answer to that is no, especially for the telcos who have embraced an open-source ecosystem, right? So people who have deployed projects like SDN or ONAP or automation and orchestration or closed-loop controls, they automatically configure and reconfigure based on workloads and services and traffic, right? And that does not require manual labor, right? Tremendous amounts of costs were saved from an opex perspective, right?
Operators who are still in the old mindset have significantly increased their opex, and that has put a real strain on their budgets.
So those were the 2 big things that we felt were challenges, but have been solved. Going forward, now it’s just a quick rollout/build-out of 5G, expanding 5G to Edge, and then partnering with the public cloud providers, at least, here in the US to bring the cloud-native solutions to market.
Swapnil Bhartiya: Awesome. Now, Arpit, if I’m not wrong, LF Edge is, I think, going to celebrate its second anniversary in January. What do you feel the project has achieved so far? What are its accomplishments? And what are some challenges that the project still has to tackle?
Arpit Joshipura: Let me start off with the most important accomplishment as a community and that is terminology. We have a project called State of the Edge and we just issued a white paper, which outlines terminology, terms and definitions of what Edge is because, historically, people use terms like thin edge, thick edge, cloud edge, far edge, near edge and blah, blah, blah. They’re all relative terms. Okay. It’s an edge in relation to who I am.
Instead of that, the paper now defines absolute terms. If I give you a quick example, there are really 2 kinds of edges. There’s a device edge, and then there is a service provider edge. A device edge is really controlled by the operator, by the end user, I should say. Service provider edge is really shared as a service and the last mile typically separates them.
Now, if you double-click on each of these categories, then you have several incarnations of an edge. You can have an extremely constrained edge, microcontrollers, etc., mostly manufacturing, IIoT type. You could have a smart device edge like gateways, etc. Or you could have an on-prem server-type device edge. Either way, an end user controls that edge, versus the other edge, whether it’s at the radio base stations or in a smart central office, which the operator controls. So that’s kind of the first accomplishment, right? Standardizing on terminology.
The second big Edge accomplishment is around 2 projects: Akraino and EdgeX Foundry. These are stage 3 mature projects. They have come out with significant [results]. Akraino, for example, has come out with 20 plus blueprints. These are blueprints that actually can be deployed today, right? Just to refresh, a blueprint is a declarative configuration that has everything from end to end to solve a particular use case. So things like connected classrooms, AR/VR, connected cars, right? Network cloud, smart factories, smart cities, etc. So all these are available today.
EdgeX is the IoT framework for an industrial setup, and it’s kind of the most downloaded. Those 2 projects, along with Fledge, EVE, Baetyl, Home Edge, Open Horizon, Secure Device Onboard, NSoT, right? Very, very strong growth, over 200% growth in terms of contributions. Huge growth in membership, huge growth in new projects and the community overall. We’re seeing that Edge is really picking up. Remember, I told you Edge is 4 times the size of the cloud. So, everybody is in it.
Swapnil Bhartiya: Now, the second part of the question was also about some of the challenges that are still there. You talked about accomplishments. What are the problems that you still think the project has to solve for the industry and the community?
Arpit Joshipura: The fundamental challenge that remains is that we’re still working as a community across different markets. I think the vendor ecosystem is trying to figure out who is the customer and who is the provider, right? Think of it this way: a carrier, for example, AT&T, could be a provider to a manufacturing factory, which actually could consume something from a provider and then ship it to an end user. So, there’s a value shift, if you will, in the business world, on who gets the cut. That’s still a challenge people are trying to figure out. I think people who are quick to define, solve, and implement solutions using open technology will probably turn out to be winners.
People who just do analysis paralysis will be left behind, like in any other industry. I think that is fundamentally number one. And number two, I think, is the speed at which we want to solve things. The pandemic has just accelerated the need for Edge and 5G. People are eager to get gaming with low latency, manufacturing and predictive maintenance with low latency, home surveillance with low latency, connected cars, autonomous driving, all the classroom use cases. These would have come next year, but because of the pandemic, they just got accelerated.
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFS267 – Jenkins Essentials.
LFS267, developed in conjunction with the Continuous Delivery Foundation, is designed for DevOps engineers, Quality Assurance personnel, SREs as well as software developers and architects who want to gain expertise with Jenkins for their continuous integration (CI) and continuous delivery (CD) activities.
Akraino is an open-source project designed for the Edge community to easily integrate open source components into their stack. It’s a set of open infrastructures and application blueprints spanning a broad variety of use cases, including 5G, AI, Edge IaaS/PaaS, IoT, for both provider and enterprise Edge domains. We sat down with Tina Tsou, TSC Co-Chair of the Akraino project to learn more about it and its community.
Here is a lightly edited transcript of the interview:
Swapnil Bhartiya: Today, we have with us Tina Tsou, TSC Co-Chair of the Akraino project. Tell us a bit about the Akraino project.
Tina Tsou: Yeah, I think Akraino is an Edge Stack project under Linux Foundation Edge. Before Akraino, developers had to go to the upstream community to download the upstream software components, then integrate and install them to test. With the blueprint ideas and concept, developers can go directly from the use case to the blueprint, do all the integration, and [have it] ready for end-to-end deployment at the Edge.
Swapnil Bhartiya: The blueprints are the critical piece of it. What are these blueprints and how do they integrate with the whole framework?
Tina Tsou: Based on a certain use case, we do the community CI/CD (continuous integration and continuous deployment). We also have proven security requirements. We do the community lab and we also do the life cycle management. And then we do the production quality, which is deployment-ready.
Swapnil Bhartiya: Can you explain what the Edge computing framework looks like?
Tina Tsou: We have four segments: Cloud, Telco, IoT, and Enterprise. When we do the framework, we have a framework for Edge compute in general, but for each segment it is slightly different. At the lower level, you have the network, you have the gateway, you have the switches. Above that, you have all kinds of FPGAs and then the data plane. Then, you have the controllers and orchestration, like the Kubernetes stuff, and all kinds of applications running on bare metal, virtual machines or containers. By the way, we also have the orchestration on the site.
Swapnil Bhartiya: And how many blueprints are there? Can you talk about it more specifically?
Tina Tsou: I think we have around 20-ish blueprints, but they are converged into blueprint families. We have a blueprint family for telco appliances, including Radio Edge Cloud and SEBA (SDN-Enabled Broadband Access). We also have a blueprint for Network Cloud, a blueprint for Integrated Edge Cloud, and a blueprint for Edge Lite IoT. So, in this case, the different blueprints in the same blueprint family can share the same software framework, which saves a lot of time. That means we can deploy at a large scale.
Swapnil Bhartiya: The software components in each blueprint, which you already talked about, are they all in the Edge project or are there some components from external projects as well?
Tina Tsou: We have the philosophy of upstream first. If we can find it from the upstream community, we just directly take it from the upstream community and install and integrate it. If we find something that we need, we go to the upstream community to see whether it can be changed or updated there.
Swapnil Bhartiya: How challenging or easy is it to integrate these components together to build the stack?
Tina Tsou: It depends on which group and family we are talking about. I think most of them are at a middle level: not too easy, not too complex. But the reference implementation has to create the installation, like the YAML configuration files and the builds of ISO images, so some parts may be more complex and some parts are easy to download and integrate.
Swapnil Bhartiya: We have talked about the project. I want to talk about the community. So first of all, tell us, what is the role of the TSC?
Tina Tsou: We have a whole bunch of documentation on how the TSC runs, if you want to read it. I think the role of the TSC is more tactical steering. We have a chair and co-chair, and there are 6-7 subcommittees for specific topics like security, technical community, CI and documentation process.
Swapnil Bhartiya: What kind of community is there around the Akraino project?
Tina Tsou: I think we have a pretty diverse community. We have end users like the telcos and the hyperscalers, the internet companies, and also enterprise companies. Then we have the OEM/ODM vendors, the chip makers or SoC makers. Then we have the IP companies and even some universities.
Swapnil Bhartiya: Tina, thank you so much for taking the time today to explain the Akraino project and also about the blueprints, the community, and the roadmap for the project. I look forward to seeing you again to get more updates about the project.
Tina Tsou: Thank you for your time. I appreciate it.
COBOL is powering most of the critical infrastructure that involves any kind of monetary transaction. In this special interview, conducted during the recent Open Mainframe Summit, we talked about the relevance of COBOL today and the role of the new COBOL working group that was announced at the summit. Joining us were Cameron Seay, Adjunct Professor at East Carolina University, and Derek Britton of the Application Modernization Group at Micro Focus. Micro Focus recently joined the Open Mainframe Project and is now also involved with the working group.
Here is an edited version of the discussion:
Swapnil Bhartiya: First of all, Cam and Derek, welcome to the show. If you look at COBOL, it’s very old technology. Who is still using COBOL today? Cam, I would like to hear your insight first.
Cameron Seay: Every large commercial bank I know of uses COBOL. Every large insurance company, every large federal agency, every large retailer uses COBOL to some degree, and it processes a large percentage of the world’s financial transactions. For example, if you go to Walmart and you make a sale, that transaction is probably recorded using a COBOL program. So, it’s used a lot, a large percentage of the global business is still done in COBOL.
Swapnil Bhartiya: Micro Focus is, I think, one of the few companies that offer support around COBOL. Derek, please tell people the importance of COBOL in today’s modern world.
Derek Britton: Well, if we go back in time, there weren’t that many choices on the market. If you wanted robust technology to build your business systems, COBOL was one of the very few choices, so it’s surprising that, with so many choices around today, many of the world’s largest industries and organizations still rely on COBOL. Think of the systems people trust and rely on: whether you’re moving money around, running someone’s payroll, getting an insurance quotation, shipping a parcel, or booking a holiday, all of these things are happening with COBOL at the backend, and the value you’re getting is not just that it has carried on, but that it runs with the same results again and again and again, without fail.
The importance of COBOL is not just its pervasiveness, which I think is significant and perhaps not that well understood, but also its reliability. Because it’s welded very closely to the mainframe environment, to CICS, and to some other core elements of the mainframe and other platforms as well, it uses and trusts a lot of technology that is unrivaled in terms of its reliability, scalability and performance. That’s why it remains so important to the global economy and to so many industries. It does what it needs to do, which is business processing, so fantastically well.
Swapnil Bhartiya: Excellent, thanks for talking about that. Now, you guys recently joined the project and the foundation as well, so talk about why you joined the Open Mainframe Project and what are the projects that you will be involved with, of course. I know you’re involved with the working group, but talk about your involvement with the project.
Derek Britton: Well, our initial interest with the Open Mainframe Project goes back a couple of years. We’re longtime proponents of the mainframe platform, of course, here at Micro Focus. We’ve had a range of technologies that run on z/OS. But our interest in the wider mainframe community—and that would be the Open Mainframe Project—probably comes as a result of the time we’ve spent with the SHARE community and other IBM-sponsored communities, where the discussion was about the best way to embrace this trusted technology in the digital era. This is becoming a very topical conversation and that’s also true for COBOL, which I’m sure we’ll come back to.
Our interest in the OMP has been going on for the last couple of years and we were finally able to reach an agreement between both organizations to join the group this year, specifically because of a number of initiatives that we have going on at Micro Focus and that a number of our customers have talked to us about specifically in the area of mainframe DevOps. As vital as the mainframe platform is, there’s a growing desire to use it to deliver greater and greater value to the business, which typically means trying to accelerate delivery cycles and get more done.
Of course, now the mainframe is so inextricably connected with other parts of the IT ecosystem that those points of connection and the number of moving parts have to be handled, integrated with, and managed as part of a delivery process. It’s an important part of our customers’ roadmaps and, therefore, our roadmap to ensure that they get the very best of technology in the mainframe world, whether it’s tried-and-trusted technology, new emerging vendor technology, or, in many cases, open source technology. We wanted to play our part in those kinds of projects and the initiatives around them.
Swapnil Bhartiya: Are we seeing an increase in interest in COBOL now that there is a dedicated working group? And can you also talk a bit about what the role of this group will be?
Cameron Seay: If your question was whether there is an increased interest in working in COBOL because of the working group, the working group actually came as a result of a renewed interest in, and a rediscovery of, COBOL. The governor of New Jersey made a comment that their unemployment claims could not be processed because of COBOL’s obsolescence, or inefficiency, or inadequacy to some degree. And that sparked quite a furor in the mainframe world because it wasn’t COBOL at all. COBOL had nothing to do with New Jersey’s inability to deliver the unemployment checks. Further, we’re aware that New Jersey is just typical of every state. Every state that I know of (there may be some exceptions I’m not aware of; I know it’s certainly true for California and New York) is dependent upon COBOL to process its day-to-day business applications.
So, then Derek and some other people inside the OMP got together and started having some conversations, myself included, and said “We maybe need to form a COBOL working group to renew this interest in COBOL and establish the facts around COBOL.” So that’s kind of what the working group is trying to do, and we’re trying to increase that familiarity, visibility and interest in COBOL.
Swapnil Bhartiya: Derek, I want to bring the same question to you also. Is there any particular reason that we are seeing an increase in interest in COBOL and what is that reason?
Derek Britton: Yeah, that’s a great question and I think there are a few reasons. First of all, I think a really important milestone for COBOL was actually last year when it turned 60 years old. I think one of your earlier questions is related to COBOL’s age being 60. Of course, COBOL isn’t a 60-year-old language but the idea is 60 years old, for sure. If you drive a 2020 motor car, you’re driving a 2020 motor car, you’re not driving a hundred-year-old idea. No one thinks a modern telephone is an old idea, either. It’s not old technology, sorry.
The idea might’ve been from a long time ago, but the technology has advanced, and the same thing is true of COBOL. When we celebrated COBOL’s 60th anniversary last year (a few of the vendors did, and a number of organizations did, too), there was an outpouring of interest in the technology. A lot of the time, COBOL just quietly goes about its business of running the world’s economy without any fuss. Like I said, it’s very, very reliable and it never really breaks. So, it was never anything to talk about. People were pleasantly surprised, I think, to learn of its age, to learn of the age of the idea. Now, of course, Micro Focus and IBM and some of the other vendors continue to update and adapt COBOL so that it continues to evolve and be relevant today.
It’s actually a 2020 technology rather than a 1960 one, but that was the first reason. Secondly, the pandemic caused a lot of businesses to have to change how they process core systems and how they interact with their customers. That put extra strain on certain organizations or certain government agencies and, in a couple of cases, COBOL was incorrectly made the scapegoat for some of the challenges that those organizations faced, whether it was a skills issue or a technology issue. Under the covers, COBOL was working just fine. So the interest around the anniversary has been positive, but I think the reports have been inaccurate and perhaps a little unkind about COBOL. Those were the two reasons, and they came together.
I remember when I first spoke to Cam and to some of the other people on the working group, we said it was a very good idea to once and for all tell the truth about COBOL, so that the industry finally understood how viable and how valuable it is, based on the facts behind COBOL’s usage. So one of the things we’re going to do is try to quantify and qualify, as best we can, how widely COBOL is used, what it is used for, and who is using it, and then present a more factual story about the technology so people can make a more informed decision about technical strategy. Rather than base it on hearsay or some reputation about something being a bit rusty and out-of-date, which is probably the reputation being espoused by someone who would have you replace it with something else, and whose motivation might be different. There’s nothing wrong with COBOL; it’s very, very viable, and our job, I think, really is to tell that truth and make sure people understand it.
Swapnil Bhartiya: What other projects, efforts, or initiatives are going on there at the Linux Foundation or Open Mainframe Project around COBOL? Can you talk about that?
Cameron Seay: Well, certainly. There is currently an online COBOL course being developed by folks in the community. It covers the rudiments; it’s for novices, but it’s great for a continuing education program. So, that’s one of the things going on around COBOL. Another thing is there’s a lot going on in mainframe development in the OMP now. There’s an application framework that has been developed called Zowe that will allow you to develop applications for z/OS. It’s interesting that the focus of the Open Mainframe Project when it first began was Linux on the mainframe, but actually the first real project that came out of it was a z/OS-based product, Zowe, and so we’re interested in that, too. Those are just a couple of peripheral projects that the COBOL working group is going to work with.
There are other things we want to do from a curriculum standpoint down the road, but fundamentally, we just want to be a fact-finding, fact-gathering operation first. Derek Britton has been taking the lead on putting together a substantial reference list so that we can get the facts about COBOL. Then, we’re going to do other things, but we want to get that right first.
Swapnil Bhartiya: So there are, as you mentioned, a couple of projects. Is there any overlap between these projects, or how different are they? Do they all serve a different purpose? When you explain the goal and role of the working group, it sounds like it could also be a training or education group with the same kind of activities. Let me rephrase it properly: what are some of the pressing needs you see for the COBOL community, how are these efforts/groups trying to help, and how do they avoid overlapping or stepping on each other’s toes?
Cameron Seay: That’s an ongoing thing. Susharshna and I really work hard to make sure that we’re not working at cross purposes or duplicating effort. We’re kind of clear about our roles. For the world at large, for the public at large, the role of the working group (and Derek may have a different view on this, because we don’t all think alike or see this exactly the same way) as I see it is information first. We want people to get accurate, current information about COBOL.
Then, we want to provide some vehicle by which COBOL can be reintroduced into the general academic curriculum, because it used to be there. I studied COBOL at a four-year university. Most people did: when they took programming in the ’80s and the ’90s, they took COBOL. But that’s not true anymore. Our COBOL course at East Carolina this semester is the only COBOL course in the entire UNC system. That’s got to change. So information, exposure (accurate information exposure), and some kind of return to the general curriculum: those are the three things that we can provide to the community at large.
Swapnil Bhartiya: If you look at Micro Focus, you are working in the industry, you are actually solving the problem for your customers. What role do these groups or other efforts that are going on there play for the whole ecosystem?
Derek Britton: Well, I think if we go back to Cam’s answer, he’s absolutely right. If you project forward another generation, 25 years from now, who is going to be managing these core business systems that still need to run the world’s largest organizations? I know we’re in a digital era and I know that things are changing at an unprecedented pace, but most of the world’s largest, most successful organizations still want to be in those positions in generations to come. So who is it? Who are those practitioners coming through the education system right now who are going to be leaders in those organizations’ IT departments in the future?
And there is a concern not just for COBOL but for many IT skills across the board. Is there going to be enough talent to actually run the organizations of the future? That’s a genuine question mark, and it’s true for COBOL as well. So Micro Focus, which has its own academic initiative and its own training program, as do IBM and many of the other vendors, applauds the work of all community groups. The OMP is obviously a fabulous example because it is genuinely an open group; it’s genuinely a meritocracy of people with good ideas coming together to try to do the right thing. We applaud the efforts to ensure that there continues to be enough supply of talented IT professionals in the future to meet the ever-growing demand. IT is not going away. It’s going to become strategically more and more important to these organizations.
Our part to play at Micro Focus is really to work shoulder-to-shoulder with organizations like the OMP, because between us, we will create enough groundswell of training and opportunity for that next generation. Many people will tell you there just isn’t enough of that training going on and there aren’t enough of those opportunities available, even though one survey that Micro Focus ran last year, on the back of COBOL’s 60th anniversary, suggests that around 92% of all application owners of COBOL systems confirmed that those applications remain strategic to their organization. So, if the applications are not going anywhere, who’s going to be looking after them in the next generation? That’s the real challenge that I think the industry faces as a whole, which is why Micro Focus is so committed to getting behind the wheel and making sure that we can make a difference.
Swapnil Bhartiya: We discussed that the interest in COBOL is increasing as COBOL plays a very critical role in the modern economy. What kind of future do you see for COBOL and where do you see it going? I mean, it’s been around for 60 years, so it knows how to survive through the times. Still, where do you see it going? Cam, I would love to start with you.
Cameron Seay: Yeah, absolutely. We are trying to estimate how much COBOL is actually in use. That estimate runs into hundreds of billions of lines of code. I know that, for example, Bank of America admits to at least 50 million lines of COBOL code. That’s a lot of COBOL, and you’re not going to replace it over time; there’s no reason to. So the solution to this problem, and this is what we’re going to do, is to figure out a way to teach people COBOL. It’s not a complex language to learn. Any organization that sees a lack of COBOL skills as an impediment and a justification to move to another platform is pursuing a ridiculous solution; that solution is not feasible. If they try to do that, they’re going to fail because there’s too much risk and, most of all, too much expense.
So, we’re going to figure out a way to begin to teach people COBOL again. I do it: I teach a COBOL class at East Carolina. That is the solution to this problem, because the code’s not going anywhere, nor is there a reason for it to go anywhere. It works! It’s a simple language, it’s as fast as it needs to be, and it’s as secure as it needs to be. No one that I’ve talked to, computer scientists all over the world, can give me an application for which any other language is going to work better than COBOL. There may be some that work as well or nearly as well, but you’d have to migrate to them, and there’s no improvement you can make on these applications from a performance standpoint or from a security standpoint. The applications are going to stay where they are, and we’re just going to have to teach people COBOL. That’s the solution, that’s what’s going to happen. How and when, I don’t know, but that’s what’s going to happen.
Swapnil Bhartiya: If you look at the crisis that we were going through, almost everything, every business is moving online to the cloud. All those transactions that people are already doing in person are all moving online, so it has become critical. From your perspective, what kind of future do you see?
Derek Britton: Well, that’s a great question because the world is a very, very different place from when that architecture was designed, however long ago. Companies of today are not using that architecture. So there is some question mark there about COBOL’s future. I agree with Cam. Anyone that has COBOL is not necessarily going to be able to throw it away anytime soon. Frankly, it might be difficult, it might be easy, but that’s not really the question, is it? Is it a good business decision? The answer is it’s a terrible business decision to throw it away.
In addition to that, I would contend that there are a number of modern-day digital use cases where actually the usage of COBOL is going to increase rather than decrease. We see this all the time with our larger organizations who are using it for pretty much the whole of the backend of their core business. So, whether it’s a banking organization or an insurer or a logistics company, what they’re trying to do obviously is find new and exciting business opportunities.
They will be basing those on the core business systems that already run most of the business today, and then trying to use that to adapt, to enhance, to innovate. There are insurers who are selling their insurance quotation system to other, smaller insurers as a service. Now, of course, the insurance quotation system they offer is probably a version that isn’t quite as quick as the one that runs on their mainframe, but they’re making it available as a service to other organizations. Banking organizations are doing much the same thing with a range of banking services, maybe payment systems. These are all services that can be provided to other organizations.
The same is true in the ISV market, where really robust COBOL-based financial services packages and ERP systems have been made available as cloud-based, as-a-service offerings or on other platforms to meet new market needs. The thing about COBOL that few people understand is that not only is it easy to learn, but it’s also easy to move somewhere else. So, if your client is now running Linux and says, “Well, now I want to run these core COBOL business systems there, too,” or maybe they’ve made a move to AIX on a Power system, the same COBOL system can be reused and replicated as necessary, which is a little-known secret about the language.
This goes back to the original design, of course. Back in 1960, there was no such thing as a “standard platform.” There wasn’t a single platform that you could reasonably rely on to give you a decent answer, not very quickly anyway. So, in order to know that COBOL worked, you had to get the same results when the same program was compiled and run on different machines. It needed to be the same result, running at the same speed, and that’s when the portability of the language came to life. That’s what they set out to do; it was built that way by design.
Swapnil Bhartiya: Cam, Derek, thank you so much for taking the time out today to talk about COBOL and how important it is in today’s world. I’m pretty sure that, over the course of a whole day, some of the activities we do online touch COBOL or are powered by COBOL.
This week has been “a week of millions” for the Linux Foundation, with our announcement that over 1 million people have taken our free Introduction to Linux course. As part of the research for our recently published 2020 Linux Kernel History Report, the Kernel Project itself determined that it had surpassed one million code commits. Here is how we established the identity of this lucky Kernel Project contributor.
Methodology:
The historical BitKeeper repo (converted to Git) has 63,428 commits. We then found the merge at which Linus Torvalds’ repo reached at least 936,572 commits, since 1,000,000 − 63,428 = 936,572.
At commit 92c59e126b21fd212195358a0d296e787e444087 the repo had 936,456 commits; counting the 63,428 BitKeeper commits, that is 999,884 in total, just 116 shy of one million.
So on merge 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc Linus’ repo passed the 1M mark (to be precise, 1,000,533 including BitKeeper commits):
commit 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc 92c59e126b21fd212195358a0d296e787e444087 f510ca05271b6f71bd532fe743b39f628110223f (HEAD)
Merge: 92c59e126b21 f510ca05271b
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Aug 3 19:19:34 2020 -0700

    Merge tag 'arm-dt-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
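For anyone who wants to reproduce the tally, here is a minimal sketch in Python, assuming a local clone of Linus’ mainline tree; it uses git rev-list --count to count the commits reachable from that merge and adds the BitKeeper-era history on top. The helper name and repo path are illustrative, not part of the original methodology.

import subprocess

BITKEEPER_COMMITS = 63_428  # commits in the converted historical BitKeeper repo

def commit_count(rev: str, repo: str = ".") -> int:
    """Count the commits reachable from `rev` in the Git repo at `repo`."""
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--count", rev],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

merge = "2f3fbfdaf77f3ac417d0511fac221f76af79f6fc"
total = BITKEEPER_COMMITS + commit_count(merge)
print(f"{total:,} commits including the BitKeeper history")  # expected: 1,000,533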
At this point, we can simply list the 936,572nd commit in the log:
>git log --oneline | tail -936572 | head -1
85b23fbc7d88 x86/cpufeatures: Add enumeration for SERIALIZE instruction
And the committer is…
git log -1 85b23fbc7d88
commit 85b23fbc7d88f8c6e3951721802d7845bc39663d
Author: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date:   Sun Jul 26 21:31:29 2020 -0700

    x86/cpufeatures: Add enumeration for SERIALIZE instruction
Ricardo’s momentous commit to the Kernel was to add enumeration support for the SERIALIZE instruction, supported in Intel’s forthcoming Sapphire Rapids and Alder Lake microarchitectures for their 10-nanometer server and workstation chips. Ricardo is a software engineer who has been working on Linux feature support for Intel’s microprocessors for 12 years as part of the company’s CPU enabling team.
August was a historic month for Linux. The largest open source project on the planet saw its one-millionth code commit. The honor goes to Ricardo Neri, a Linux kernel engineer at Intel. Swapnil Bhartiya, founder and host at TFiR, sat down with Neri on behalf of the Linux Foundation to discuss Neri’s journey and involvement with the Linux kernel community.
A lightly edited transcript of the interview:
Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, on behalf of the Linux Foundation, and today we have with us Ricardo Neri, Linux software engineer at Intel, whose code contribution became the one-millionth contribution to the Linux kernel.
Ricardo Neri: Hi, thank you. Thank you very much.
Swapnil Bhartiya: Ricardo, tell us a little bit about yourself, your journey. When was the first time you came in contact with open source or Linux in general?
Ricardo Neri: That was, I think, in 2008. It was around the time the iPhone came out, and at the time I used to work on Symbian, but because the iPhone came out, Symbian died. So I was transferred to a new team, which was working on audio drivers for Linux and the VS. So, maybe by chance I landed on that team, and that’s how I started 12 years ago.
Swapnil Bhartiya: You started contributing to the kernel as part of your organization, but you had personal interactions with the kernel community. How was that interaction?
Ricardo Neri: It was very daunting because I had heard that it was really hard to convince maintainers to take your code. And also, I don’t know, maybe intimidating because the people in the community were very smart and they had strong opinions on various things. So yeah, maybe I’d say it was intimidating but exciting at the same time.
Swapnil Bhartiya: How have you seen the community itself evolve over time?
Ricardo Neri: Just building on my previous comment, I saw at the time that maintainers cared deeply about the quality of the code, and maybe that drove them to make harsh comments on people’s code. And maybe that was a barrier for new people to start contributing. But I have seen a change in the last few years, like a new code of conduct and agreed-upon rules, so that people who are hesitant or not so sure about the quality of their code can just put it out there and not get as harsh a reply as they might have in the early years when I joined the open source community. I think that is a change that I have observed. Another change that I have observed is that more companies are now embracing open source. In the early days, the industry was still dominated by closed source software, but now I have seen companies building more and more business models around open source software, where the value of the product is not the software, but the things that you do with it.
Swapnil Bhartiya: What is interesting is that the contributions to the kernel are coming from all around the globe. You don’t have to be in a specific place to become part of the project. So, what role do you think Linux has played in democratizing software development, where you don’t have to prove yourself before you get involved? You send a patch. If the patch is good, they will take your patch. If it’s not good, they will not take it. They don’t have to look at your resume or CV to see whether you have done any work before or not. So how much of a role has Linux played in democratizing software development itself?
Ricardo Neri: Yeah, I think it has played a big role because, as you said, you don’t have to have a college degree or a computer science degree to start contributing, because the currency, as you say, is the quality of the code. I myself am not a computer scientist or a software engineer; my background is electrical engineering. So probably I can be a good example of that. I didn’t need to go to college for five years and study computer science to start contributing. Anyone with the interest to learn and to do something can start contributing. I am not the only example. There are other people who have a biology degree and have now become key contributors to Linux.
You can just go to the Linux kernel mailing list, read all the patches, maybe contribute your own reviews. And maybe you start sending your patches. All you need is essentially a workstation with the compiler and the source code. You can find a bug or an improvement, and you can just do it. You don’t need anything more than that.
Swapnil Bhartiya: Yeah. I fully agree with you. Have you attended any of the Linux Plumbers or other conferences and events?
Ricardo Neri: Yes, actually I was just attending the Linux Plumbers Conference a few hours ago. I was in the power management micro-conference. Yes, and in previous years I have also been attending Open Source Summit, which used to be LinuxCon.
Swapnil Bhartiya: When you interact with the kernel community over email, it is a bit daunting and you feel intimidated because you don’t know how they will respond to the patch. But when you go and meet these developers in person, when you sit down for breakfast or for a beer in the evening, you suddenly find that they are as human as we are. So, when you meet them in person, how do the chemistry, the trust, and the relationship change?
Ricardo Neri: Yeah, that is very true. Because, as you said, if you interact with these people only through the mailing list, you can only see words without any context. And as you said, this is prone to misinterpretation on both sides. But equally, as you said, when you meet with them, maybe in a virtual event or in person, you see that they are actually friendly. They do care about the quality of the code, but they are approachable and friendly in my experience. And that is also the experience I have heard from my coworkers who are new to this community. They have similar feedback to mine.
Swapnil Bhartiya: Do you have any interesting anecdotes to share from any of these events? Something like, “Hey, I met that person, or we just sat down. We had been debating for months over a patch. We sat down and suddenly we saw the solution.” Any interesting story that you would like to share?
Ricardo Neri: I noticed, just in this Plumbers Conference this year, that discussing things over the mailing list can take time, because you need to put your comments in written form and then wait for the answer and so forth, and have several iterations of that process. But if you sit down in a room, or in a virtual room, the conversation is more fluid and faster. You can arrive at conclusions or designs or agreements that would otherwise take maybe weeks or a month to reach on the mailing list. So, yeah, I think I have noticed that.
Swapnil Bhartiya: Let’s talk about your contribution. What was the code contribution that historically became the one-millionth contribution?
Ricardo Neri: That is related to the work that I do at Intel, where I am part of the CPU enabling team. Whenever Intel comes up with a new feature in the processor, our team is responsible for taking that new feature and making it consumable by the Linux kernel. In this particular case, it was for a new instruction called SERIALIZE, which essentially serializes the execution of code. It puts a landmark at which all the execution before that instruction gets done before the code after that instruction starts to execute. And that solves problems that we had in the past, because, for instance, you can achieve the same goal using an instruction called CPUID or a return from interrupt, but those instructions have certain side effects and can also have a performance penalty. So this SERIALIZE instruction allows you to divide the execution of code without having those side effects that you would need to fix up in the software. It helps to make the software simpler, and you get a performance bonus out of it as well.
Swapnil Bhartiya: Do you contribute code in your capacity as an Intel engineer, or do you also contribute some code in your free time as well?
Ricardo Neri: Right now, I am only contributing code in my capacity as an Intel engineer.
Swapnil Bhartiya: The reason I ask this question is that in the early days of open source, most of the contributions came from people working in their spare time, but today a majority of contributors are paid by companies to do that work. Working on open source is no longer a part-time hobby. How have you seen this change, where you get paid to work on open source?
Ricardo Neri: That’s very true. As I was mentioning earlier, companies have now found ways to build business models around open source software. A good example is Red Hat, where the software is free, but they build their business around the software and do not regard the software itself as the product; it’s a vehicle to deliver value to their customers. And the same is true for semiconductor companies such as Intel, which are in the business of selling computer chips. But today, you cannot just sell the chip. You also need to provide a full solution to the customer, and that, of course, includes the software. And that is also true for other companies that were able to build business models around open source software.
In my early days, when I was new to Linux, I had many, many colleagues who were into Linux because they believed in it. They believed in the value of open source software. Then they happened to stumble onto a job where they were paid to do the things they believed in. I remember them giving talks at my university about how to build a Linux kernel and how to configure it for your own needs. And they did it for free. During my university days, I remember having installfests in which you could just take your laptop and people would help you install Linux. These were people who had a true belief in open source software and were willing to help you for free.
Swapnil Bhartiya: As we were talking about earlier, you don’t have to prove yourself or be in a specific region to get involved. So, talk about the role open source has played in creating a level playing field and in giving underrepresented minorities not only tools but also a voice.
Ricardo Neri: I think, yeah, it’s probably similar to what I was saying at the beginning: in the traditional model, you have to go to college, spend four or five years there without working, and get good grades. You need to have certain opportunities in life to be able to do that, to have the luxury of attending college and gaining a degree. But in software, for instance, you don’t need that. All you need is willingness: the willingness to learn and to contribute. And for underrepresented minority groups, statistically, they have a lesser chance of attending college and getting a degree.
I have also seen companies realizing that you don’t actually need to be a computer scientist to start writing software. That has opened doors for people with very diverse backgrounds; you don’t have to follow a certain career or school path to land a job in this industry. You can just start wherever you are.
There are many efforts in the community. The GNOME Foundation has scholarships to help recruit people from underrepresented groups to start contributing, and they get mentoring, because that is an important point. The software is free and anyone can contribute to it, but if you have a mentor, someone who can help you navigate an open source software community, that will help you a lot and go a long way toward getting you established in that community. You can start by contributing very simple patches, but over time, with that guidance, you can optimize your time and effort to do the things that will have an impact and maybe someday make you a key contributor to the community.