Morgan Stanley, Microsoft, and REGnosys Break New Ground in RegTech with FINOS

This post originally appeared on the FINOS blog. You can also listen to the latest FINOS podcast with Minesh Patel, Chief Technology Officer at REGnosys, discussing his upcoming talk at the FINOS Open Source in Finance Forum (OSFF) on July 13th in London about “Breaking new ground in RegTech through open source TechSprint innovation”.

In the first quarter of 2022, a multi-organisation, multi-location team of developers planned, scheduled and delivered an ambitious three-day “RegTech” collaboration challenge.

The event, dubbed a “TechSprint”, looked to demonstrate how financial institutions could comply with the upcoming US CFTC trade reporting requirements using entirely open-source components.

Why It’s Important

Every year, the financial industry spends billions trying to comply with often complex data reporting requirements. For every reporting regime and jurisdiction, firms must typically sift through hundreds of pages of legal text, which they must then manually interpret and code in their IT systems.

As a result, while many financial institutions share the same reporting obligations, they usually implement their logic in slightly different ways due to fragmented technology approaches, adding to risks and costs.

The field is ripe for a shake-up by “RegTech”, i.e. the application of technology to address regulatory challenges. In particular, the ability to build and store the reporting logic in an open-source and technology-agnostic way, and to run it based on open-source components too, could reap huge efficiency benefits for the industry.

Current Landscape

This RegTech space is one that FINOS has been actively investing in. In 2020, FINOS approved the contribution of the Regulation Innovation SIG, a Special Interest Group dedicated to the application of open source to regulatory problems. Morphir, an open-source project contributed by Morgan Stanley, is positioned as a key component of that Reg SIG. Morphir makes it possible to represent, store, share and process business logic in an implementation-agnostic way, including the types of rules and calculations often found in regulations.

The industry is also getting better organised to tackle pressing regulatory challenges more collaboratively. Under the auspices of the industry’s existing trade associations, the Digital Regulatory Reporting (DRR) programme is a mutualized, industry-wide initiative addressing global trade reporting requirements. Those reporting regimes are being updated across the G20, and DRR starts with the US CFTC’s revised swap data reporting rules, which go live this year. DRR involves industry participants working together to deliver an open-source, machine-executable expression of the reporting rules.

These two initiatives, Morphir and DRR, looked like a perfect match. A like-minded team of developers spanning several organisations decided to take up the challenge of integrating them, demonstrating that reporting rules can be developed, executed and validated using entirely open-source components – all in under three days!

Approach

Technical

In DRR, the rule logic is expressed in a Domain-Specific Language called the Rosetta DSL and then translated into executable code through an automated “code generation” process. The reporting rules’ inputs are modelled according to the Common Domain Model (CDM), an initiative initially championed by the International Swaps and Derivatives Association (ISDA), now joined by other trade associations, and involving many industry participants including buy- and sell-side firms.

The Rosetta DSL and its associated code generators, currently being proposed for contribution to FINOS, are open-source projects developed by technology firm REGnosys, which provides the software platform for the DRR and CDM programmes.

The main objective of the TechSprint was to develop a Rosetta-to-Morphir code generator. This would demonstrate that Morphir can be used as a target for storing and executing the body of rules in DRR and that it produces results that are consistent with Rosetta. In addition, the TechSprint looked to provide a formal verification mechanism for the DRR code using Bosque, another open-source project developed by Microsoft that is already integrated with Morphir.
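
To give a feel for what such a generator involves, here is a minimal, purely illustrative sketch in Python. It walks a toy source model standing in for parsed Rosetta types and emits an equivalent record in a toy target IR standing in for Morphir’s. All names and structures here are hypothetical simplifications, not the actual Rosetta or Morphir APIs.

```python
from dataclasses import dataclass

# Hypothetical, highly simplified stand-ins for the two models;
# the real Rosetta and Morphir IRs are far richer than this.

@dataclass
class SourceAttribute:   # a typed field, as parsed from the source DSL
    name: str
    type_name: str
    optional: bool

@dataclass
class SourceType:        # a type declaration in the source DSL
    name: str
    attributes: list

# Assumed mapping of source scalar types onto target-IR type names.
TYPE_MAP = {"string": "String", "int": "Int", "number": "Float", "date": "LocalDate"}

def generate_record(src: SourceType) -> dict:
    """Translate one source type into a target-IR record definition."""
    fields = []
    for attr in src.attributes:
        target_type = TYPE_MAP.get(attr.type_name, attr.type_name)
        if attr.optional:
            # Optionality becomes a wrapper type in the target IR.
            target_type = f"Maybe {target_type}"
        fields.append({"name": attr.name, "type": target_type})
    return {"kind": "Record", "name": src.name, "fields": fields}

# A toy trade type flowing through the generator.
trade = SourceType("Trade", [
    SourceAttribute("tradeDate", "date", optional=False),
    SourceAttribute("notional", "number", optional=True),
])
print(generate_record(trade))
```

The actual generator naturally has far more of the language to cover, but the shape of the task is the same: a faithful, mechanical mapping from one intermediate representation to another.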

Scope

The first trade reporting regime available in DRR is the CFTC Rewrite, which is rolling out in the US this year. The TechSprint focused on handling a couple of CFTC reportable fields to demonstrate the Rosetta-Morphir-Bosque integration.

Logistics

Building on the approach proven over the last two years with the Legend pilot and the Legend hosted instance, the event was run as a “task-force” in which teams sitting across organisations’ boundaries collaborated and shared knowledge on their respective open-source projects, all under FINOS’s sponsorship.

In total, seven representatives from three teams at Morgan Stanley, Microsoft and REGnosys worked together for three days across three separate locations in the UK, Ghana and the US.

Given the time zone differences, the TechSprint was held virtually, starting with the UK/Ghana shift and closing with the NY shift. The teams were mostly self-organised, with regular checkpoints throughout the day.

Substantial Results at Record Speed

In just three days, a Rosetta-to-Morphir code generator was successfully developed. Whilst not complete, it has been shown to handle increasingly complex logic from Rosetta. REGnosys is integrating this deliverable back into Rosetta’s main open-source code-base.

A couple of in-scope reportable fields were successfully tested by running the Morphir-Scala engine on a sample trade population; the results, displayed in a UI, matched the expected results from Rosetta. The Morphir UI showed how the reporting logic stored in Morphir could be represented graphically.

Finally, the Bosque validation layer was successfully applied to the code generated from Rosetta, opening the way to a formal verification method for the rules developed in DRR.

Take-Aways and Next Steps

One of the most interesting take-aways from this TechSprint event was its task-force format, which allowed the teams to perform at their best. This format could serve as a template for future “open innovation” initiatives engaging the FINOS community.

The key ingredients of success were:

A specific and tangible deliverable
Collaboration, not competition, on that shared objective
Diversity of participants, all goal-oriented
Clear responsibilities of the different team members
Careful preparation and planning
A “safe space” to contribute in open-source

As a next step, the TechSprint team will demonstrate the results of their work at the upcoming Open Source in Finance Forum in London (July 13th). Those results will also be captured in a video that will be made publicly available.

The Rosetta-to-Morphir code generator delivered during the TechSprint is also included in a formal open-source contribution to FINOS. This will create a first bridge between the on-going DRR industry programme and the wider FINOS community, allowing it to be connected to similar initiatives taking place under the Reg SIG.

Given the interest and community engagement in that group, further open innovation events involving multiple firms could be run in a similar format.

The potential benefits of open collaboration in the regulatory space are massive. This TechSprint demonstrates how new ground can be broken when barriers tumble down.

Authors:

Leo Labeis, Founder and CEO at REGnosys
Stephen Goldbaum, Executive Director at Morgan Stanley
Mark Marron, Principal Research Software Development Engineer at Microsoft

Configuring Ansible’s container image registry: What you need to know

Consider your options for configuring and maintaining your container image registry in Ansible Automation Platform 2.

Read More at Enable Sysadmin

The Impressive Scope of the Linux Foundation in the 21st Century Digital Economy

This post was originally published on June 30, 2022 on Irving Wladawsky-Berger’s blog

Last week, the Linux Foundation held its North America Open Source Summit in Austin. The week-long summit included a large number of breakout sessions as well as several keynotes. Open Source Summit Europe will take place in Dublin in September and Open Source Summit Japan in Yokohama in December.

I’ve been closely involved with open, collaborative innovation and open source communities since the 1990s. In particular, I was asked to lead a new Linux initiative that IBM launched in January of 2000 to embrace Linux across all the company’s products and services.

At the time, Linux had already been embraced by the research, Internet, and supercomputing communities, but many in the commercial marketplace were perplexed by IBM’s decision. Over the next few years, we spent quite a bit of effort explaining to the business community why we were supporting Linux, which included a number of Linux commercials like this one with Muhammad Ali that ran in the 2006 Super Bowl. IBM also had to fight off a multi-billion dollar lawsuit for alleged intellectual property violations in its contributions to the development of Linux. Nevertheless, by the late 2000s, Linux had crossed the chasm to mainstream adoption, having been embraced by a large number of companies around the world.

In 2000, IBM, along with HP, Intel, and several other companies formed a consortium to support the continued development of Linux, and founded a new non-profit organization, the Open Source Development Labs (OSDL). In 2007, OSDL merged with the Free Standards Group (FSG) and became the Linux Foundation (LF). In 2011, the LF marked the 20th anniversary of Linux at its annual LinuxCon North America conference. I had the privilege of giving one of the keynotes at the conference in Vancouver, where I recounted my personal involvement with Linux and open source.

Over the next decade, the LF went through a major expansion. In 2017, its annual conferences were rebranded Open Source Summits to be more representative of LF’s more general open source mission beyond Linux. Then in April of 2021, the LF announced the formation of Linux Foundation Research, a new organization to better understand the opportunities to collaborate on the many open source activities that the LF was by then involved in. Hilary Carter joined the LF as VP of Research and leader of the new initiative.

A few months later, Carter created an Advisory Board to provide insights into emerging technology trends that could have a major impact on the growing number of LF open source projects, as well as to explore the role of open source to help address some of the world’s most pressing challenges. I was invited to become a member of the LF Research Advisory Board, an invitation I quickly accepted.

Having retired from IBM in 2007, I had become involved in a number of new areas, such as cloud, blockchain, AI, and the emerging digital economy. As a result, I had not been much involved with the Linux Foundation in the 2010s, and continued to view the LF as primarily overseeing the development of Linux. But, once I joined the Research Advisory Board and learned about the evolution of the LF over the previous decade, I was frankly surprised at the impressive scope of its activities. Let me summarize what I learned.

According to its website, the LF now has over 1,260 company members, including 14 Platinum and 19 Gold, and supports hundreds of open source projects. Some of the projects are focused on technology horizontals, others on industry verticals, and many are subprojects within a large open source project.

Technology horizontal areas include AI, ML, data & analytics; additive manufacturing; augmented & virtual reality; blockchain; cloud containers & virtualization; IoT & embedded; Linux kernel; networking & edge; open hardware; safety critical systems; security; storage; system administration; and Web & application development. Specific infrastructure projects include OpenSSF, the Open Source Software Security Foundation; LF AI & Data, whose mission is to build and support open source innovations in the AI & data domains; and the Hyperledger Foundation, which hosts a number of enterprise-grade blockchain subprojects, such as Hyperledger Cactus, to help securely integrate different blockchains; Hyperledger Besu, an Ethereum client for permissioned blockchains; and Hyperledger Caliper, a blockchain benchmark tool to measure performance.

Industry vertical areas include automotive & aviation; education & training; energy & resources; government & regulatory agencies; healthcare; manufacturing & logistics; media & entertainment; packaged goods; retail; technology; and telecommunication. Industry-focused projects include LF Energy, aimed at the digitization of the energy sector to help reach decarbonization targets; Automotive Grade Linux, to accelerate the development and adoption of a fully open software stack for the connected car; CHIPS Alliance, to accelerate open source hardware development; Civil Infrastructure Platform, to enable the development and use of software building blocks for civil infrastructure; LF Public Health, to improve global health equity and innovation; and the Academy Software Foundation, which is focused on the creation of an open source ecosystem for the animation and visual effects industry and hosts a number of related subprojects such as OpenColorIO, a color management framework; OpenCue, a render management system; and OpenEXR, the professional-grade image storage format of the motion picture industry.

The LF estimates that its sponsored projects have developed over one billion lines of open source code which support a significant percentage of the world’s mission critical infrastructures. These projects have created over $54 billion in economic value. A recent study by the European Commission estimated that in 2018, the economic impact of open source across all its member states was between €65 and €95 billion. To better understand the global economic impact of open source, LF Research is sponsoring a study led by Henry Chesbrough, UC Berkeley professor and fellow member of the Advisory Board.

Open source advances are totally dependent on the contributions of highly skilled professionals. The LF estimates that over 750 thousand developers from around 18 thousand contributing companies have been involved in its various projects around the world. To help train open source developers, the LF offers over 130 different courses in a variety of areas, including systems administration, cloud & containers, blockchain, and IoT & embedded development, as well as 25 certification programs.

In addition, the LF, in partnership with edX, the open online learning organization created by Harvard and MIT, has been conducting an annual web survey of open source professionals and hiring managers to identify the latest trends in open source careers, the skills that are most in demand, what motivates open source professionals, how employers can attract and retain top talent, as well as diversity issues in the industry.

The 10th Annual Open Source Jobs Report was just published in June of 2022. The report found that there remains a shortage of qualified talent – 93% of hiring managers have difficulty finding experienced open source professionals; compensation has become a differentiating factor – 58% of managers have given salary increases to retain open source talent; certifications have hit a new level of importance – 69% of hiring managers are more likely to hire certified open source professionals; 63% of open source professionals believe open source runs most modern technology; and cloud skills are the most in demand, followed by Linux, DevOps, and security.

Finally, in her Austin keynote, Hilary Carter presented 10 quick facts about open source from LF Research:

53% of survey respondents contribute to open source because “it’s fun”;
86% of hiring managers say hiring open source talent is a priority for 2022;
2/3 of developers need more training to do their jobs;
The most widely used open source software is developed by only a handful of contributors: 136 developers were responsible for more than 80% of the lines of code added to the top 50 packages;
45% of respondents reported that their employers heavily restrict or prohibit contributions to open source projects whether private or work related;
47% of organizations surveyed are using software bills of materials (SBOMs) today;
“You feel a sense of community and responsibility to shepherd this work and make it the best it can be”;
1 in 5 professionals have been discriminated against or feel unwelcome;
People who don’t feel welcome in open source are from disproportionately underrepresented groups;
“When we have multiple people with varied backgrounds and opinions, we get better software”.

“Open source projects are here to stay, and they play a critical role in the ability of most organizations to deliver products and services to customers,” says the LF on its website. “As an organization, if you want to influence the open source projects that drive the success of your business, you need to participate. Having a solid contribution strategy and implementation plan for your organization puts you on the path towards being a good corporate open source citizen.”

How to set user password expirations on Linux

Use the chage command to force users to change their passwords to comply with your password-aging policies.

Read More at Enable Sysadmin

Deprecated Linux commands, Podman Compose vs. Docker Compose, and more sysadmin tips

Check out Enable Sysadmin’s top 10 articles from June 2022.

Read More at Enable Sysadmin

How to use YAML nesting, lists, and comments in Ansible playbooks

Although YAML is considered easy to understand, its syntax can be quite confusing. Use this guide to the basics.

Read More at Enable Sysadmin

How Microservices Work Together

The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses, including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but makes all microservices more complex, and harder to change or scale. Besides, if one of the microservices breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of the microservices of a platform, which makes them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives the answers and then decides what to do next: whether to send further requests to other microservices, or to pass the results of that work to external users or client applications.
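
As a concrete illustration, here is a minimal Python sketch of the request/response pattern, with local objects standing in for remote microservices. All service names and business rules are hypothetical; in production these would be HTTP or gRPC calls, not local method calls.

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    amount: float

# Hypothetical stand-ins for remote microservices.
class PaymentService:
    def charge(self, order):
        return {"ok": order.amount < 1000}   # toy business rule

class ShippingService:
    def schedule(self, order):
        return {"eta": "3 days"}

def orchestrate(order, payments, shipping):
    """The conductor: issues requests, inspects answers, decides next steps."""
    result = payments.charge(order)          # synchronous request/response
    if not result["ok"]:
        return {"status": "rejected"}        # the conductor decides what happens next
    shipment = shipping.schedule(order)      # runs only if payment succeeded
    return {"status": "accepted", "eta": shipment["eta"]}

print(orchestrate(Order("alice", 250.0), PaymentService(), ShippingService()))
```

Note how the conductor owns the whole workflow: the services never talk to each other, and every decision point lives in one place.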

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among several microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs for subscribing and unsubscribing to events and for publishing them. Event buses can be implemented in several ways, to exchange messages using standards such as XML, SOAP or Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it, and know whether and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform are the subscription commands for the event buses on which it should generate events, or wait for them.
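
A toy in-process version of this publish/subscribe mechanism, sketched in Python, shows the essential decoupling. A real event bus is a networked component such as a message broker; everything here is a simplified assumption.

```python
from collections import defaultdict

class EventBus:
    """A toy in-process event bus; real ones are networked brokers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscriber sees the event and reacts on its own, in no
        # particular order; the publisher knows nothing about them.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Two independent microservices reacting to the same event.
bus.subscribe("order.placed", lambda e: print(f"billing: invoice order {e['id']}"))
bus.subscribe("order.placed", lambda e: print(f"shipping: schedule order {e['id']}"))

# The publishing service emits the event and moves on; it neither
# knows nor cares who is listening.
bus.publish("order.placed", {"id": 42})
```

Adding a new downstream service is just one more subscribe call; nothing upstream has to change.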

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration, whose fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously, the other concentrates it in one central conductor that keeps everybody else in line.

Which is better depends upon the characteristics, needs and patterns of real-world use of each platform, with maybe just two rules that apply in all cases. The first is that actual tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match for the fundamental advantages of microservices, that is, independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control is often, if not a synonym for, at least a shortcut to a single point of failure. A much more frequent disadvantage of orchestration is that, since the microservices and the conductor may sit on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices or change to their workflows may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices left waiting for orders because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication are well defined and not really flexible, it is relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice has no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least without precautions, choreography creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages into the Internet, counting on everything to be fine but without any way to know whether all their recipients got them and were able to react in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their very nature highly synchronous and predictable; others aren’t. This means that many real-world microservice platforms could, and probably should, mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. Temporary peak loads, which may be best handled with choreography, may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, may happen only in others (e.g., purchases of single products by end customers versus bulk orders of the same products to restock the warehouse).

For system architects, perhaps the worst outcome is to design an architecture that is either orchestration or choreography without being really conscious of which one it is (maybe because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. This leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.

Manage your RPG players with pc

Keep track of your role-playing games’ character data with the pc (player character) command.

Read More at Enable Sysadmin

Ag-Rec: Improving Agriculture Around the World with Open Source Innovation

One of the first projects I noticed after starting at the Linux Foundation was AgStack. It caught my attention because I have a natural inclination towards farming and ranching, although, in truth, I just want a reason to own and use a John Deere tractor (or more than one). The reality is that the closest I will ever get to being a farmer is my backyard garden with, perhaps, some chickens one day. But I did work in agriculture policy for a number of years, including some time at USDA’s Natural Resources Conservation Service. So, AgStack piqued my interest. Most people don’t really understand where their food comes from, the challenges that exist across the globe, and the innovation that is still possible in agriculture. It is encouraging to see the passion and innovation coming from the folks at AgStack.

Speaking of that, I want to dig into (pun intended) one of AgStack’s projects, Ag-Rec.

Backing up a bit, in the United States, the U.S. Department of Agriculture operates a vast network of cooperative extension offices to help farmers, ranchers, and even gardeners improve their practices. They have proven themselves to be invaluable resources and are credited with improving agriculture practices both here in the U.S. and around the globe through research, information sharing, and partnerships. Even if you aren’t a farmer, they can help you with your garden, lawn, and more. Give them a call – almost every county has an office.

The reality with extension education is that it is still heavily reliant on individuals going to offices and reading printed materials or PDFs. It could use an upgrade to make the data more easily digestible and quicker to update, to expand the information available, and to facilitate information sharing around the world. Enter Ag-Rec.

I listened to Brandy Byrd and Gaurav Ramakrishna, both with IBM, present Ag-Rec at the Open Source Summit 2022 in Austin, Texas.

Brandy is a native of rural South Carolina, raised in an area where everyone farmed. She recalled some words of wisdom her granddaddy always said, “Never sell the goose that laid the golden egg.” He was referring to the value of the farmland – it was their livelihood. She grew up seeing firsthand the value of farms, and she was already familiar with the value of the information from the extension service and of information sharing among farmers and ranchers beyond mornings at the local coffee shop. But she also sees a better way. 

The vision of Ag-Rec is a framework through which rural farmers, from small South Carolina towns to anywhere in the world, have access to the same cooperative extension resources: information, advice, and community. They don’t have to go to an office or have a physical manual. They can access a wealth of information that can be shared anywhere, anytime.

On top of that, because it is open source, anyone can use the framework to build applications and make the data available in new and useful ways. Ag-Rec is providing the base for even more innovation. Imagine the innovation we don’t know is possible.

The Roadmap

Brandy and Gaurav shared how Ag-Rec is being built and how developers, UI experts, agriculture practices experts, end users, and others can contribute. When the recording of the presentation is available, we will share it here. You can also go over to Ag-Rec’s GitHub for more information and to help.

Here is the current roadmap: 

Immediate

Design and development of UI with Mojoe.net
Plant data validation and enhancements
Gather requirements to provision additional Extension Service recommendation data
Integrate User Registry for authentication and authorization

Mid-term

Testing and feedback from stakeholders
Deploy the solution on AgStack Cloud
Add documentation for external contribution and self-deployment

Long-term

Invite other Extension Services and communities
Iterate and continuously improve

I, for one, am excited about the potential of this program to help improve crop production, agricultural-land conservation, pest management, and more around the world. Farms feed the world, fuel economies, and so much more. With even better practices, their positive impact can be even greater while helping conserve the earth’s resources.

The Partners

In May 2021, the Linux Foundation launched the AgStack Foundation to “build and sustain the global data infrastructure for food and agriculture to help scale digital transformation and address climate change, rural engagement, and food and water security.” Not long after, IBM, Call for Code and Clemson University Cooperative Extension “sought to digitize data that’s been collected over the years, making it accessible to anyone on their phone or computer to search data and find answers they need.” AgStack provided a “way to collaborate with and gain insights from a community of people working on similar ideas, and this helped the team make progress quickly.” And Ag-Rec was born.

A special thank you to the core team cultivating (pun intended) this innovation: 

Brandy Byrd, IBM

Gaurav Ramakrishna, IBM

Sumer Johal, AgStack

Kendall Kirk, Clemson University

Mallory Douglass, Clemson University

Mojoe.net

Resources

Call for Code and AgStack open-source Ag Recommendations

Ag-Rec GitHub

AgStack Foundation

AgStack Slack

Presentation at Open Source Summit North America 2022 (YouTube link available soon)

Linux superuser access, explained

Here’s how to configure Linux superuser access so that it’s available to those who need it—yet well out of the way of people who don’t need it.

Read More at Enable Sysadmin