Exciting news! The Tazama project is officially a Digital Public Good, having met the criteria for acceptance into the Digital Public Goods Alliance!
Tazama is a groundbreaking open source software solution for real-time fraud prevention and the first-ever open source platform dedicated to enhancing fraud management in digital payments.
Historically, the financial industry has grappled with proprietary and often costly solutions that have limited access and adaptability for many, especially in developing economies. This challenge is underscored by the Global Anti-Scam Alliance, which reported that nearly $1 trillion was lost to online fraud in 2022.
Tazama represents a significant shift in how financial monitoring and compliance are approached globally, challenging the status quo with a powerful, scalable, and cost-effective alternative that democratizes access to the advanced financial monitoring tools needed to combat fraud.
Tazama addresses key concerns of government, civil society, end users, industry bodies, and the financial services industry, including fraud detection, AML compliance, and the cost-effective monitoring of digital financial transactions. The solution's architecture emphasizes data sovereignty, privacy, and transparency, aligning with the priorities of governments worldwide. Hosted by LF Charities, which will support the operation and function of the project, Tazama showcases the scalability and robustness of open source solutions, particularly in critical infrastructure like national payment switches.
We are thrilled to be counted alongside many other incredible open source projects working to achieve the United Nations Sustainable Development Goals.
For more information, visit the Digital Public Goods Alliance Registry.
Xen Project 4.19 was officially released on July 31, 2024, and it brings significant updates. With enhancements in performance, security, and versatility across architectures such as Arm, PPC, RISC-V, and x86, this release is an important milestone for the Xen community.
At Vates, we are heavily invested in the advancement of Xen and the RISC-V architecture. RISC-V, a rapidly emerging open instruction set architecture, is gaining traction due to its flexibility, scalability, and openness, which align perfectly with our ethos of fostering open development ecosystems. Although the upstream version of Xen for RISC-V is not yet fully operational for widespread practical use, we've made significant strides, and we're excited to share the latest updates on this project.
AWS transfers OpenSearch to the Linux Foundation to support a vendor-neutral community for search, analytics, observability, and vector database software.
Read more at linuxfoundation.org
Researchers from TU Darmstadt, TU Dresden, Hewlett Packard Enterprise (HPE), and Intel have developed advanced applications that combine HPC simulations with AI techniques using the open-source computational fluid dynamics solver OpenFOAM and the HPE-led SmartSim AI/ML library. These applications show promise for improving the accuracy and capabilities of traditional scientific and engineering modelling with data-driven techniques. These types of techniques can lead to accelerated scientific discovery and engineering prototyping by allowing researchers to run larger or more complex simulations on modern computational resources.
Alexandra Moxin, CSO, Adaptech Group
Have you ever wondered why most software projects start off well and then, several months later, turn slow and difficult? You've likely fallen into the traps of design-as-you-go, two-week sprints that never accomplish much, and more ceremony meetings than time to complete your work. Maybe you've divided your product into microservices but are running into never-ending orchestration issues and costs. Regardless, your team is now weeks past a critical product release that will take more weeks to finish.
“We’ve all been there. But it doesn’t have to be that way.”
What if I told you that software projects, products, upgrades, migrations, and the like could be easy to implement and seamless to run, and that your teams could accomplish twice as much in the time they have now? Oh, and with fewer ceremony meetings cluttering calendars, your teams would be happier as well.
Enter Event Modeling.
Event Modeling is a seven-step methodology for describing information flow, used by multifunctional teams to easily understand and solve complex problems. While Event Modeling techniques can be used in any domain, they bring ease, simplicity, and the rigor of systems thinking to software engineering.
Imagine an interactive canvas where your C-suite is engaged in conversation with CX, UX, and DX; everyone is working from the same page, requirements are understood, specifications are tracked and connected, and all parties have a deep understanding of what each team will be doing. This is what we consistently see with teams that use Event Modeling to plan and guide their endeavors.
For a deeper understanding, see Event Modeling: What is it? by original author Adam Dymitruk, CEO and Founder of Adaptech Group (1). Event Modeling is done in seven steps, and all information is represented from the user’s perspective. Let’s go through these steps and show how to build a software blueprint.
Step 1: Brainstorming
This is done collaboratively with at least one representative from each department. One person explains the goals of the project. The participants then envision how the system would look and behave, starting from the screens or the pieces of information they can conceive of as having happened. Here, we gently introduce the concept that only state-changing information should be specified.
Step 2: Formulate the Plot
The participants now create a plausible walk-through of these screens, or stories made of this information (aka events). The information is arranged in a timeline, which everyone reviews to confirm that it makes a cohesive story.
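As a rough sketch (the event names and fields below are hypothetical, not drawn from the article), the state-changing facts from such a walk-through can be captured as simple event records and laid out as a timeline:

```typescript
// A minimal sketch of events as immutable, state-changing facts.
// The names (RoomBooked, PaymentReceived) are illustrative only.
interface RoomBooked {
  type: "RoomBooked";
  bookingId: string;
  roomType: string;
  checkIn: string;  // ISO date
  checkOut: string; // ISO date
}

interface PaymentReceived {
  type: "PaymentReceived";
  bookingId: string;
  amount: number;
}

type DomainEvent = RoomBooked | PaymentReceived;

// The plot: events arranged in the order they plausibly happened,
// reviewed by the team as a cohesive story.
const timeline: DomainEvent[] = [
  { type: "RoomBooked", bookingId: "b-1", roomType: "suite", checkIn: "2025-07-10", checkOut: "2025-07-14" },
  { type: "PaymentReceived", bookingId: "b-1", amount: 480 },
];
```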
Step 3: Create the StoryBoard
If the mockups or screen designs haven't been added yet, they are now added to the top of the diagram. The team then reviews the screens so that the source and destination of each field the user sees are recorded on the event model.
Step 4: Identify What the User Can Do
Next, we show how we enable the user to change the state of the system. This is where we identify inputs, aka commands, which are shown as blue boxes. A command captures the information a user enters on a screen, applies validation and business rules, and shows how that information will be stored in the system. An example in a vacation planning system would be a user selecting a range of dates and a room type, and then pressing "Book Now".
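A minimal sketch of what that "Book Now" command might look like in code, assuming TypeScript and field names and validation rules of our own choosing (none of which are prescribed by Event Modeling itself):

```typescript
// Hypothetical "Book Now" command (a blue box on the event model).
interface BookRoom {
  type: "BookRoom";
  roomType: string;
  checkIn: string;  // ISO date
  checkOut: string; // ISO date
}

// The state change the command produces, recorded as an event.
interface RoomBooked {
  type: "RoomBooked";
  bookingId: string;
  roomType: string;
  checkIn: string;
  checkOut: string;
}

// The command handler applies validation and business rules, then records
// how the entered information will be stored in the system.
function handleBookRoom(cmd: BookRoom): RoomBooked {
  if (new Date(cmd.checkIn).getTime() >= new Date(cmd.checkOut).getTime()) {
    throw new Error("check-out must be after check-in");
  }
  return {
    type: "RoomBooked",
    bookingId: `b-${Date.now()}`,
    roomType: cmd.roomType,
    checkIn: cmd.checkIn,
    checkOut: cmd.checkOut,
  };
}
```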
Step 5: Identify What the User Sees
Returning to our goals for the event model, we now link and identify what information has accumulated in the system and reflect it as UI views (aka read-models) or as tasks for automation to fulfill. These outputs are colored green and may be things such as a calendar view or room availability in a vacation planner, or a shopping cart in an ecommerce system.
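As an illustrative sketch only (the projection below ignores date ranges and uses made-up names), a room-availability read-model can be expressed as a simple function over the events already in the system:

```typescript
// Hypothetical event feeding the read-model.
interface RoomBooked {
  type: "RoomBooked";
  roomType: string;
  checkIn: string;
  checkOut: string;
}

// A "green box": roomType -> rooms still free (date ranges ignored for brevity).
type AvailabilityView = Record<string, number>;

// Projects accumulated events into the view the user sees.
function projectAvailability(
  totalRooms: Record<string, number>,
  events: RoomBooked[],
): AvailabilityView {
  const view: AvailabilityView = { ...totalRooms };
  for (const e of events) {
    view[e.roomType] = (view[e.roomType] ?? 0) - 1;
  }
  return view;
}

// e.g. projectAvailability({ suite: 5 }, bookings) returns { suite: 4 }
// after a single RoomBooked event for a suite.
```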
Step 6: Apply Conway’s Law
Now that we know how information gets into and out of our system, we then organize the events into swimlanes. We do this so that the system can exist as a set of autonomous parts that separate teams can own. This allows specialization to happen to a level that we control and allows for fixed pricing, which we’ll introduce in a future post. See Conway’s Law (2) by Mel Conway.
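One way to picture this, purely as a sketch with hypothetical names, is to list which commands, events, and views each swimlane owns and which team is responsible for it:

```typescript
// Each swimlane is an autonomous slice of the event model that one team owns.
// Teams, commands, events, and views here are illustrative only.
const swimlanes = {
  booking: {
    team: "Reservations",
    commands: ["BookRoom", "CancelBooking"],
    events: ["RoomBooked", "BookingCancelled"],
    views: ["MyBookings"],
  },
  availability: {
    team: "Inventory",
    commands: ["AddRoom"],
    events: ["RoomAdded"],
    views: ["AvailabilityCalendar"],
  },
};
```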
Step 7: Elaborate Scenarios for Testing
Each workflow step is tied to information coming into the system or going out of it. Given-When-Then or Given-Then specification-by-example scenarios are constructed and reviewed collaboratively by the participants. This enables user story writing, traditionally done in isolation by a dedicated product owner in a text format, to be done collaboratively, visually, and in very little time. What is critical is that each specification, while allowing for multiple variations of possible data, is tied to exactly one command or view.
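A minimal sketch of such a scenario, with hypothetical types and no particular test framework assumed, shows how a Given-When-Then specification ties to exactly one command:

```typescript
// Illustrative types for the vacation-booking example.
interface RoomBooked { type: "RoomBooked"; roomType: string; checkIn: string; checkOut: string }
interface BookRoom   { type: "BookRoom";   roomType: string; checkIn: string; checkOut: string }

// A Given-When-Then scenario: events already in the system, one command
// under test, and either the expected new events or a rejection message.
interface Scenario {
  given: RoomBooked[];
  when: BookRoom;
  then: RoomBooked[] | string;
}

// Specification by example: booking an already-taken room is rejected.
const doubleBookingRejected: Scenario = {
  given: [{ type: "RoomBooked", roomType: "suite", checkIn: "2025-07-10", checkOut: "2025-07-14" }],
  when:  { type: "BookRoom",    roomType: "suite", checkIn: "2025-07-10", checkOut: "2025-07-14" },
  then:  "suite is not available for those dates",
};
```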
This article is an introduction to Event Modeling. We've been using Event Modeling in its earliest forms for more than a decade. Adam has been developing an open source project for creating and running Open Spaces, with its own Event Model, and has been streaming this work on Twitch, YouTube (3), LinkedIn, and X. We invite you to drop in on these streams to take part, learn, and contribute, or to join our Discord channel (4) and connect with the Event Modeling community. If there is interest, we're open to sharing more detailed posts on Linux.com in a future series.
1. eventmodeling.org/posts/what-is-event-modeling/#seven-steps
2. melconway.com/Home/Conways_Law.html
3. eventmodeling.org/resources/#live-streams
4. eventmodeling.org/resources/#discord
Alexandra is Chief Strategy Officer for Adaptech Group and an Ambassador for the ALS Network and Google (Women TechMakers). Her background is in business intelligence, product development, and digital transformation. She attended Stanford Open University and graduated from the University of British Columbia.