
Splunk To Acquire SaaS Startup Omnition

Splunk is acquiring Omnition, a stealth-mode SaaS startup focused on distributed tracing, a technique that improves monitoring across microservices applications. “Adding Omnition to our IT and Developer portfolio will help customers gain insights across the entire enterprise application portfolio from on-premises data centers to cloud based applications and infrastructure,” said Tim Tully, Chief Technology Officer, Splunk. (Source: Splunk, ZDNet)

Google Drops Source Code Of Android 10

Google has released the source code of Android 10, which launched yesterday. Those who want to build their OS on top of Android 10 can now start working with the source code. Android takes a different approach to open source than distributions like SUSE Linux Enterprise or Ubuntu: the source code of the latest version is made available only after the commercial version reaches the market. (Source: Android Open Source Project)

openSUSE Is A Community Of Communities: Gerald Pfeifer

Gerald Pfeifer, a seasoned open source developer and CTO of SUSE EMEA, has been appointed the new chair of the openSUSE board. We talked to Pfeifer to better understand the role of the openSUSE board, the relationship between the company and the community, and the status of the openSUSE Foundation.

Swapnil Bhartiya: How would you define openSUSE? A distribution, or a community that creates and manages many projects, including distributions like Leap and Tumbleweed?

Gerald Pfeifer: Neither, nor. (Smiles.) Actually, I quite like how you describe the second option, so given those two choices, I’ll pick that, hands down.

The somewhat cheeky “neither, nor” comes from me seeing openSUSE more as a community of communities, if you will, with their own goals rather than a single, absolutely homogeneous community. And a certain commonality within diversity (and vice versa) is one of the strengths of openSUSE. (Similar to the “open source community,” which I have been arguing for a decade really should read “communities.”)

Swapnil Bhartiya: How independent is the openSUSE community?

Gerald Pfeifer: openSUSE is quite independent when it comes to technical questions and many aspects of how to go about things. Where it comes to elements like infrastructure or budget, there is more direct dependency on SUSE, and increasing transparency and influence in those areas is one of the directions I’d like to see this relationship evolve.

Swapnil Bhartiya: The computing landscape is changing, and focus is shifting to emerging technologies like AI/ML, AR/VR, and so on. Is openSUSE looking at those opportunities to build platforms that empower these use cases and workloads?

Gerald Pfeifer: Yes, in that individual groups and developers—some SUSE employees, some not—are looking into those areas and use openSUSE as a rich base for their work. For example, did you know that Kubic is a certified Kubernetes distribution?

No, in that the program management office for openSUSE (which does not exist to begin with—see above) has not identified these as focus areas and is now assigning volunteers, which is not how things work.

But, yes, wearing my hat as a CTO at SUSE, my colleagues and I are of course looking into our crystal balls, engaging in new technology directions, and working with our distinguished engineers, product management, and engineering teams to pursue those [opportunities]. And what better incubation bed could you imagine than a vibrant environment with the rich infrastructure that openSUSE is and has?

I clearly see openSUSE expanding efforts to ensure it remains relevant as a leading platform for emerging use cases, such as AI/ML or edge.

(Spoiler alert: One area I personally will engage in more is machine learning—back to the roots, if you will, having done my Ph.D. around AI.)

Swapnil Bhartiya: What’s the role of the openSUSE board chair?

Gerald Pfeifer: If you look at the openSUSE guiding principles, you won’t find a lot about the role beyond it being a board member that is appointed by SUSE. So part of serving in that role is finding your own interpretation—your way of living it and contributing.

In addition to acting as a board member like my peers do, there is one aspect I see as my personal focus: bridging—helping to further connect openSUSE and “SUSE corporate.” There are already a lot of such bridges on the technical side: SUSE employees who contribute to openSUSE, and personal and working relationships from openSUSE users and contributors towards the SUSE side, which is great. I hope we can grow those, add strong connections between some of my peers on the SUSE side and the board, and between them and contributors in specific areas, and generally further increase mutual visibility, understanding, and collaboration.

Before accepting this assignment, I had multiple very productive and insightful conversations with Richard [Brown, the outgoing chair], who has managed a very smooth transition, Thomas Di Giacomo [SUSE president of engineering, product, and innovation], and a few others, which helped me understand the current setting, and I’ll keep listening and learning.

What this role is not, to be very clear, is something like a program management office for openSUSE, let alone the CEO of SUSE. And the roles of the board and its chairperson are different from the corresponding roles in a commercial entity such as SUSE, not least since we are largely dealing with volunteers.

Swapnil Bhartiya: As the new chair of the board, do you have some fresh vision for the community?

Gerald Pfeifer: I am not a big fan of me (or anyone else) parachuting in and declaring a new vision. That said, having used, contributed to, and supported openSUSE for many years (often in the background), having been with SUSE for a while, and having had good conversations with many people around openSUSE and SUSE, I have made some observations that are guiding my initial priorities.

In the shorter term I’d like to establish closer connections between the colleagues at SUSE responsible for infrastructure, budget, and the like and the board and other openSUSE members, so we can share, plan and, where possible, do things together.

And I’d like for us to be able to articulate better all the contributions openSUSE provides to SUSE—in terms of Tumbleweed being the evolution on the Linux side, in terms of feedback, in terms of communities, and in terms of new projects and initiatives.

And personally, I plan on using my attendance at the openSUSE.Asia Summit in October to check in with community members from other areas to understand their needs better and, for example, validate my personal experience that high-speed internet is not omnipresent, and that some of the approaches that work like a charm for those with cable, DSL, or 4G in Germany or the US simply do not work for all our users and contributors.

There surely will be many more things coming up as I have the opportunity to engage more widely and deeply. An underlying theme I see is to help bring groups, communities, openSUSE, and SUSE closer together.

Swapnil Bhartiya: What are your thoughts on the proposed foundation? Why do we need a foundation for openSUSE? What is the purpose and goal of the foundation?

Gerald Pfeifer: I’d argue there is not a strict need for a foundation for openSUSE, though I have heard and understood arguments in favor of one. The one I’ve seen the most is to make it easier for other companies to sponsor hardware, budget, or otherwise.

As with most things, there are pros and cons, and I have not sufficiently dived into the matter to be able to do it fair justice. I absolutely do expect this to be one of the primary areas we will be working on as a board, a community, and a business in the coming months.

Google Launches TensorFlow Machine Learning Framework For Graph Data

Google today introduced Neural Structured Learning (NSL), an open source framework that uses the Neural Graph Learning method for training neural networks with graphs and structured data. The new framework also includes tools to help developers structure data, as well as APIs for creating adversarial training examples with little code. (VentureBeat)
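To give a feel for the new framework, here is a minimal sketch of NSL’s adversarial-regularization API, modeled on the project’s public examples; the ‘feature’ input name, the toy model architecture, and the hyperparameter values are illustrative assumptions rather than details from the announcement.

```python
# A minimal sketch of Neural Structured Learning's adversarial regularization,
# based on the project's public examples; names and hyperparameters here are
# illustrative assumptions, not taken from the article.
import tensorflow as tf
import neural_structured_learning as nsl

# A plain Keras model; the input layer name must match the feature key below.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Wrap the model so adversarially perturbed inputs are generated on the fly
# and used as an additional regularization signal during training.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'], adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# The wrapped model is trained on dictionaries containing features and labels.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
adv_model.fit({'feature': x_train / 255.0, 'label': y_train},
              batch_size=32, epochs=1)
```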

Visual Studio Code Now Supports SQL Server 2019 Big Data Clusters PySpark

Microsoft is adding support for SQL Server 2019 Big Data Clusters PySpark development and query submission in Visual Studio Code. “It provides complementary capabilities to Azure Data Studio for data engineers to author and productionize PySpark jobs after data scientist’s data explore and experimentation. The Visual Studio Code Apache Spark and Hive extension enables you to enjoy cross platform and enhanced light weight Python editing capabilities. It covers scenarios around Python authoring, debugging, Jupyter Notebook integration, and notebook like interactive query,” said Jenny Jiang, Principal Program Manager, R&D Data Analytics at Microsoft. Visual Studio Code is an open source project which is also available for Linux.
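To give a sense of the kind of job the extension targets, here is a minimal PySpark sketch of the sort you could author in Visual Studio Code and submit to a cluster; the application name and input path are hypothetical placeholders, not details from Microsoft’s announcement.

```python
# A minimal PySpark job (word count) of the kind the extension lets you author
# and submit; the application name and input path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wordcount-sample").getOrCreate()

# Read a plain-text file from cluster storage (path is a placeholder).
lines = spark.read.text("hdfs:///example/data/sample.txt")

# Split each line into words, then count occurrences with the DataFrame API.
counts = (lines
          .select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
          .where(F.col("word") != "")
          .groupBy("word")
          .count()
          .orderBy(F.col("count").desc()))

counts.show(20)
spark.stop()
```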

Linux Mint 19.3 Slated for Release on Christmas with HiDPI Improvements

With the Linux Mint 19.2 “Tina” operating system hitting the streets last month, the Linux Mint project has kicked off the development of the next release, Linux Mint 19.3 (codename is yet to be revealed), which is expected to arrive this Christmas with more improvements and updated components. (Softpedia)

Bringing an end to hypervisor vs bare metal debate

The debate over whether hypervisors are faster than bare metal resurfaced at the VMworld 2019 conference. VMware has long maintained that hypervisors have many advantages over bare metal, including efficiency and cost.

“Hypervisors hosting multiple virtual machines do offer some advantages over bare metal servers. Hypervisors allow virtual machines to be created instantly, providing more resources as needed for dynamic workloads. It is much harder to provide an additional physical server when it is needed. Hypervisors also allow for more utilization of a physical server, since it is able to run several virtual machines on one physical machine’s resources. Running several virtual machines on one physical machine is more cost and energy-efficient than running multiple underutilized physical machines for the same task.” 

Are these claims true?

Rob Hirschfeld, CEO and co-founder of RackN, agrees with VMware’s claims. He said that the fact of the matter is that the machines people buy today to run in datacenters are hypervisor-optimized. As a result, these machines are more efficient running a hypervisor than running a single operating system.

“The CPUs are designed to run multiple VMs. Taking into account the resource constraints, the operating systems are not designed to run gigantic machines,” said Hirschfeld.

As technologies have evolved over the years, the industry has stopped assuming that one giant server runs just one operating system. “It’s antithetical to the way we’ve bought machines for the last 10 years because of virtualization.”

However, not everyone is buying giant servers, and not everyone needs virtualization. There are many use cases where users need smaller machines with fewer processors and moderate memory. These machines are more efficient with a single operating system than with a hypervisor. Edge environments are a perfect example of where such cost-effective commodity servers fit.

From that perspective, depending on how you set up your infrastructure, bare metal is going to be a better performer. It might also be more cost-effective and simpler to manage. “There might be many other benefits too; it’s not an A or B question,” said Hirschfeld.

That’s not going to stop people from going the hypervisor route with traditional infrastructure optimized for VMs. Hirschfeld’s advice to such users is not to buy a terabyte of RAM and as many CPUs as they can get, with Fibre Channel SANs and the like. Instead, find small, cheap machines and buy a lot of them. Datacenter design is always a balancing act between how you want to manage your infrastructure and what you want that infrastructure to do.

Hirschfeld believes that we are getting out of the era of general-purpose computing, exemplified by the hyper-converged concept in which users buy one type of machine that will solve all of their problems. “That’s an expensive way to solve the problem as it also assumes that you’re going to virtualize everything,” he said.

What’s wrong with virtualizing everything? “It’s very hard to install VMware. It takes a lot of knowledge and fiddling to get it right,” he said.

In conclusion

The world is moving towards containers. A lot of containers. Users can run Linux containers on bare metal infrastructure with great ease and efficiency. Hirschfeld clearly sees bare metal as an ideal route for edge workloads. However, he also sees the need for virtualization. It’s a balancing act. The virtualization vs. bare metal debate sounds like a religious crusade, which it is not. It’s all about using the right tool for the right job.

“People should not think that they should not use VMs or that they should not virtualize things,” Hirschfeld said, “In our experience, there are good reasons for a lot of pieces.  Sometimes even a suboptimal solution, if it feels right to you, is a good solution.”

Kali Linux Ethical Hacking OS Switches to Linux 5.2

Offensive Security announced today the release and general availability of the Kali Linux 2019.3 operating system, a major update to the Kali Linux 2019 series that adds lots of new features, improvements, and updated hacking tools. It also brings better support for ARM architectures, as well as a few helper scripts that make it easier to find information about packages, automatically run Windows binaries with Wine, and discover which resources can be transferred over to a Windows system. (Softpedia)

Rust is the future of systems programming, C is the new Assembly (Packt)

Josh Triplett (Principal Engineer at Intel) talked with Greg Kroah-Hartman (Linux kernel maintainer for the -stable branch) about Rust. According to posts on LWN.net, they are willing to investigate a framework for the Linux kernel to load drivers written in Rust. The condition is that, for now, Rust must not be required to build Linux; they are, however, willing to accept an optional component to handle Rust. While this does not fundamentally change how the Linux kernel is designed, it should allow more developers to write drivers that are more stable. (LWN, PCPER)

Microsoft Releases Open Source AI Conversation Modeling Toolkit, Icecaps

Microsoft Research has unveiled Icecaps, a new open source solution for neural conversational networks. The toolkit leverages multitask learning to improve conversation AI systems, such as giving them multiple personas. “[It’s] a new open-source toolkit that not only allows researchers and developers to imbue their chatbots with different personas, but also to incorporate other natural language processing features that emphasize conversation modeling,” said a Microsoft blog. (Source: Winbuzzer, Microsoft)