How will Linux be leveraged in next-gen supercomputers?

Author: Jay Lyman

Linux sits at the top of the Top 500 list of the world’s fastest supercomputers, but does it have what it takes to stay there?

IBM and the United States stole the top slot last November when the BlueGene/L system, running Linux, topped 70 teraflops (trillions of calculations per second) on the Linpack benchmark and displaced Japan’s Earth Simulator as the king of ultra-high-performance computing (HPC). The Earth Simulator runs a customized flavor of Unix and is capable of 35.86 teraflops. Since taking the lead, IBM has cemented its position by nearly doubling the performance of BlueGene/L, which kept the top spot on the June 2005 Top 500 list with a Linpack score of 136.8 teraflops.
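For context, the Linpack benchmark behind these rankings times the solution of a dense system of linear equations. As a rough sketch (this is the standard published operation count for the benchmark, not a formula from the article), the reported rating works out to:

```latex
% Approximate flop count for solving a dense n-by-n system Ax = b,
% divided by the measured solve time, gives the Linpack rating.
\mathrm{flops} \;\approx\; \frac{\tfrac{2}{3}\,n^{3} + 2\,n^{2}}{t_{\mathrm{solve}}}
```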

Big Blue rode Linux into the second spot on the most recent list with its smaller version of BlueGene, while SGI pushed the Earth Simulator down to fourth with its third-place system built from Linux clusters.

As Japan works toward a system capable of 10 petaflops by 2010 (a petaflop is 1,000 times the computing speed of a teraflop), it may be looking to Linux clusters — which now make up more than half of the Top 500.
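To put that target in perspective, here is a quick back-of-the-envelope comparison against the figures above (the arithmetic is ours, not the article’s):

```latex
% Japan's 10-petaflop goal expressed in teraflops, measured against
% BlueGene/L's June 2005 Linpack result of 136.8 teraflops.
10~\mathrm{PF} = 10{,}000~\mathrm{TF},
\qquad \frac{10{,}000~\mathrm{TF}}{136.8~\mathrm{TF}} \approx 73\times
```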

China is also reported to be working on a petaflop system of its own, and all indications are that its effort to join the supercomputing race will rely largely on Linux.

“Definitely, the Chinese effort and most of them are centered on clusters, which are usually blades, or servers running Linux,” said Top500 list co-founder and co-editor Erich Strohmaier.

As for Japan, Strohmaier speculated that the former supercomputing champion may continue with its Earth Simulator approach, sticking to vector processors and its own version of Unix, but the nation might also switch to clustering with Linux. Strohmaier said funding and other aspects of Japan’s next-generation supercomputer effort remain unclear. Nevertheless, he said the blueprint laid down in BlueGene is something the supercomputing world is likely to continue to see.

“People want an operating system that doesn’t allow intrusion, or noise,” he said. “It’s a lot of processes. We’re seeing [use of Linux] continuing,” Strohmaier added. “It’s really one of the advantages of BlueGene.”

Strohmaier added that while supercomputer efforts can glean more performance from the parallel computing and multi-core chips that will help fuel the HPC fight, they will also need the operating system software and middleware to go along with them.

“We do expect more [Linux],” he said. “People like to take advantage of the Linux community.”

Strohmaier indicated that multi-core processors will be a bigger driver of performance than operating system software in the next round of faster supercomputers, but also said Linux must adapt to continue to be successful.

“It’s a matter of four or eight cores instead of megahertz,” he said. “Which means that Linux has to put more emphasis on multi-threaded performance and parallel performance. Linux has been single-threaded, traditionally. I think that, in general, has to change, which will help the community as well.”
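To make the shift Strohmaier describes concrete, here is a minimal sketch of fan-out/fan-in threading in C with POSIX threads. It is purely our own illustration of the theme; the array, slice structure, and thread count are hypothetical and not drawn from BlueGene or any system named in the article:

```c
/* Minimal sketch of fan-out/fan-in threading with POSIX threads.
 * Illustrative only: the array, slice struct, and thread count are
 * hypothetical, not taken from any system discussed in the article. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define NTHREADS 4          /* "four or eight cores instead of megahertz" */
#define N 1000000

static double data[N];

struct slice {
    size_t lo, hi;          /* half-open range [lo, hi) owned by one thread */
    double partial;         /* that thread's partial sum */
};

/* Worker: sum one slice of the array with no shared mutable state. */
static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    double acc = 0.0;
    for (size_t i = s->lo; i < s->hi; i++)
        acc += data[i];
    s->partial = acc;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice work[NTHREADS];
    size_t chunk = N / NTHREADS;

    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;      /* known answer: the sum should be N */

    /* Fan out: one independent slice per thread. */
    for (int t = 0; t < NTHREADS; t++) {
        work[t].lo = (size_t)t * chunk;
        work[t].hi = (t == NTHREADS - 1) ? N : (size_t)(t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_slice, &work[t]);
    }

    /* Fan in: join the threads and combine their partial sums. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += work[t].partial;
    }

    printf("sum = %.0f\n", total);   /* prints 1000000 */
    return 0;
}
```

Scaling from four cores to eight is a one-line change to NTHREADS; the loop inside each worker gets no faster, which is exactly the trade away from megahertz that Strohmaier describes.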

IDC research director Addison Snell said in an interview that Linux is poised to remain popular with HPC vendors and developers, but the next-generation supercomputing effort of the US government may not rely on clusters and the open source operating system as it pushes to the petaflop.

Snell, who noted that Japan’s effort will likely continue the older vector-processor supercomputing technology of the Earth Simulator, indicated that the future American supercomputing strategy from the Defense Advanced Research Projects Agency (DARPA) has pared its funding plans of approximately $50 million per system down to three contenders: Cray’s Cascade, IBM’s PERCS (Productive, Easy-to-use, Reliable Computing System), and Sun Microsystems’ Hero program.

Snell said that by the end of next year the list should be pared further, to one or two of the large vendors, but that all of them are pursuing new technologies.

“I don’t believe any of them are running Linux,” he said. “It’s not clustering. It’s a radical re-design.”

Snell said the companies are competing hard to win all-important government funding for such super systems — which are estimated to cost around $1 billion to get to the petaflop point.

Although the Top 500 fastest supercomputer list is a good general guide for the world’s most powerful computers, there is more to HPC than teraflops and petaflops, Snell stressed. He said as processor performance and computational speed ramp up, the challenge of using the compute power becomes greater.

“A lot of initiatives are looking at not only the petaflop, but useful applications at that level of computing, and using those applications,” he said.

In IDC’s recent study on the readiness of applications for petaflop computing, Snell reported that most vendors and developers again showed the capability but lacked the drive. “Most were in the category of it can be done, but we’re not doing it.”

“The ISVs, whether they can afford it or not, lack the motivation of rewriting code for the upper echelon of the market,” he said. “It’s not a big market.”

Still, Linux shows its might when it comes to development, and Snell said the operating system showed continued strength in overall HPC in the last quarterly report.

“Last quarter, half [of HPC systems] is clusters, and most of the clusters by far are Linux,” he said. “So Linux is playing a growing role in HPC in general.” According to Snell, Linux was the crux of HPC strategies from IBM and HP, and Dell was “coming around.” Snell also warned against companies developing their own flavors of HPC Linux. “You wind up back with the closed, proprietary Unix world you were moving away from.”

“It’s hard to compete with millions of developers who work for the love of it,” he said. “With Linux, the pace of development is advancing very quickly, and people like it. There’s no reason it wouldn’t continue to grow in HPC.”

Tilak Agerwala, vice president of systems at IBM Research, said reliance on Linux is likely to continue through to the next generation of supercomputers.

“In general, we’re seeing a growing trend toward Linux across the board,” he said. “There’s just going to be more and more Linux visible and deployed at the high end and for commodity clusters. There will probably be some holdouts,” he added, referring mostly to RISC/Unix efforts. “But from a pure technical perspective, Linux will continue to grow.”

Agerwala indicated there will be a role for Linux in the middleware stack and tools behind the next round of supercomputers, as well. “I think a lot of evolution will happen on the Linux platform,” he said, echoing Strohmaier’s call for more multi-threaded computing with Linux.

“One of the reasons I think [Linux in supercomputing] will grow is because we don’t really have a good HPC stack internationally,” he said. “It will be built and it will be based on full functional Linux.”

Agerwala referred to the international competition, indicating that China’s dedication to open source software and open systems makes the use of Linux in its supercomputing effort highly likely. As for Japan, he said the only clear thing is that the nation has vowed to take supercomputing all the way to 10 petaflops by 2010-2011. While the details remain unclear, Japan did hold the Top 500 title for five consecutive lists with the Earth Simulator and is no doubt an HPC force.

“We’ve thrown down the gauntlet,” he said. “So there’s sort of a need to take over back in the lead position [for Japan]. It’s a grand challenge. We’re always concerned because they do have the ability to pull it off.”