By Timothy Prickett Morgan
If it wasn’t immediately obvious to you, Intel thinks the future of the systems business is weaving interconnection fabrics onto server processors, thus consolidating yet another component of the data center onto the processor and bringing Chipzilla’s wafer etching process advantages to bear on that unified chip. And, if Intel plays its cards right, giving it a sustainable advantage to keep arch-nemesis Advanced Micro Devices and the up-and-coming rivals in the ARM collective at bay.
“We used to think of a server as a computer, but now the data center has become the computer,” Raj Hazra, general manager of technical computing at Intel, told El Reg. “There is a difference between networks and fabrics, and while there is a place for networks, they lack certain optimizations that fabrics have. Some applications need purpose-built interconnects, and fabrics look at compute and storage nodes as partitioned logical resources rather than as separate units of compute and storage. Problems are becoming superscalar across multiple machines, and that is driving new approaches to adding bandwidth and reducing latencies in that bandwidth. The fabric interconnect has become what was the system bus or processor interface.”
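By way of illustration, and not anything specific to Intel’s fabric plans, here is a minimal message-passing sketch in C using plain MPI. Each node chews on data in its own local memory, and the interconnect carries the partial results back, doing the job the system bus does inside a single box:

```c
/* A minimal MPI sketch (not Intel-specific): every rank computes on its own
 * memory and the fabric carries the partial results back to rank 0.
 * Build with an MPI toolchain, e.g. mpicc fabric_sum.c -o fabric_sum,
 * and run with mpirun -np 4 ./fabric_sum. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each node owns its slice of the problem in its own local memory;
     * nothing is shared, so anything another node needs has to travel
     * over the fabric as an explicit message. */
    const long chunk = 1000000;
    double local_sum = 0.0;
    for (long i = 0; i < chunk; i++)
        local_sum += (double)(rank * chunk + i);

    /* This reduction is where the interconnect does the work the system
     * bus or processor interface used to do inside one machine. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum across %d ranks: %f\n", nprocs, global_sum);

    MPI_Finalize();
    return 0;
}
```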
The problem, of course, is that many applications are so big that they cannot be solved in a shared memory system that gangs multiple processors together in an SMP or NUMA cluster. SMP and NUMA systems pretty much run out of gas after 32 sockets, and there is not much more you can do about it beyond cramming more cores into a socket. Shared memory systems make programming easier because coders don’t have to deal with parallelism themselves; that work is done by the processor, the chipset, and the memory controllers, which make a moderately parallel machine look more monolithic.
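For contrast, here is a minimal shared-memory sketch in C with OpenMP, assuming nothing beyond a compiler that supports it. One pragma is the extent of the parallel programming; the hardware keeps every core’s view of memory coherent, with no partitioning or message passing by the coder:

```c
/* A minimal shared-memory sketch (assuming an OpenMP-capable compiler,
 * e.g. gcc -fopenmp smp_sum.c -o smp_sum). Every thread reads the same
 * array in the same physical memory; the processor, chipset, and memory
 * controllers keep that view coherent for the programmer. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 10000000

int main(void)
{
    double *data = malloc(N * sizeof(double));
    for (long i = 0; i < N; i++)
        data[i] = (double)i;

    double sum = 0.0;

    /* One pragma and the runtime splits the loop across cores; no data
     * is moved by hand because all cores see the same memory. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += data[i];

    printf("sum over %d elements with %d threads: %f\n",
           N, omp_get_max_threads(), sum);

    free(data);
    return 0;
}
```

The ease is real, but it only stretches as far as a single coherent memory space does, which is exactly the 32-socket wall the fabric approach is meant to get around.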