News

HPE Unveils New Prototype of Memory-Driven Computer

May 16, 2017

By: Michael Feldman

Hewlett Packard Enterprise has introduced what looks to be the final prototype of “The Machine,” an HPE research project aimed at developing a memory-driven computing architecture for the era of big data. According to HPE CTO Mark Potter: “The architecture we have unveiled can be applied to every computing category—from intelligent edge devices to supercomputers.”

 

Source: Hewlett Packard Enterprise

 

This latest prototype adds a number of key elements not seen in the previous version unveiled in November 2016. For starters, the new prototype houses 160 TB of shared memory spread across 40 nodes, which, according to HPE, makes it “the largest single-memory computing system on the planet.” That’s enough memory to hold a dataset of 3D images illustrating the network of connections between cortical neurons in the brain, or, more mundanely, to store the contents of the Library of Congress five times over.

Also, whereas the original prototype was outfitted with unspecified processors, this new one is equipped with Cavium’s next-generation ThunderX2 SoCs, which are based on the 64-bit ARMv8 architecture. Apparently, these are 32-core chips, although Cavium talked about 54-core versions when it introduced the ThunderX2 back in 2016.

It’s the memory, though, that sits at the center of this architecture. On that count, we learned that the prototype is using HPE Persistent Memory, in this case 128 GB DIMMs built from battery-backed conventional DRAM*. The company currently offers similar 8 GB NVDIMMs as options on its ProLiant DL360 and DL380 servers.
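For a sense of how that memory is distributed, the published figures imply roughly 4 TB of fabric-attached memory per node, or about 32 of those 128 GB DIMMs on each of the 40 nodes. Here is a minimal sketch of that arithmetic; the per-node breakdown is our own back-of-the-envelope inference, not an HPE-published configuration:

```python
# Back-of-the-envelope memory layout implied by the published figures.
# The per-node breakdown is an inference (assuming binary units),
# not an HPE-published configuration.
TOTAL_MEMORY_TB = 160
NODES = 40
DIMM_SIZE_GB = 128
GB_PER_TB = 1024  # assuming binary units

memory_per_node_tb = TOTAL_MEMORY_TB / NODES                    # 4 TB per node
dimms_per_node = memory_per_node_tb * GB_PER_TB / DIMM_SIZE_GB  # 32 DIMMs per node

print(f"{memory_per_node_tb:.0f} TB per node, "
      f"{dimms_per_node:.0f} x {DIMM_SIZE_GB} GB DIMMs per node")
```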

As you may remember, HPE’s original plan was to use memristors as the basis of The Machine’s memory. Alas, development of that technology has lagged, so the company turned to less exotic hardware. The HPE Persistent Memory used in the prototype is an interim step toward a more advanced byte-addressable non-volatile memory, which may or may not turn out to be memristors once the dust has settled.

Currently, HPE is co-developing a type of resistive RAM (ReRAM) with SanDisk, which may end up being the memory technology of choice for this architecture if it is ever productized. A report in Scientific American says HPE plans to move to phase-change random access memory (PRAM) and memristors over the next few years. In any case, for this data-centric architecture to be commercially viable, HPE will be compelled to move to a less expensive medium than either DRAM or NAND.

But the real bit of wizardry in all this is making the memory spread across the individual nodes behave as a single store. That’s accomplished with a memory fabric switch that rides on HPE’s X1 photonic interconnect. Using light instead of electrons enables this interconnect to shuttle data between the nodes at high speed – up to 1.2 terabytes per second – over optical fibers, and to do so with much less energy than electronic communications would require.

Thanks to that level of interconnect performance, HPE thinks it can “easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory—4,096 yottabytes.” An exabyte of memory could contain the health or financial records for an entire country’s population, to give just a couple of relevant examples. Unfortunately, an exabyte of DRAM or NAND is going to cost billions or hundreds of millions of dollars, respectively. So, unless HPE can develop a memory technology that reduces those costs substantially, systems at this scale won’t be practical.
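To put rough numbers on that cost claim, here is a minimal sketch of the arithmetic, assuming 2017-era street prices of roughly $7 per GB for DRAM and $0.30 per GB for NAND; both prices are our own assumptions, not figures from HPE:

```python
# Back-of-the-envelope cost of an exabyte of memory at assumed 2017-era
# street prices (both per-GB figures are assumptions, not HPE numbers).
DRAM_PRICE_PER_GB = 7.00   # USD per GB, assumed
NAND_PRICE_PER_GB = 0.30   # USD per GB, assumed
EXABYTE_IN_GB = 10**9      # 1 EB = 10^9 GB (decimal units)

dram_cost = EXABYTE_IN_GB * DRAM_PRICE_PER_GB
nand_cost = EXABYTE_IN_GB * NAND_PRICE_PER_GB

print(f"1 EB of DRAM: ~${dram_cost / 1e9:.1f} billion")   # ~$7.0 billion
print(f"1 EB of NAND: ~${nand_cost / 1e6:.0f} million")   # ~$300 million
```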

Nevertheless, HPE has engineered a system that turns conventional architectures upside down. In particular, the memory-to-compute ratio makes this prototype a radical departure from traditional systems. For example, a typical mid-range dual-socket server with, say, 200 gigaflops of performance will have perhaps 64 GB of memory. That means its bytes/flops ratio is 0.32 – not as good as the nominal 1.0 ratio that computer scientists like to talk about, but not too shabby. For high performance computing, the situation is worse. Even a supercomputer well endowed with memory, like the K Computer, has a bytes/flops ratio of 0.12, while the top supercomputer in the world, TaihuLight, has a rather pitiful ratio of 0.014. Assuming a floating-point performance for the Cavium ThunderX2 of around 100 gigaflops, the HPE prototype would have a bytes/flops ratio of 40.0.
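For those who want to check the arithmetic, here is a minimal sketch of those ratio calculations, using the figures cited above and assuming one ~100-gigaflop ThunderX2 per node (the same assumption made in the text):

```python
# Bytes-per-flop ratios recomputed from the figures cited in the text.
GB, TB = 1e9, 1e12
GIGAFLOPS = 1e9

def bytes_per_flop(memory_bytes: float, flops: float) -> float:
    return memory_bytes / flops

# Typical mid-range dual-socket server: 64 GB of memory, ~200 gigaflops
server_ratio = bytes_per_flop(64 * GB, 200 * GIGAFLOPS)           # 0.32

# HPE prototype: 160 TB of shared memory across 40 nodes, assuming
# one ~100-gigaflop ThunderX2 per node
prototype_ratio = bytes_per_flop(160 * TB, 40 * 100 * GIGAFLOPS)  # 40.0

print(f"Mid-range server: {server_ratio:.2f} bytes/flop")
print(f"HPE prototype:    {prototype_ratio:.1f} bytes/flop")
```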

That’s an outstanding advantage when you’re talking about the kind of analytics and other data-centric workloads that can exploit in-memory computing. But for HPC applications that are both compute- and data-intensive, such as scientific simulations or deep learning, more powerful processors will be needed to make this architecture work. And that is going to incur extra costs.

For what it’s worth, HPE is not the only player developing more data-centric computing platforms. Intel, IBM, and a host of other vendors are working on rebalancing the compute-memory dynamic. Some of the component hardware technologies include 3D XPoint (Intel and Micron), 3D memory (Samsung, Hynix, and Micron), and silicon photonics (Intel and IBM), to name a few. As a consequence, by the time HPE brings its memory-driven computer to market, it should have some company.

[* The original version of this article had the prototype using 8 GB NVDIMMs with DRAM and NAND. Further investigation revealed the system is actually using battery-backed 128 GB DIMMs employing regular DRAM.]