News

Jülich Supercomputing Centre Deploys Cray and IBM Systems for Human Brain Project

Sept. 30, 2016

By: Michael Feldman

The Jülich Supercomputing Centre has installed a pair of HPC systems to support neuroscience applications as part of a special EU-funded procurement for the European Human Brain Project (HBP). The two machines, JURON, from IBM, and JULIA, from Cray, are pilot systems that will be used to evaluate technologies and architectures for a much larger HBP supercomputer down the line.

A big part of that effort will revolve around NEST, a neural simulation tool that will be used to determine how well the two systems can model the brain. New methods for data and image analysis are also being developed to exercise the machines. One of these aims to let researchers interactively steer brain simulations and visualizations while they run, a level of control that demands greater performance and functionality than batch processing, the conventional way of running applications on supercomputers.
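To give a flavor of the workload, here is a minimal PyNEST sketch along the lines of NEST’s introductory tutorials: it creates a single integrate-and-fire neuron, drives it with a constant current, and records its membrane potential. The model name, current value, and simulation time are illustrative defaults, not parameters from the HBP benchmarks.

    import nest

    # Create one leaky integrate-and-fire neuron with alpha-shaped
    # post-synaptic currents (a standard built-in NEST model).
    neuron = nest.Create("iaf_psc_alpha")

    # Drive it with a constant input current (in pA) strong enough to
    # make it spike; 376.0 pA is the value used in the NEST tutorials.
    nest.SetStatus(neuron, {"I_e": 376.0})

    # Attach a voltmeter to record the membrane potential over time.
    voltmeter = nest.Create("voltmeter")
    nest.Connect(voltmeter, neuron)

    # Advance the simulation by one second of biological time (in ms).
    nest.Simulate(1000.0)

Production HBP runs scale this same API up to networks of millions of neurons distributed across MPI ranks, which is what stresses the memory, interconnect, and I/O of machines like JURON and JULIA.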

JULIA, JUST, and JURON. Source: Forschungszentrum Jülich

JURON, the IBM system, is a two-rack cluster based on a heterogeneous architecture that includes Power8 CPUs and NVIDIA’s P100 Pascal GPUs, with NVLink as the CPU-GPU and GPU-GPU interconnect. Thanks to the on-board GPUs, each server can do double duty as a visualization node. Other features include NVRAM for extra node-level storage and Mellanox InfiniBand EDR (100 Gbps) for the system interconnect. The system is hooked up to an IBM GPFS storage cluster known as JUST.

Cray’s JULIA system also fills two racks, but in this case the machine is powered by Intel technology: Knights Landing Xeon Phi processors and the 100 Gbps Omni-Path network. Cray’s DataWarp caching technology has been included to speed up I/O. Like JURON, JULIA comes with NVRAM for additional local storage, and it too is connected to the same GPFS cluster as its IBM counterpart.

The architecture determined to be the best-suited for this particular neuroscience application will be the basis for a future purchase of a 50-petaflop supercomputer with 20 petabytes of memory. That procurement process is already underway and is scheduled to be concluded in 2017.

Given that timeframe, it’s probably no coincidence that the pilot systems represent the two architectures and corresponding vendor alliances that the US Department of Energy (DOE) has already tapped to deliver pre-exascale supercomputers in the 2017-2018 timeframe. Those DOE machines will exceed 50 petaflops, so either solution would certainly be powerful enough for what the HBP has in mind.

That 50-petaflop system is not the end of the line, however. The ultimate goal of the project is to perform cellular-level simulations of all 100 billion neurons of an entire human brain. For that, the HBP researchers are counting on having an exaflop supercomputer with 100 petabytes of memory before the 10-year project concludes in September 2023. Given that the US DOE is planning to field its first exascale systems in that same year, again presumably based on those two architectures, there’s a rather small window of opportunity for the HBP participants to reach their objective. So for them, the race to exascale has a special urgency.
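Those targets imply a tight memory budget. As a rough back-of-envelope check, assuming the commonly cited figure of about 10,000 synapses per neuron (an assumption, not a number from the article), the arithmetic works out as follows:

    # Back-of-envelope memory budget for a cellular-level whole-brain
    # simulation, using the article's targets plus one assumption.
    NEURONS = 100e9               # 100 billion neurons (from the article)
    MEMORY_BYTES = 100e15         # 100 petabytes of memory (from the article)
    SYNAPSES_PER_NEURON = 10_000  # assumption: commonly cited estimate

    bytes_per_neuron = MEMORY_BYTES / NEURONS
    bytes_per_synapse = bytes_per_neuron / SYNAPSES_PER_NEURON

    print(f"Memory per neuron:  {bytes_per_neuron / 1e6:.1f} MB")  # ~1.0 MB
    print(f"Memory per synapse: {bytes_per_synapse:.0f} bytes")    # ~100 bytes

At roughly 100 bytes per synapse, there is little headroom beyond connection weights and minimal state, which suggests why memory capacity, as much as raw flops, figures so prominently in the exascale requirement.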