March 7, 2018
By: Michael Feldman
An international team of researchers has developed an algorithm that represents a significant step toward simulating an entire human brain on future exascale systems.
The new algorithm enables representations of neurons and synapses to use far less memory than before, while also speeding up the computations required to simulate them. The software has been incorporated into NEST, an open source code that has been widely adopted in the neuroscience community to model neural networks and is a core technology of the European Human Brain Project. The work has been published in Frontiers in Neuroinformatics, in a paper titled “Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.”
The work involved researchers at the Jülich Research Centre, Aachen University, RIKEN, KTH Royal Institute of Technology, and the Norwegian University of Life Sciences. Markus Diesmann, Director at the Jülich Institute of Neuroscience and Medicine and one of the original NEST developers, was one of the lead authors of the study. He has been employing two of the world’s largest supercomputers – the K computer at RIKEN and JUQUEEN at the Jülich Supercomputing Centre – to run large-scale simulations of neural networks using NEST. “Since 2014, our software can simulate about one percent of the neurons in the human brain with all their connections,” noted Diesmann.
The limitation mostly has to do with memory capacity, rather than flops. According to the announcement of the new algorithm, the problem is driven by the large number of cellular structures involved and the scale of neuronal connectivity. To model an entire human brain, comprising around 100 billion neurons, each with thousands of synaptic connections, would require more memory than is currently available in the most powerful systems – and, unfortunately, even in the exascale systems that have been envisioned thus far.
The memory limitation is encountered at the level of the node. As NEST is currently set up, the data representing all of the network’s neurons and connections are stored on each node. The research paper outlines the problem as follows: “The human cortex consists of about 10¹⁰ cells, each receiving about 10⁴ connections, which leads to an estimated 10¹⁴ synapses. Representing each of the connections by two double precision numbers requires about 1.6 PB of main memory.”
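The 1.6 PB figure can be reproduced with a quick back-of-the-envelope calculation. The sketch below is not taken from the paper itself; it simply spells out the arithmetic, assuming standard 8-byte IEEE 754 doubles and decimal petabytes:

```python
# Back-of-the-envelope check of the memory estimate quoted from the paper.
neurons = 10**10                 # cells in the human cortex
connections_per_neuron = 10**4
synapses = neurons * connections_per_neuron          # ~10^14 synapses

doubles_per_connection = 2       # per the paper's description
bytes_per_double = 8             # IEEE 754 double precision

total_bytes = synapses * doubles_per_connection * bytes_per_double
print(f"{total_bytes / 1e15:.1f} PB")                # -> 1.6 PB
```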
Ultimately, the problem is that while the memory needed for the simulation increases linearly with the size of the neural network, memory capacity per node is growing more slowly. More to the point, the number of processor cores per node will increase considerably in exascale supercomputers, but memory per core is essentially going to stay the same.
That realization led the NEST developers to a new approach, in which the memory required on each node no longer has to grow in step with the size of the neural network. Instead, the neural network is pre-processed to determine which neurons are most likely to be interacting with each other, and that information is used to set up the data structures accordingly. As a result, the researchers were able to limit the amount of memory needed on a given node.
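To make the idea concrete, here is a highly simplified conceptual sketch of the difference, not NEST’s actual data structures (the toy network, the round-robin neuron placement, and all names are invented for illustration). Instead of every node allocating an entry for every neuron in the network, each node keeps entries only for the source neurons that actually have targets among its locally hosted neurons:

```python
import random

NUM_NEURONS = 1_000   # toy network; real networks have billions of neurons
NUM_NODES = 100       # compute nodes sharing the simulation
FANOUT = 5            # outgoing connections per neuron (illustrative only)

random.seed(0)
# Randomly generated toy connectivity: source neuron -> list of target neurons.
connections = {
    src: random.sample(range(NUM_NEURONS), FANOUT) for src in range(NUM_NEURONS)
}

def node_of(neuron):
    """Round-robin placement of neurons onto compute nodes."""
    return neuron % NUM_NODES

def old_style_tables():
    """Every node holds an entry for every neuron in the network,
    so per-node memory grows with the total network size."""
    return {node: {src: [] for src in range(NUM_NEURONS)}
            for node in range(NUM_NODES)}

def new_style_tables():
    """Pre-process the connectivity so each node only holds entries for
    source neurons that actually target one of its local neurons."""
    tables = {node: {} for node in range(NUM_NODES)}
    for src, targets in connections.items():
        for tgt in targets:
            tables[node_of(tgt)].setdefault(src, []).append(tgt)
    return tables

old = old_style_tables()
new = new_style_tables()
print("entries on node 0 (old):", len(old[0]))   # == NUM_NEURONS
print("entries on node 0 (new):", len(new[0]))   # only sources with local targets
```

In this toy example the per-node table shrinks from 1,000 entries to a few dozen; at brain scale, where each node hosts a vanishingly small fraction of the network, the savings are what make the memory footprint tractable.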
Even better, the algorithm improved the parallelism of the simulation, speeding it up dramatically. In one case, using JUQUEEN to simulate one second of biological time across 520 million neurons and 5.8 trillion synapses, computation time was reduced from 28.5 minutes to 5.2 minutes. If the new algorithm is able to scale linearly on an exaflop supercomputer of similar design, that time could be reduced to 15 seconds.
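For perspective, a quick calculation with the JUQUEEN figures reported above shows the measured improvement (the framing as a slowdown factor relative to real time is an added interpretation, not a number from the paper):

```python
# Quick arithmetic on the JUQUEEN benchmark figures quoted above.
biological_time_s = 1.0          # one second of biological time simulated
old_wallclock_s = 28.5 * 60      # 28.5 minutes with the previous code
new_wallclock_s = 5.2 * 60       # 5.2 minutes with the new algorithm

print(f"speedup: {old_wallclock_s / new_wallclock_s:.1f}x")        # ~5.5x faster
print(f"old slowdown vs real time: {old_wallclock_s / biological_time_s:.0f}x")  # ~1710x
print(f"new slowdown vs real time: {new_wallclock_s / biological_time_s:.0f}x")  # ~312x
```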
“The combination of exascale hardware and appropriate software brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes of biological time, within our reach,” said Diesmann.
The researchers intend to make the new algorithm freely available in a future version of NEST.