DOE Researchers Build Automated Neural Network Generator

Nov. 30, 2017

By: Michael Feldman

Using the Titan supercomputer at the Department of Energy’s Oak Ridge National Laboratory (ORNL), a research team has developed an evolutionary algorithm that they claim can generate custom neural networks that “match or exceed the performance of handcrafted artificial intelligence systems.”

The impetus behind the work was to speed up the development of neural networks for various scientific applications, which are usually very different from those built for more typical deep learning tasks like image identification, speech recognition, or game playing. That's because the scientific data itself, on which these neural networks are trained, tends to be very distinctive.

According to the press release, the algorithm the researchers developed, known as MENNDL (Multinode Evolutionary Neural Networks for Deep Learning), is designed to optimize neural networks for these unique and varied datasets. The developers were able to use Titan's 18,000-plus NVIDIA K20x GPUs to explore thousands of potential neural networks simultaneously on a given problem. The software uses Caffe, a popular deep learning framework, to perform the underlying computations, as well as MPI to distribute the work across the nodes.
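As a rough illustration of that master/worker pattern (not MENNDL's actual implementation), the sketch below uses mpi4py to scatter candidate network descriptions across MPI ranks and gather back their fitness scores; train_and_score() is a hypothetical placeholder standing in for the Caffe training and validation step on each node's GPU.

```python
# Illustrative sketch only: distributing candidate-network evaluation with MPI.
# train_and_score() is a placeholder for the per-node training/validation step.
from mpi4py import MPI


def train_and_score(candidate):
    """Placeholder: briefly train the candidate network and return a
    fitness value (e.g. validation accuracy)."""
    raise NotImplementedError


def evaluate_population(population):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Rank 0 splits the candidate list into one chunk per MPI rank.
    chunks = [population[i::size] for i in range(size)] if rank == 0 else None
    my_candidates = comm.scatter(chunks, root=0)

    # Each rank scores its own candidates on its local GPU.
    my_scores = [(c, train_and_score(c)) for c in my_candidates]

    # Collect all (candidate, fitness) pairs back on rank 0.
    all_scores = comm.gather(my_scores, root=0)
    if rank == 0:
        return [pair for chunk in all_scores for pair in chunk]
    return None
```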

The algorithm works by culling the poor-performing neural networks and refining the higher-performing ones until an optimum solution is found. By doing all this in software, MENNDL removes the trial-and-error typically performed by data scientists when they design these neural networks by hand.
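A minimal sketch of that kind of evolutionary loop is shown below; the random_network(), mutate(), and evaluate() helpers are hypothetical stand-ins for MENNDL's actual operators, which are not described in the press release.

```python
import random


def evolve(evaluate, random_network, mutate,
           population_size=100, generations=50, survivors=20):
    """Toy evolutionary search: each generation keeps the best-scoring
    networks and refills the population with mutated copies of them."""
    population = [random_network() for _ in range(population_size)]
    for _ in range(generations):
        # Score every candidate (fitness could be validation accuracy).
        scored = [(evaluate(net), net) for net in population]
        # Eliminate the poor performers; keep the top `survivors`.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        best = [net for _, net in scored[:survivors]]
        # Rebuild the population from mutated copies of the survivors.
        population = best + [mutate(random.choice(best))
                             for _ in range(population_size - survivors)]
    return max(population, key=evaluate)
```

In a production setting the evaluate() step would be the expensive part, which is why MENNDL farms it out across Titan's GPU nodes as in the earlier sketch.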

“There’s no clear set of instructions scientists can follow to tweak networks to work for their problem,” said research scientist Steven Young, a member of ORNL’s Nature Inspired Machine Learning team. “With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them, while they focus on their data and ensuring the problem is well-posed.”

With Titan's computational power, these automatically generated networks can be produced in a matter of hours or days, rather than the months a data scientist would typically require. One of MENNDL's first applications was in the field of neutrino physics. Using 800,000 images of neutrino events generated by detectors at DOE's Fermi National Accelerator Laboratory (Fermilab), the algorithm produced optimized networks that analyzed and classified the events with high accuracy. MENNDL accomplished this by evaluating about 500,000 neural network candidates in just 24 hours. The results could improve the efficiency of these neutrino measurements and deepen physicists' understanding of the underlying interactions.

The research team is already looking ahead to 2018, when Summit is scheduled to come online. That system will be outfitted with more than 27,000 of NVIDIA's new V100 GPUs, providing developers with over three exaflops of peak deep learning performance, more than 40 times what is available with Titan's five-year-old K20x GPUs.

“That means we’ll be able to evaluate larger networks much faster and evolve many more generations of networks in less time,” Young said.

Image source: Oak Ridge National Laboratory