This is the 52nd edition of the TOP500.
Two IBM-built systems, Summit and Sierra, installed at DOE's Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, returned the top two positions on the TOP500 to the USA. Both systems improved their High Performance Linpack (HPL) performance since they first appeared on the list half a year ago.
The number of installations in China continues to rise strongly. 45 percent of all systems are now listed as being installed in China. The number of systems listed in the USA continues to decline and has now reached an all-time low of 22 percent. However, systems in the USA are on average larger, which allowed the USA (38%) to stay ahead of China (31%) in terms of installed performance.
The impact of the new technology in the Summit and Sierra systems is also visible in the HPCG benchmark rating and the Green500. Both systems hold the top positions on the HPCG ranking, ahead of Japan's K computer at No. 3. On the Green500, which is traditionally dominated by smaller and more experimental systems, they managed to be listed at No. 3 and No. 6. The Green500 is again led by the Shoubu System B at RIKEN in Japan.
Summit and Sierra improved in performance and brought the #1 and #2 spots back to the USA
The No. 8 system SuperMUC-NG is newly installed. A few other systems (No. 1, 2, and 6) improved in performance.
Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains in the #1 spot with an improved performance of 143.5 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
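HPL scores like Summit's come from timing the solution of a large, dense, random linear system and dividing HPL's nominal operation count (2/3·n³ + 2·n² floating-point operations) by the elapsed time. A minimal single-node sketch of that measurement in Python/NumPy (illustrative only; the real HPL is a highly tuned, distributed LU factorization) might look like:

```python
import time
import numpy as np

def toy_hpl(n, seed=0):
    """Toy HPL-style run: solve a random dense system Ax = b and
    report the achieved flop rate. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # HPL's operation count
    residual = np.linalg.norm(A @ x - b)      # sanity check on the solve
    return flops / elapsed / 1e9, residual    # Gflop/s, residual

gflops, residual = toy_hpl(2000)
```

Scaled up to a full machine, the same ratio (operations over wall-clock time) yields the Pflop/s figures reported on the list.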
Sierra, a system at the Lawrence Livermore National Laboratory in California, USA, moved up one rank and is now listed at #2. Its architecture is very similar to that of the #1 system Summit. It is built from 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, led the list for two years but has now been pushed down to the #3 position with 93 Pflop/s.
Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, was upgraded earlier this year by replacing the Xeon Phi accelerators with the new proprietary Matrix-2000 chips. It is now the No. 4 system with 61.4 Pflop/s.
At No. 5 is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, and the most powerful system in Europe.
Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories and located at Los Alamos, improved its performance to 20.2 Pflop/s, which puts it at the No. 6 position.
The AI Bridging Cloud Infrastructure (ABCI), installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST), is listed at #7 with a performance of 19.9 Pflop/s. The Fujitsu-built system uses 20-core Xeon Gold processors together with NVIDIA Tesla V100 GPUs.
SuperMUC-NG is the next-generation high-end supercomputer at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching near Munich. With 311,040 cores and an HPL performance of 19.5 Pflop/s, it is listed at No. 8.
Titan, a Cray XK7 system installed at the Department of Energy's (DOE) Oak Ridge National Laboratory and previously the largest system in the USA, is now the No. 9 system. It achieved 17.59 Pflop/s using 261,632 of its NVIDIA K20x accelerator cores.
Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory, is the No. 10 system. It was first delivered in 2011 and has achieved 17.17 Pflop/s using 1,572,864 cores.
There are 429 systems with performance greater than a petaflop/s on the list, up from 272 six months ago.
A total of 138 systems on the list are using accelerator/co-processor technology, up from 110 six months ago. There are now 46 systems using NVIDIA Volta chips.
Intel continues to provide the processors for the largest share (95.20 percent) of TOP500 systems.
We have incorporated the HPCG benchmark results into the TOP500 list to provide a more balanced look at performance.
The two top DOE systems, Summit and Sierra, also lead with respect to HPCG performance. They are followed by the Japanese K computer, which, due to its balanced architecture and comparably high memory bandwidth, remains No. 3 on the HPCG list.
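HPCG complements HPL by timing a conjugate-gradient solve on a sparse system; its sparse matrix-vector products reward memory bandwidth rather than peak floating-point throughput, which is why the bandwidth-rich K computer fares so well on it. A bare-bones CG in Python (unpreconditioned, using SciPy for the sparse matrix; the real HPCG adds a multigrid preconditioner and a 3-D stencil) sketches the kernel being timed:

```python
import numpy as np
import scipy.sparse as sp

def cg(A, b, tol=1e-8, max_iter=500):
    """Plain conjugate gradient -- the kernel class HPCG times.
    Dominated by sparse matrix-vector products and vector updates,
    so it stresses memory bandwidth, not peak flops."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Poisson matrix as a simple stand-in for HPCG's 3-D stencil
n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = cg(A, b)
```

Because each iteration reads the whole matrix from memory while doing only a few flops per entry, systems with high memory bandwidth relative to peak flops score closer to their HPL numbers on HPCG.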
Japanese systems continue to take leading roles in the Green500. However, the two top DOE systems, Summit and Sierra, also make the top 10 in the Green500, demonstrating progress in performance efficiency.
The entry level to the list moved up to the 874.80 Tflop/s mark on the Linpack benchmark.
The last system on the newest list was listed at position 341 in the previous TOP500.
The total combined performance of all 500 systems exceeds the exaflop barrier at 1.41 exaflop/s (Eflop/s), up from 1.21 Eflop/s six months ago.
The entry point for the TOP100 increased to 1.97 Pflop/s.
The average concurrency level in the TOP500 is 118,173 cores per system, up from 116,111 six months ago.
In second position on the Green500 is the DGX SaturnV Volta system, an NVIDIA system installed at NVIDIA, USA. It achieved 15.1 gigaflops/watt energy efficiency and sits at position 375 in the TOP500.
It is followed at No. 3 by Summit at the Oak Ridge National Laboratory (ORNL) in Tennessee, which achieved 14.7 gigaflops/watt and is listed at No. 1 in the TOP500.
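A Green500 score is simply achieved HPL performance divided by measured power draw. As a quick sanity check on the figures above (the ~9.8 MW power figure below is inferred from Summit's reported numbers, not taken from the list itself):

```python
def gflops_per_watt(rmax_pflops, power_mw):
    """Green500-style efficiency: achieved HPL Rmax divided by
    measured system power, expressed in Gflop/s per watt."""
    gflops = rmax_pflops * 1e6   # Pflop/s -> Gflop/s
    watts = power_mw * 1e6       # MW -> W
    return gflops / watts

# Summit: 143.5 Pflop/s at roughly 9.8 MW (assumed, illustrative)
eff = gflops_per_watt(143.5, 9.8)   # ~14.6 GFlops/W, close to the reported 14.7
```

The same ratio explains why small experimental machines often top the Green500: it rewards efficiency per watt, not absolute scale.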
The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.