Highlights - November 2019
This is the 54th edition of the TOP500.
Since June 2019 only petaflop systems have been able to make the list. The total aggregate performance of all 500 systems has now risen to 1.65 exaflops.
The two IBM-built systems Summit and Sierra, installed at DOE's Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, kept the first two positions on the TOP500 for the USA.
The share of installations in China continues to rise strongly: 45.6% of all systems are now listed as being installed in China. The share of systems listed in the USA remains near its all-time low at 23.4%.
However, systems in the USA are on average larger, which allowed the USA
(37.1%) to stay close to China
(32.3%) in terms of installed performance.
There were no changes at the very top of the list. The first new system shows up only at position 24. It is an IBM POWER9-based system utilizing NVIDIA Volta GV100 GPUs, which allowed it to capture the No. 3 spot on the Green500 list.
Highlights from the Top 10
Summit and Sierra, both installed in the USA, kept the #1 and #2 spots.
Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, remains at the #1 spot with a performance of 148.6 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,608 nodes, each housing two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs, each of which has 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
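Summit's core count of 2,414,592 in the table below is consistent with the TOP500 convention of counting each GPU streaming multiprocessor (SM) as a core. A minimal Python sketch, assuming that convention, reproduces the listed number from the node configuration above.

```python
# Sketch: reproduce Summit's listed TOP500 core count from its node configuration.
# Assumes each V100 streaming multiprocessor (SM) counts as one core.
nodes = 4608                  # Summit nodes
cpu_cores_per_node = 2 * 22   # two 22-core POWER9 CPUs per node
gpu_sms_per_node = 6 * 80     # six V100 GPUs with 80 SMs each

total_cores = nodes * (cpu_cores_per_node + gpu_sms_per_node)
print(total_cores)  # 4608 * 524 = 2,414,592, matching the table entry
```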
Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, stayed at #2. Its architecture is very similar to that of the #1 system Summit. It is built with 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
Sunway TaihuLight, a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, led the list for the first two years of its life and is now listed at the #3 position with 93 Pflop/s.
Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, remained the No. 4 system with 61.4 Pflop/s.
Frontera, a Dell C6420 system installed at the Texas Advanced Computing Center of the University of Texas earlier this year, is listed at No. 5. It achieved 23.5 Pflop/s using 448,448 of its Intel Xeon cores.
Rank | Site | System | Cores | Rmax (TFlop/s) | Rpeak (TFlop/s) | Power (kW)
1 | DOE/SC/Oak Ridge National Laboratory, United States | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | 2,414,592 | 148,600.0 | 200,794.9 |
2 | DOE/NNSA/LLNL, United States | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | 1,572,480 | 94,640.0 | 125,712.0 |
3 | National Supercomputing Center in Wuxi, China | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway | 10,649,600 | 93,014.6 | 125,435.9 |
4 | National Super Computer Center in Guangzhou, China | Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 | 4,981,760 | 61,444.5 | 100,678.7 |
5 | Texas Advanced Computing Center/Univ. of Texas, United States | Frontera - Dell C6420, Xeon Platinum 8280 28C 2.7GHz, Mellanox InfiniBand HDR | 448,448 | 23,516.4 | 38,745.9 |
6 | Swiss National Supercomputing Centre (CSCS), Switzerland | Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100 | 387,872 | 21,230.0 | 27,154.3 |
7 | DOE/NNSA/LANL/SNL, United States | Trinity - Cray XC40, Xeon E5-2698v3 16C 2.3GHz, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect | 979,072 | 20,158.7 | 41,461.2 |
8 | National Institute of Advanced Industrial Science and Technology (AIST), Japan | AI Bridging Cloud Infrastructure (ABCI) - PRIMERGY CX2570 M4, Xeon Gold 6148 20C 2.4GHz, NVIDIA Tesla V100 SXM2, Infiniband EDR | 391,680 | 19,880.0 | 32,576.6 |
9 | Leibniz Rechenzentrum, Germany | SuperMUC-NG - ThinkSystem SD650, Xeon Platinum 8174 24C 3.1GHz, Intel Omni-Path | 305,856 | 19,476.6 | 26,873.9 |
10 | DOE/NNSA/LLNL, United States | Lassen - IBM Power System AC922, IBM POWER9 22C 3.1GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Tesla V100 | 288,288 | 18,200.0 | 23,047.2 |
At No. 6 is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, and the most powerful system in Europe.
Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories and located at Los Alamos, achieved 20.2 Pflop/s, which puts it at the No. 7 position.
The AI Bridging Cloud Infrastructure (ABCI) is installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) and listed as No. 8 with a performance of 19.9 Pflop/s. The Fujitsu-built system uses 20-core Xeon Gold processors together with NVIDIA Tesla V100 GPUs.
SuperMUC-NG is the next-generation high-end supercomputer at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching near Munich. With over 300,000 cores and an HPL performance of 19.5 Pflop/s it is listed at No. 9.
The system Lassen at No. 10 is an IBM Power System with NVIDIA Tesla V100 accelerators and a performance of 18.2 Pflop/s.
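Because the table above lists both Rmax and Rpeak, the HPL efficiency of each system (the fraction of theoretical peak actually achieved on the benchmark) can be read off directly. The short Python sketch below does this for the top five entries, using only values from the table.

```python
# Sketch: HPL efficiency (Rmax / Rpeak) for the top five systems,
# using the Rmax and Rpeak values (in TFlop/s) from the table above.
systems = {
    "Summit":            (148600.0, 200794.9),
    "Sierra":            (94640.0, 125712.0),
    "Sunway TaihuLight": (93014.6, 125435.9),
    "Tianhe-2A":         (61444.5, 100678.7),
    "Frontera":          (23516.4, 38745.9),
}

for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {100 * rmax / rpeak:.1f}% of peak")  # e.g. Summit ~74.0%
```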
Highlights from the List
- A total of 145 systems on the list use accelerator/co-processor technology, up from 134 six months ago. 94 of these use NVIDIA Volta chips and 30 use NVIDIA Pascal.
- Intel continues to provide the processors for the largest share (94.8 percent) of TOP500 systems.
- We have incorporated the HPCG benchmark results into the TOP500 list to provide a more balanced look at performance. The two top DOE systems, Summit and Sierra, also lead with respect to HPCG performance.
- Japanese systems continue to take leading roles in the Green500. However, the top two DOE systems, Summit and Sierra, also make the top 10 in the Green500 and demonstrate the progress in power efficiency.
- The entry level to the list moved up to the 1,142.0 Tflop/s mark on the Linpack benchmark.
- The last system on the newest list was listed at position 399 in the previous TOP500.
- Total combined performance of all 500 systems exceeded the exaflop barrier and now stands at 1.65 exaflop/s (Eflop/s), up from 1.56 Eflop/s six months ago.
- The entry point for the TOP100 increased to 2.57 Pflop/s.
- The average concurrency level in the TOP500 is 126,308 cores per system, up from 118,213 six months ago; a short sketch for recomputing such list-wide statistics from the downloadable list data follows below.
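Statistics like the ones above can be recomputed from the publicly downloadable list data. The sketch below is purely illustrative: it assumes the November 2019 list has been exported to a CSV file named top500_nov2019.csv with columns "Rmax [TFlop/s]" and "Cores"; the actual column names in the download may differ.

```python
# Sketch: recompute aggregate statistics quoted above from an exported TOP500 list.
# Assumptions: file name and column names below are hypothetical; adjust to the real export.
import pandas as pd

df = pd.read_csv("top500_nov2019.csv")

total_eflops = df["Rmax [TFlop/s]"].sum() / 1_000_000    # TFlop/s -> Eflop/s
entry_level = df["Rmax [TFlop/s]"].min()                  # slowest system on the list
top100_entry = df["Rmax [TFlop/s]"].nlargest(100).min()   # entry point for the TOP100
avg_cores = df["Cores"].mean()                            # average concurrency

print(f"Aggregate performance: {total_eflops:.2f} Eflop/s")
print(f"Entry level: {entry_level:.1f} TFlop/s")
print(f"TOP100 entry point: {top100_entry / 1000:.2f} Pflop/s")
print(f"Average cores per system: {avg_cores:,.0f}")
```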
General Trends
Installations by countries/regions:
TOP 10 HPC manufacturer:
TOP 10 Interconnect Technologies:
TOP 10 Processor Technologies:
Green500
- The data collection and curation of the Green500 project have been integrated with the TOP500 project. This allows submission of all data through a single webpage at http://top500.org/submit
Rank | TOP500 Rank | System | Site | Cores | Rmax (TFlop/s) | Power (kW) | Power Efficiency (GFlops/Watt)
1 | 159 | A64FX prototype - Fujitsu A64FX, Fujitsu A64FX 48C 2GHz, Tofu interconnect D | Fujitsu Numazu Plant, Japan | 36,864 | 1,999.5 | | 16.876
2 | 420 | NA-1 - ZettaScaler-2.2, Xeon D-1571 16C 1.3GHz, Infiniband EDR, PEZY-SC2 700Mhz | PEZY Computing K.K., Japan | 1,271,040 | 1,303.2 | | 16.256
3 | 24 | AiMOS - IBM Power System AC922, IBM POWER9 20C 3.45GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Volta GV100 | Rensselaer Polytechnic Institute Center for Computational Innovations (CCI), United States | 130,000 | 8,045.0 | | 15.771
4 | 373 | Satori - IBM Power System AC922, IBM POWER9 20C 2.4GHz, Infiniband EDR, NVIDIA Tesla V100 SXM2 | MIT/MGHPCC Holyoke, MA, United States | 23,040 | 1,464.0 | | 15.574
5 | 1 | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/SC/Oak Ridge National Laboratory, United States | 2,414,592 | 148,600.0 | | 14.719
6 | 8 | AI Bridging Cloud Infrastructure (ABCI) - PRIMERGY CX2570 M4, Xeon Gold 6148 20C 2.4GHz, NVIDIA Tesla V100 SXM2, Infiniband EDR | National Institute of Advanced Industrial Science and Technology (AIST), Japan | 391,680 | 19,880.0 | | 14.423
7 | 494 | MareNostrum P9 CTE - IBM Power System AC922, IBM POWER9 22C 3.1GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Tesla V100 | Barcelona Supercomputing Center, Spain | 18,360 | 1,145.0 | | 14.131
8 | 23 | TSUBAME3.0 - SGI ICE XA, IP139-SXM2, Xeon E5-2680v4 14C 2.4GHz, Intel Omni-Path, NVIDIA Tesla P100 SXM2 | GSIC Center, Tokyo Institute of Technology, Japan | 135,828 | 8,125.0 | | 13.704
9 | 11 | PANGEA III - IBM Power System AC922, IBM POWER9 18C 3.45GHz, Dual-rail Mellanox EDR Infiniband, NVIDIA Volta GV100 | Total Exploration Production, France | 291,024 | 17,860.0 | | 13.065
10 | 2 | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/NNSA/LLNL, United States | 1,572,480 | 94,640.0 | | 12.723
The most energy-efficient system and No. 1 on the Green500 is a new Fujitsu A64FX prototype installed at Fujitsu's Numazu plant in Japan. It achieved 16.9 GFlops/Watt power efficiency during its 2.0 Pflop/s Linpack performance run. It is listed at position 159 in the TOP500.
In second position is the NA-1 system, a PEZY Computing / Exascaler Inc. system which is currently being readied at PEZY Computing, Japan for a future installation at NA Simulation in Japan. It achieved 16.3 GFlops/Watt power efficiency and is at position 420 in the TOP500.
No. 3 on the Green500 is AiMOS, a new IBM Power System AC922 installed at the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI) in New York, USA. It achieved 15.8 GFlops/Watt and is listed at position 24 in the TOP500.
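As a quick unit check: with Rmax given in TFlop/s and power draw in kW, the Green500 efficiency in GFlops/Watt is simply Rmax divided by power, since both prefixes scale by 1000. The sketch below uses the A64FX prototype's Rmax and efficiency from the table above to back out its implied power draw (a derived, approximate figure, not a value reported here).

```python
# Sketch: relation between Rmax, power, and Green500 efficiency.
# GFlops/Watt = (Rmax in TFlop/s) / (power in kW).
rmax_tflops = 1999.5           # A64FX prototype Rmax from the table above
efficiency_gf_per_w = 16.876   # its Green500 power efficiency

implied_power_kw = rmax_tflops / efficiency_gf_per_w
print(f"Implied power draw: {implied_power_kw:.1f} kW")  # roughly 118 kW
```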
HPCG Results
- The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) benchmark results.
Rank | TOP500 Rank | System | Site | Cores | Rmax (TFlop/s) | HPCG (TFlop/s)
1 | 1 | Summit - IBM Power System AC922, IBM POWER9 22C 3.07GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/SC/Oak Ridge National Laboratory, United States | 2,414,592 | 148,600.0 |
2 | 2 | Sierra - IBM Power System AC922, IBM POWER9 22C 3.1GHz, NVIDIA Volta GV100, Dual-rail Mellanox EDR Infiniband | DOE/NNSA/LLNL, United States | 1,572,480 | 94,640.0 |
3 | 7 | Trinity - Cray XC40, Xeon E5-2698v3 16C 2.3GHz, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect | DOE/NNSA/LANL/SNL, United States | 979,072 | 20,158.7 |
4 | 8 | AI Bridging Cloud Infrastructure (ABCI) - PRIMERGY CX2570 M4, Xeon Gold 6148 20C 2.4GHz, NVIDIA Tesla V100 SXM2, Infiniband EDR | National Institute of Advanced Industrial Science and Technology (AIST), Japan | 391,680 | 19,880.0 |
5 | 6 | Piz Daint - Cray XC50, Xeon E5-2690v3 12C 2.6GHz, Aries interconnect, NVIDIA Tesla P100 | Swiss National Supercomputing Centre (CSCS), Switzerland | 387,872 | 21,230.0 |
6 | 3 | Sunway TaihuLight - Sunway MPP, Sunway SW26010 260C 1.45GHz, Sunway | National Supercomputing Center in Wuxi, China | 10,649,600 | 93,014.6 |
7 | 14 | Nurion - Cray CS500, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path | Korea Institute of Science and Technology Information, South Korea | 570,020 | 13,929.3 |
8 | 15 | Oakforest-PACS - PRIMERGY CX1640 M1, Intel Xeon Phi 7250 68C 1.4GHz, Intel Omni-Path | Joint Center for Advanced High Performance Computing, Japan | 556,104 | 13,554.6 |
9 | 13 | Cori - Cray XC40, Intel Xeon Phi 7250 68C 1.4GHz, Aries interconnect | DOE/SC/LBNL/NERSC, United States | 622,336 | 14,014.7 |
10 | 17 | Tera-1000-2 - Bull Sequana X1000, Intel Xeon Phi 7250 68C 1.4GHz, Bull BXI 1.2 | Commissariat a l'Energie Atomique (CEA), France | 561,408 | 11,965.5 |
- The two DOE systems Summit at ORNL and Sierra at LLNL grabbed the first 2 positions on the HPCG benchmark.
Summit achieved 2.93 HPCG-Pflop/s and Sierra 1.80 HPCG-Pflop/s.
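The gap between HPL and HPCG results illustrates why the second benchmark gives a more balanced picture of real-world performance. A quick calculation from the numbers quoted above (a sketch using only the values reported in this list):

```python
# Sketch: HPCG results as a fraction of HPL (Rmax), using the values quoted above.
results = {
    "Summit": {"hpl_pflops": 148.6, "hpcg_pflops": 2.93},
    "Sierra": {"hpl_pflops": 94.6, "hpcg_pflops": 1.80},
}

for name, r in results.items():
    ratio = 100 * r["hpcg_pflops"] / r["hpl_pflops"]
    print(f"{name}: HPCG is {ratio:.1f}% of HPL")  # roughly 2% for both systems
```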
About the TOP500 List
The first version of what became today’s TOP500 list started as an exercise for a small conference in
Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how
things had changed. About that time they realized they might be onto something and decided to continue compiling
the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.