The 61st edition of the TOP500 shows the Frontier system remaining the only true exascale machine, with an HPL score of 1.194 Exaflop/s.
The Frontier system at the Oak Ridge National Laboratory, Tennessee, USA remains the No. 1 system on the TOP500 and is still the only system reported with an HPL performance exceeding one Exaflop/s. Frontier brought the pole position back to the USA one year ago, in the June 2022 list, with an HPL score of 1.194 Exaflop/s.
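For context, the HPL benchmark measures how quickly a machine solves one very large dense system of linear equations. The toy NumPy sketch below is an illustration only, not the actual benchmark (real HPL is a distributed-memory code and the problem size here is made up); it shows how an HPL-style flop rate falls out of the standard operation count for an n × n solve:

```python
import time
import numpy as np

n = 4096  # toy problem size; real HPL runs use n in the millions
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)  # LU factorization plus triangular solves, as in HPL
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3 + (3 / 2) * n**2  # HPL's standard operation count
print(f"~{flops / elapsed / 1e9:.1f} Gflop/s")  # Frontier's 1.194 Eflop/s is this rate, scaled up
```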
Frontier is based on the latest HPE Cray EX235a architecture and is equipped with AMD EPYC 64C 2GHz processors. The system has 8,699,904 total cores, a power efficiency rating of 52.23 gigaflops/watt, and relies on the Slingshot-11 interconnect for data transfer.
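Taken together, these two numbers give a rough cross-check on Frontier's power draw, assuming the efficiency figure was measured on the same HPL run:

$$P \approx \frac{1.194 \times 10^{18}\ \text{flop/s}}{52.23 \times 10^{9}\ \text{flop/s per watt}} \approx 22.9\ \text{MW}$$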
The top position was previously held from June 2020 until November 2021 by the Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. With its HPL benchmark score of 442 Pflop/s, Fugaku is now listed as No. 2.
The LUMI system at EuroHPC/CSC in Finland entered the list at No. 3 in June 2022. After an upgrade last November, it retains the No. 3 position with an HPL score of 309.1 Pflop/s and remains the largest system in Europe.
The Leonardo system at EuroHPC/CINECA in Italy was first listed six months ago at No. 4. It was upgraded as well and remains No. 4 with an improved HPL score of 238.7 Pflop/s. It is based on the Atos BullSequana XH2000 architecture.
Here is a brief summary of the systems in the Top 10:
Frontier is the No. 1 system in the TOP500. This HPE Cray EX system is the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It achieved 1.194 Exaflop/s using 8,699,904 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and the Slingshot-11 interconnect.
Fugaku, the No. 2 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores which allowed it to achieve an HPL benchmark score of 442 Pflop/s.
The LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is No. 3 with a performance of 309.1 Pflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.
The No. 4 system, Leonardo, is installed at another EuroHPC site, CINECA in Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 40 GB as accelerators, and quad-rail NVIDIA HDR100 InfiniBand as interconnect. It achieved a Linpack performance of 238.7 Pflop/s.
Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, is again listed at the No. 5 spot worldwide with a performance of 148.8 Pflop/s on the HPL benchmark, which is used to rank the TOP500 list. Summit has 4,356 nodes, each housing two 22-core POWER9 CPUs and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked with a Mellanox dual-rail EDR InfiniBand network.
Sierra, a system at the Lawrence Livermore National Laboratory, CA, USA, is at No. 6. Its architecture is very similar to that of Summit, the No. 5 system. It is built with 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra achieved 94.6 Pflop/s.
Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is listed at the No. 7 position with 93 Pflop/s.
Perlmutter, at No. 8, is based on the HPE Cray “Shasta” platform. It is a heterogeneous system with AMD EPYC-based nodes and 1,536 NVIDIA A100-accelerated nodes. Perlmutter achieved 64.6 Pflop/s.
Selene, now at No. 9, is an NVIDIA DGX A100 SuperPOD installed in-house at NVIDIA in the USA. The system is based on AMD EPYC processors with NVIDIA A100 GPUs for acceleration and a Mellanox HDR InfiniBand network; it achieved 63.4 Pflop/s.
Tianhe-2A (Milky Way-2A), a system developed by China’s National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China, is now listed as the No. 10 system with 61.4 Pflop/s.
A total of 184 systems on the list use accelerator/co-processor technology, up from 179 six months ago. Of these, 73 use NVIDIA Ampere chips and 76 use NVIDIA Volta.
Intel continues to provide the processors for the largest share (72.0 percent) of TOP500 systems, down from 75.8 percent six months ago. AMD processors are used in 121 systems (24.2 percent), up from 20.2 percent six months ago.
The entry level to the list moved up to the 1.87 Pflop/s mark on the Linpack benchmark.
The last system on the newest list was listed at position 456 in the previous TOP500.
The total combined performance of all 500 systems exceeds the Exaflop barrier at 5.24 Eflop/s, up from 4.86 Eflop/s six months ago.
The entry point for the TOP100 increased to 6.32 Pflop/s.
The average concurrency level in the TOP500 is 190,919 cores per system, up from 189,586 six months ago.
The system claiming the No. 1 spot on the GREEN500 is Henri at the Flatiron Institute in the US. With 5,920 total cores and an HPL benchmark score of 2.04 Pflop/s, Henri is a Lenovo ThinkSystem SR670 with Intel Xeon Platinum processors and NVIDIA H100 GPUs.
In second place is the Frontier Test & Development System (TDS) at ORNL in the US. With 120,832 total cores and an HPL benchmark score of 19.2 Pflop/s, the Frontier TDS machine is essentially a single rack identical to those in the actual Frontier system.
The No. 3 spot was taken by the Adastra system, an HPE Cray EX235a system with AMD EPYC processors and AMD Instinct MI250X accelerators.
Supercomputer Fugaku remains the leader on the HPCG benchmark with 16 Pflop/s.
The DOE system Frontier at ORNL claims the second position with 14.05 Pflop/s.
The third position was captured by the upgraded LUMI system with 3.40 Pflop/s.
On the HPL-MxP (formerly HPL-AI) benchmark, which measures performance for mixed-precision calculation, Frontier already demonstrated 9.95 Exaflop/s! The HPL-MxP benchmark seeks to highlight the use of mixed-precision computations. Traditional HPC uses 64-bit floating point computations. Today we see hardware with various levels of floating point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that much higher performance is possible by using mixed precision during the computation (see the Top 5 from the HPL-MxP benchmark below), and that with mathematical techniques such as iterative refinement, the final result can match the accuracy of a straight 64-bit computation. A minimal sketch of this idea follows the table below.
Rank HPL-MxP | Site | Computer | Cores | HPL-MxP (Eflop/s) | TOP500 Rank | HPL Rmax (Eflop/s) | Speedup of HPL-MxP over HPL
1 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a | 8,699,904 | 9.950 | 1 | 1.194 | 8.3
2 | EuroHPC/CSC, Finland | LUMI, HPE Cray EX235a | 2,174,976 | 2.168 | 3 | 0.3091 | 7.0
3 | RIKEN, Japan | Fugaku, Fujitsu A64FX | 7,630,848 | 2.000 | 2 | 0.4420 | 4.5
4 | EuroHPC/CINECA, Italy | Leonardo, Bull Sequana XH2000 | 1,824,768 | 1.842 | 4 | 0.2387 | 7.7
5 | DOE/SC/ORNL, USA | Summit, IBM AC922 POWER9 | 2,414,592 | 1.411 | 5 | 0.1486 | 9.5
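To make the mixed-precision idea concrete, here is a minimal NumPy sketch of iterative refinement. It is an illustration under simplifying assumptions, not HPL-MxP itself: it uses FP32 where GPU implementations typically use FP16, it re-solves in low precision on every refinement step rather than reusing the LU factors, and the test matrix is deliberately constructed to be well conditioned.

```python
import numpy as np

def mixed_precision_solve(A64, b64, iters=5):
    # Low-precision stage: factor and solve in FP32
    # (production HPL-MxP runs use FP16/BF16 on GPU tensor cores).
    A32 = A64.astype(np.float32)
    x = np.linalg.solve(A32, b64.astype(np.float32)).astype(np.float64)
    # Refinement stage: residuals and updates carried in FP64 recover
    # full 64-bit accuracy from the cheap low-precision solves.
    for _ in range(iters):
        r = b64 - A64 @ x                               # FP64 residual
        d = np.linalg.solve(A32, r.astype(np.float32))  # low-precision correction
        x += d.astype(np.float64)                       # (a real code reuses the LU factors)
    return x

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # ~1e-15: FP64-level residual
```

Because most of the arithmetic happens in the fast low-precision stage, the measured flop rate can far exceed the FP64 rate, which is exactly the speedup column in the table above.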
The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.