This is the 66th edition of the TOP500.
Here is a summary of the systems in the Top 10:
The El Capitan system at the Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500. The HPE Cray EX255a system was measured with 1.809 Exaflop/s on the HPL benchmark. El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC™ processors with 24 cores at 1.8 GHz and AMD Instinct™ MI300A accelerators. It uses the Cray Slingshot-11 network for data transfer and achieves an energy efficiency of 60.94 Gigaflops/Watt. The system also remains the leader on the HPCG benchmark ranking with 17.41 Petaflop/s.
Frontier is the No. 2 system in the TOP500. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It has achieved 1.353 Exaflop/s using 9,066,176 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot-11 interconnect.
Aurora is the No. 3 system with an HPL score of 1.012 Exaflop/s. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE). The Intel system is based on HPE Cray EX - Intel Exascale Compute Blades. It uses Intel Xeon CPU Max Series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot-11 interconnect.
JUPITER Booster is the No. 4 system. It is installed at EuroHPC/FZJ in Jülich, Germany, where it is operated by the Jülich Supercomputing Centre. It is based on Eviden’s BullSequana XH3000 direct liquid cooled architecture, which utilizes Grace Hopper Superchips. It is now fully installed and submitted a measurement of 1.000 Exaflop/s, making it the fourth Exascale system ever and the first one in Europe.
Eagle, the No. 5 system, is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561 Petaflop/s.
The No. 6 system is called HPC6 and is installed at the Eni S.p.A. center in Ferrera Erbognone in Italy. It is another HPE Cray EX235a system with 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI, AMD Instinct™ MI250X accelerators, and a Slingshot-11 interconnect. It achieved 477.9 Petaflop/s.
Fugaku, the No. 7 system, was installed in June 2020 at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, making it the oldest system in the Top 10. It has 7,630,848 cores which allowed it to achieve an HPL benchmark score of 442 Petaflop/s. It is still the second fastest system on the HPCG benchmark with 16 Petaflop/s.
The Alps system installed at the Swiss National Supercomputing Centre (CSCS) in Switzerland is at No. 8. It is an HPE Cray EX254n system with NVIDIA Grace 72C and NVIDIA GH200 Superchip and a Slingshot-11 interconnect. It achieved 434.9 Petaflop/s.
The LUMI system, another HPE Cray EX system installed at the EuroHPC center at CSC in Finland, is at No. 9 with a performance of 380 Petaflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC’s data center in Kajaani, Finland.
The No. 10 system Leonardo is installed at another EuroHPC site in CINECA, Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 40 GB as accelerators, and Quad-rail NVIDIA HDR100 Infiniband as interconnect. It achieved an HPL performance of 241.2 Petaflop/s.
A total of 255 systems on the list are using accelerator/co-processor technology, up from 237 six months ago. 94 of these use NVIDIA Hopper chips, 63 use NVIDIA Ampere, and 29 use NVIDIA Volta technology.
Intel continues to provide the processors for the largest share (57.20 percent) of TOP500 systems, down from 58.80 percent six months ago. AMD processors are used in 177 systems (35.40 percent) on the current list, up from 34.60 percent six months ago.
The entry level to the list moved up to the 2.58 Pflop/s mark on the Linpack benchmark.
The last system on the newest list was listed at position 472 in the previous TOP500.
The total combined performance of all 500 systems remains well above the Exaflop barrier at 15.01 Exaflop/s (Eflop/s), up from 13.84 Eflop/s six months ago.
The entry point for the TOP100 increased to 18.08 Pflop/s.
The average concurrency level in the TOP500 is 271,409 cores per system, down from 275,414 six months ago.
In the Green500 the systems of the TOP500 are ranked by how much computational performance they deliver on the HPL benchmark per Watt of electrical power consumed. This electrical power efficiency is measured in Gigaflops/Watt. This ranking is not driven by the size of a system but by its technology, and the ranking order therefore looks very different from the TOP500. The computational efficiency of a system tends to decrease slightly with system size, which, among technologically identical systems, gives smaller systems an advantage.
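As a worked example of the metric (a minimal Python sketch, not part of the official Green500 tooling), the efficiency figure is simply the HPL Rmax, converted to Gigaflop/s, divided by the measured power draw. The El Capitan numbers below are taken from this list; the roughly 29.7 MW power figure is just what those two published numbers imply.

```python
def gflops_per_watt(rmax_eflops: float, power_watts: float) -> float:
    """Green500 efficiency metric: HPL Rmax (in Gigaflop/s) per Watt of power."""
    rmax_gflops = rmax_eflops * 1e9   # 1 Exaflop/s = 10^9 Gigaflop/s
    return rmax_gflops / power_watts

# El Capitan: 1.809 Eflop/s at 60.94 Gigaflops/Watt implies a power draw of ~29.7 MW.
implied_power_watts = 1.809e9 / 60.94
print(f"{gflops_per_watt(1.809, implied_power_watts):.2f} GFlops/Watt")  # -> 60.94 GFlops/Watt
```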
This edition of the Green500 sees three systems with identical architecture in the top 3 positions. They are all BullSequana XH3000 systems which use the Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, and Quad-Rail NVIDIA InfiniBand NDR200.
KAIROS is a new BullSequana XH3000 system at the CALMIP / University of Toulouse – CNRS center. It became the new No. 1 on the Green500 with an energy efficiency of 73.28 GigaFlops/Watt. It achieved 3.05 PetaFlop/s on the HPL benchmark. The system uses the Grace Hopper Superchip 72C 3GHz, NVIDIA GH200 Superchip, and Quad-Rail NVIDIA InfiniBand NDR200.
In second place is the ROMEO-2025 system at the ROMEO HPC Center - Champagne-Ardenne in France. With 47,328 total cores, it achieved an HPL result of 9.863 PFlop/s and an energy efficiency of 70.9 GFlops/Watt. The architecture of this system is identical to that of the No. 1 system KAIROS; it is substantially larger than KAIROS, which results in its energy efficiency being slightly lower.
The No. 3 spot was taken by the Levante GPU extension system at the DKRZ - Deutsches Klimarechenzentrum in Germany. It also has an architecture identical to the No. 1 and No. 2 systems and achieved 6.747 PFlop/s HPL performance with an efficiency of 69.43 GFlops/Watt.
Here are the top 10 of the Green500 ranking:
The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submissions of all data through a single webpage at http://top500.org/submit.
The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) Benchmark results.
El Capitan remains the leader on the HPCG ranking with 17.41 HPCG-Petaflop/s.
Supercomputer Fugaku remains in second place on the HPCG benchmark, with 16 HPCG-Petaflop/s.
Frontier at ORNL is in third position with 14.05 HPCG-Petaflop/s.
Aurora is now at the fourth position with 5.6 HPCG-Petaflop/s.
The new JUPITER Booster system has not submitted an HPCG result yet.
On the HPL-MxP benchmark, which measures performance for mixed-precision calculations, the El Capitan system achieved 16.7 Exaflop/s and remains the No. 1 system.
The HPL-MxP benchmark highlights the use of mixed-precision computations. Traditional HPC uses 64-bit floating-point computations. Today, we see hardware with various levels of floating-point precision, 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that by using mixed-precision during the computation, much higher performance is possible (see the Top 5 from the HPL-MxP benchmark), and using mathematical techniques, the same accuracy can be computed with the mixed-precision technique when compared with straight 64-bit precision.
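To illustrate the idea behind such solvers (a minimal NumPy/SciPy sketch under simplifying assumptions, not the actual HPL-MxP implementation), the expensive matrix factorization can be done in single precision and the solution then polished back to double-precision accuracy with a few cheap residual corrections:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, refinements=5):
    """Solve Ax = b by factoring in float32 and refining the result in float64."""
    lu, piv = lu_factor(A.astype(np.float32))                   # costly O(n^3) step, low precision
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(refinements):
        r = b - A @ x                                            # residual in full float64
        d = lu_solve((lu, piv), r.astype(np.float32))            # cheap correction solve
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # close to double-precision round-off
```

For a well-conditioned matrix, the refined solution reaches essentially the same accuracy as a straight float64 solve, while the dominant O(n³) work is done in the faster, lower precision.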
This year’s winner of the HPL-MxP category is the El Capitan system with 16.7 Exaflop/s.
Aurora is in second place with an 11.6 Exaflop/s score on the HPL-MxP benchmark.
Frontier remains in third place with a score of 11.4 Exaflop/s.
The Softbank system achieved an impressive 24.4x speedup over HPL on the HPL-MxP benchmark.
| Rank | Site | Computer | Cores | HPL Rmax (Eflop/s) | TOP500 Rank | HPL-MxP (Eflop/s) | Speedup |
|---|---|---|---|---|---|---|---|
| 1 | DOE/SC/LLNL, USA | El Capitan, HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8 GHz, AMD Instinct MI300A, Slingshot-11 | 11,039,616 | 1.809 | 1 | 16.7 | 9.2 |
| 2 | DOE/SC/ANL, USA | Aurora, HPE Cray EX, Intel Max 9470 52C, 2.4 GHz, Intel GPU MAX, Slingshot-11 | 8,159,232 | 1.012 | 3 | 11.6 | 11.5 |
| 3 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 | 8,560,640 | 1.353 | 2 | 11.4 | 8.4 |
| 4 | EuroHPC/FZJ, Germany | JUPITER Booster - BullSequana XH3000, GH Superchip 72C 3GHz, NVIDIA GH200 Superchip, Quad-Rail NVIDIA InfiniBand NDR200, EVIDEN | 4,801,344 | 1.0 | 4 | 6.25 | 6.3 |
| 5 | Softbank, Japan | CHIE-4 - NVIDIA DGX B200, Xeon Platinum 8570 56C 2.1GHz, NVIDIA B200 SXM 180GB, Infiniband NDR400, NVIDIA | 662,256 | 0.135 | 17 | 3.3 | 24.4 |
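For reference, the Speedup column is simply the HPL-MxP result divided by the HPL Rmax result; a small Python check using the Eflop/s values from the table above reproduces it up to rounding.

```python
# Speedup = HPL-MxP performance / HPL (FP64) Rmax, using the Eflop/s values from the table.
systems = {
    "El Capitan":      (1.809, 16.7),
    "Aurora":          (1.012, 11.6),
    "Frontier":        (1.353, 11.4),
    "JUPITER Booster": (1.000, 6.25),
    "CHIE-4":          (0.135, 3.3),
}

for name, (hpl_rmax, hpl_mxp) in systems.items():
    print(f"{name}: {hpl_mxp / hpl_rmax:.2f}x")   # e.g. El Capitan: 9.23x
```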
The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.