This is the 65th edition of the TOP500.
Here is a summary of the systems in the Top 10:
The El Capitan system at the Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500. The HPE Cray EX255a system was measured with 1.742 Exaflop/s on the HPL benchmark. El Capitan has 11,039,616 cores and is based on AMD 4th generation EPYC™ processors with 24 cores at 1.8 GHz and AMD Instinct™ MI300A accelerators. It uses the HPE Slingshot interconnect for data transfer and achieves an energy efficiency of 58.9 Gigaflops/watt. The system also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes it the new leader in that ranking as well.
Frontier is the No. 2 system in the TOP500. This HPE Cray EX system was the first US system with a performance exceeding one Exaflop/s. It is installed at the Oak Ridge National Laboratory (ORNL) in Tennessee, USA, where it is operated for the Department of Energy (DOE). It achieved 1.353 Exaflop/s using 8,699,904 cores. The HPE Cray EX architecture combines 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI with AMD Instinct™ MI250X accelerators and a Slingshot interconnect.
Aurora is currently the No. 3 with a HPL score of 1.012 Exaflop/s. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE). This new Intel system is based on HPE Cray EX - Intel Exascale Compute Blades. It uses Intel Xeon CPU Max Series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot interconnect.
JUPITER Booster is the new No. 4 system. It is installed at EuroHPC/FZJ in Jülich, Germany, where it is operated by the Jülich Supercomputing Centre. It is based on Eviden's BullSequana XH3000 direct liquid cooled architecture, which utilizes Grace Hopper Superchips. It is currently being commissioned and achieved a preliminary HPL value of 793.4 Petaflop/s on a partial system.
Eagle, the No. 5 system, is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators and achieved an HPL score of 561 Petaflop/s.
The No. 6 system is called HPC6 and is installed at the Eni S.p.A. center in Ferrera Erbognone, Italy. It is another HPE Cray EX235a system with 3rd Gen AMD EPYC™ CPUs optimized for HPC and AI, AMD Instinct™ MI250X accelerators, and a Slingshot interconnect. It achieved 477.9 Petaflop/s.
Fugaku, the No. 7 system, is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan. It has 7,630,848 cores which allowed it to achieve an HPL benchmark score of 442 Petaflop/s. It is now the second fastest system on the HPCG benchmark with 16 Petaflop/s.
The Alps system installed at the Swiss National Supercomputing Centre (CSCS) in Switzerland is now at No. 8. It is an HPE Cray EX254n system with NVIDIA Grace 72C and NVIDIA GH200 Superchip and a Slingshot interconnect. It achieved 434.9 Petaflop/s.
The LUMI system, another HPE Cray EX system, installed at the EuroHPC center at CSC in Finland, is at No. 9 with a performance of 380 Petaflop/s. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range Exascale supercomputers for processing big data. One of the pan-European pre-Exascale supercomputers, LUMI, is located in CSC's data center in Kajaani, Finland.
The No. 10 system Leonardo is installed at another EuroHPC site in CINECA, Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 40 GB as accelerators, and Quad-rail NVIDIA HDR100 Infiniband as interconnect. It achieved a HPL performance of 241.2 Petaflop/s.
A total of 237 systems on the list are using accelerator/co-processor technology, up from 210 six months ago. 82 of these use NVIDIA Hopper chips, 68 use NVIDIA Ampere, and 27 use NVIDIA Volta.
Intel continues to provide the processors for the largest share (58.80 percent) of TOP500 systems, down from 61.80 percent six months ago. 173 (34.60 percent) of the systems in the current list use AMD processors, up from 32.40 percent six months ago.
The entry level to the list moved up to the 2.44 Pflop/s mark on the Linpack benchmark.
The last system on the newest list was listed at position 456 in the previous TOP500.
The total combined performance of all 500 systems exceeds the Exaflop barrier at 13.84 Exaflop/s (Eflop/s), up from 11.72 Eflop/s six months ago.
The entry point for the TOP100 increased to 16.59 Pflop/s.
The average concurrency level in the TOP500 is 275,414 cores per system, up from 257,970 six months ago.
In the Green500, the systems of the TOP500 are ranked by how much computational performance they deliver on the HPL benchmark per Watt of electrical power consumed. This electrical power efficiency is measured in Gigaflops/Watt. This ranking is driven not by the size of a system but by its technology, so the ranking order looks very different from the TOP500. The computational efficiency of a system tends to decrease slightly with system size, which among technologically identical systems gives smaller systems an advantage. Here are the top 10 of the Green500 ranking:
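The relationship between the two rankings can be sketched in a few lines of code. The efficiency formula below is the one the Green500 uses (HPL Rmax divided by power drawn); the power figures are illustrative assumptions back-derived from efficiency numbers quoted in this article, not official measurements.

```python
# Sketch: why a Green500-style ranking orders systems differently
# from a TOP500-style ranking. Entries are (name, HPL Rmax in PFlop/s,
# power in MW); the power values are illustrative assumptions.
systems = [
    ("BigSystem",   1742.0, 29.6),   # large machine, huge Rmax
    ("SmallSystem",    4.5,  0.062), # small machine, modest Rmax
]

# TOP500 order: by raw HPL performance, largest first
top500 = sorted(systems, key=lambda s: s[1], reverse=True)

def efficiency_gflops_per_watt(rmax_pflops, power_mw):
    # 1 PFlop/s = 1e6 GFlop/s; 1 MW = 1e6 W
    return (rmax_pflops * 1e6) / (power_mw * 1e6)

# Green500 order: by energy efficiency, most efficient first
green500 = sorted(systems,
                  key=lambda s: efficiency_gflops_per_watt(s[1], s[2]),
                  reverse=True)

for name, rmax, power in green500:
    print(name, round(efficiency_gflops_per_watt(rmax, power), 1), "GFlops/Watt")
```

With these numbers the large machine tops the performance ordering while the small machine tops the efficiency ordering, which is exactly the pattern seen between the TOP500 and Green500 lists.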
JEDI, the JUPITER Exascale Development Instrument, a system from EuroHPC/FZJ in Germany, once again claimed the No. 1 spot. JEDI repeated its energy efficiency rating from the last list at 72.73 GFlops/Watt while producing an HPL score of 4.5 PFlop/s. JEDI is a BullSequana XH3000 machine with NVIDIA GH200 Grace Hopper Superchips, Quad-Rail NVIDIA InfiniBand NDR200, and 19,584 total cores.
In second place is the ROMEO-2025 system at the ROMEO HPC Center - Champagne-Ardenne in France. With 47,328 total cores, it achieved an HPL result of 9.863 PFlop/s and an efficiency of 70.9 GFlops/Watt. The architecture of this system is identical to that of the No. 1 system JEDI, but as it is more than twice as large, its energy efficiency is slightly lower.
The data collection and curation of the Green500 project has been integrated with the TOP500 project. This allows submissions of all data through a single webpage at http://top500.org/submit.
The TOP500 list now includes the High-Performance Conjugate Gradient (HPCG) Benchmark results.
El Capitan is the new leader on the HPCG benchmark with 17.41 HPCG-PFlop/s.
Supercomputer Fugaku, the long-time leader, is now in second position with 16 HPCG-PFlop/s.
The DOE system Frontier at ORNL remains in the third position with 14.05 HPCG-PFlop/s.
The Aurora system is now in fourth position with 5.6 HPCG-PFlop/s.
On the HPL-MxP benchmark, which measures performance for mixed-precision calculations, the Aurora system achieved 11.6 Exaflops, narrowly ahead of Frontier at 11.4 Exaflops. This is the same situation as last time: both machines submitted new results and Aurora came out ahead for the second time.
The HPL-MxP benchmark seeks to highlight the use of mixed precision computations. Traditional HPC uses 64-bit floating point computations. Today we see hardware with various levels of floating point precision: 32-bit, 16-bit, and even 8-bit. The HPL-MxP benchmark demonstrates that by using mixed precision during the computation much higher performance is possible (see the Top 5 from the HPL-MxP benchmark), and that, by using mathematical techniques, the same accuracy can be achieved with the mixed precision technique as with straight 64-bit precision.
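The standard mathematical technique behind this is iterative refinement: do the expensive solve in low precision, then cheaply correct the answer using residuals computed in high precision. The following is a toy NumPy sketch of that idea, not the actual HPL-MxP code, which runs optimized factorizations on GPU tensor cores; 32-bit floats stand in here for the 16- and 8-bit formats used by real hardware.

```python
# Toy illustration of mixed-precision iterative refinement:
# solve A x = b in float32, then recover float64 accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Well-conditioned test matrix (strong diagonal keeps refinement stable)
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

# "Fast" solve entirely in 32-bit precision
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residual in 64-bit, correction solve in 32-bit
for _ in range(5):
    r = b - A @ x                                   # double-precision residual
    d = np.linalg.solve(A32, r.astype(np.float32))  # cheap low-precision solve
    x += d.astype(np.float64)

# Compare against a straight 64-bit solve
x64 = np.linalg.solve(A, b)
print(np.linalg.norm(x - x64) / np.linalg.norm(x64))
```

After a handful of refinement steps the mixed-precision answer matches the all-64-bit answer to near machine precision, even though the heavy linear algebra ran in the lower precision.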
This year’s winner of the HPL-MxP category is the El Capitan system with 16.7 Exaflop/s.
Aurora is in second place with an 11.6 Exaflop/s score on the HPL-MxP benchmark.
Frontier remains in third place with a score of 11.4 Exaflop/s.
Rank | Site | Computer | Cores | HPL Rmax (Eflop/s) | TOP500 Rank | HPL-MxP (Eflop/s) | Speedup
---|---|---|---|---|---|---|---
1 | DOE/SC/LLNL, USA | El Capitan, HPE Cray EX255a, AMD 4th Gen EPYC 24C 1.8 GHz, AMD Instinct MI300A, Slingshot-11 | 11,039,616 | 1.742 | 1 | 16.7 | 9.6 |
2 | DOE/SC/ANL, USA | Aurora, HPE Cray EX, Intel Max 9470 52C, 2.4 GHz, Intel GPU MAX, Slingshot-11 | 8,159,232 | 1.012 | 3 | 11.6 | 11.5 |
3 | DOE/SC/ORNL, USA | Frontier, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 | 8,560,640 | 1.353 | 2 | 11.4 | 8.4 |
4 | AIST, Japan | ABCI 3.0, HPE Cray XD670, Xeon Platinum 8558 48C 2.1GHz, NVIDIA H200 SXM5 141 GB, InfiniBand NDR200, HPE | 479,232 | 0.145 | 15 | 2.36 | 16.3 |
5 | EuroHPC/CSC, Finland | LUMI, HPE Cray EX235a, AMD Zen-3 (Milan) 64C 2GHz, AMD MI250X, Slingshot-11 | 2,752,704 | 0.380 | 9 | 2.35 | 6.2 |
6 | RIKEN Center for Computational Science, Japan | Fugaku, Fujitsu A64FX 48C 2.2GHz, Tofu D | 7,630,848 | 0.442 | 7 | 2.0 | 4.5 |
7 | EuroHPC/CINECA, Italy | Leonardo, BullSequana XH2000, Xeon Platinum 8358 32C 2.6GHz, NVIDIA A100 SXM4 40 GB, Quad-rail NVIDIA HDR100 InfiniBand | 1,824,768 | 0.241 | 10 | 1.8 | 7.6 |
8 | CII, Institute of Science, Japan | TSUBAME 4, HPE Cray XD665, AMD EPYC 9654 96C 2.4GHz, NVIDIA H100 SXM5 94 GB, Mellanox NDR200 | 172,800 | 0.025 | 46 | 0.64 | 25.0 |
9 | NVIDIA, USA | Selene, DGX SuperPOD, AMD EPYC 7742 64C 2.25 GHz, Mellanox HDR, NVIDIA A100 | 555,520 | 0.063 | 30 | 0.63 | 9.9 |
10 | DOE/SC/LBNL/NERSC, USA | Perlmutter, HPE Cray EX235n, AMD EPYC 7763 64C 2.45 GHz, Slingshot-10, NVIDIA A100 | 761,856 | 0.079 | 25 | 0.59 | 7.5 |
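The Speedup column in the table is simply the ratio of mixed-precision to standard 64-bit performance, consistent with the rows above. A minimal sketch (the function name is ours, not part of any benchmark tool):

```python
# Speedup = HPL-MxP performance / HPL Rmax, both in Eflop/s,
# rounded to one decimal as in the table above.
def mxp_speedup(hpl_mxp_eflops, hpl_rmax_eflops):
    return round(hpl_mxp_eflops / hpl_rmax_eflops, 1)

print(mxp_speedup(16.7, 1.742))  # El Capitan
print(mxp_speedup(11.6, 1.012))  # Aurora
print(mxp_speedup(11.4, 1.353))  # Frontier
```

Systems with newer tensor-core hardware (e.g. the H100/H200-based entries) show the largest ratios, since their low-precision throughput is many times their 64-bit throughput.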
The first version of what became today’s TOP500 list started as an exercise for a small conference in Germany in June 1993. Out of curiosity, the authors decided to revisit the list in November 1993 to see how things had changed. About that time they realized they might be onto something and decided to continue compiling the list, which is now a much-anticipated, much-watched and much-debated twice-yearly event.