Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back at supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to re-compile kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a powerful tool with the potential to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. To get started with AI, however, neural networks had to be created, data sets used for training, and microprocessors found that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.
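As a concrete illustration of that last point (a generic sketch, not tied to any particular vendor's hardware): the forward pass of a dense neural-network layer is essentially one matrix product, which is exactly the operation accelerators are built to parallelize.

```python
import numpy as np

# A single dense neural-network layer: output = ReLU(X @ W + b).
# The matrix product X @ W dominates the cost; accelerators exist
# because hardware can perform these multiply-accumulates massively
# in parallel.

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 512))   # a batch of 1024 inputs, 512 features each
W = rng.standard_normal((512, 256))    # layer weights
b = np.zeros(256)                      # layer biases

hidden = np.maximum(X @ W + b, 0.0)    # ReLU activation
print(hidden.shape)                    # (1024, 256)
```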


News Feed

UCSD: Compressed Data Technique Enables Pangenomics at Scale

Jan. 16, 2026 — Engineers at the University of California have developed a new data structure and compression technique that enables the field of pangenomics to handle unprecedented scales of genetic information. The team, led by UC San Diego electrical and computer engineering professor Yatish Turakhia, described their compressive pangenomics approach in Nature Genetics on Jan. […]
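The article above doesn't detail the data structure itself, but the kind of redundancy pangenome compression exploits is easy to sketch. Below is a toy, purely illustrative delta-encoding of genomes against a shared reference — a classic idea, not the UCSD team's method:

```python
# Toy illustration (not the UCSD method): genomes in a pangenome are
# highly similar, so storing each one as a small set of differences
# against a shared reference collapses the data dramatically.

def delta_encode(reference: str, genome: str) -> list[tuple[int, str]]:
    """Keep only (position, base) pairs where the genome differs from
    the reference (assumes equal-length, aligned sequences)."""
    return [(i, b) for i, (r, b) in enumerate(zip(reference, genome)) if r != b]

def delta_decode(reference: str, deltas: list[tuple[int, str]]) -> str:
    seq = list(reference)
    for i, b in deltas:
        seq[i] = b
    return "".join(seq)

reference = "ACGTACGTACGT"
genome    = "ACGTACGAACGT"   # one substitution at position 7
deltas = delta_encode(reference, genome)
assert delta_decode(reference, deltas) == genome
print(deltas)  # [(7, 'A')] -- one variant instead of the whole sequence
```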


Is Nvidia Assembling The Parts For Its Next Inference Platform?

No, we did not miss the fact that Nvidia did an “acquihire” of AI accelerator and system startup and rival Groq on Christmas Eve.


TANGO/CoNGA@SC25: Dancing Toward More Sustainable Cyberinfrastructure

In early 2025, John Gustafson asked if STEM-Trek would consider collaborating with the Conference on Next-Generation Arithmetic (CoNGA) by having it join our annual pre-conference workshop ahead of the Supercomputing Conference, SC25. I enthusiastically said, “yes!” Since 2019, I’ve closely followed activity in Dr. Gustafson’s realm. It’s exciting that next-generation arithmetic has been found to process artificial intelligence/machine […]
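CoNGA's subject is number formats such as posits; as a generic illustration of why arithmetic format matters for AI/ML workloads (not posit arithmetic itself), here is a small comparison of low- and high-precision accumulation:

```python
import numpy as np

# Why number format matters: summing many small values in 16-bit
# floating point stalls once the addend falls below the accumulator's
# rounding granularity, while 64-bit retains the true total.
# (A generic precision illustration, not posit/next-gen arithmetic.)

values = np.full(10_000, 0.001, dtype=np.float16)

acc16 = np.float16(0.0)
for v in values:
    acc16 += v                 # each addition rounds to float16

exact = values.astype(np.float64).sum()
print(float(acc16), float(exact))  # float16 sum stalls well short of the true ~10.0
```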


Argonne, MIT Using Open-Source Code for Nuclear and Fusion Energy Research

The award-winning OpenMC software package is helping researchers at Argonne National Laboratory and the Massachusetts Institute of Technology develop next-generation nuclear and fusion …
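For readers unfamiliar with OpenMC: it is an open-source Monte Carlo particle transport code with a Python API. A minimal sketch of what a model definition looks like (the geometry and material values here are illustrative placeholders, not from the Argonne/MIT work):

```python
import openmc

# Illustrative material: enriched uranium oxide (placeholder values).
fuel = openmc.Material(name="fuel")
fuel.add_nuclide("U235", 0.04)
fuel.add_nuclide("U238", 0.96)
fuel.add_nuclide("O16", 2.0)
fuel.set_density("g/cm3", 10.4)

# Geometry: a bare fuel sphere with a vacuum boundary.
sphere = openmc.Sphere(r=10.0, boundary_type="vacuum")
cell = openmc.Cell(fill=fuel, region=-sphere)
geometry = openmc.Geometry([cell])

# Monte Carlo run settings: 50 batches of 1,000 particles each.
settings = openmc.Settings()
settings.batches = 50
settings.inactive = 10
settings.particles = 1000

model = openmc.Model(geometry=geometry, settings=settings)
model.run()   # requires nuclear data libraries to be configured
```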


TSMC Has No Choice But To Trust The Sunny AI Forecasts Of Its Customers

If the GenAI expansion runs out of gas, Taiwan Semiconductor Manufacturing Co., the world’s most important foundry for advanced chippery, will be the first to know.


What I Saw at the Revolution

If it’s not too late to take stock of 2025 in HPC-AI, then holding up the Supercomputing Conference as a trends test site might be a good approach. I’m no perennial* but I’ve been to about half of the 37 SCs, including all 11 since 2015. And looking back at that year’s conference, SC15, […]


TOP500 News



The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark, placing it at No. 4. It is the fourth Exascale system on the TOP500 and the first outside of the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at Lawrence Livermore National Laboratory in California, USA, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. It also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes it No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
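Those figures are internally consistent: the efficiency metric is simply HPL Rmax divided by power draw, so the quoted numbers imply El Capitan consumed roughly 30 MW during the run. A quick back-of-the-envelope check using only the numbers in this summary:

```python
# Energy efficiency (flops/W) = HPL Rmax / power draw, so the figures
# above imply the power consumed during the HPL run:

rmax_flops = 1.809e18            # 1.809 Exaflop/s
efficiency_flops_per_w = 60.9e9  # 60.9 Gigaflops/watt

power_mw = rmax_flops / efficiency_flops_per_w / 1e6
print(f"{power_mw:.1f} MW")      # ~29.7 MW
```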


List Statistics