Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to re-compile kernels with newly released drivers every time there was a new server that came to market just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a powerful tool with the potential to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. To get started with AI, however, neural networks had to be built, data sets used for training, and microprocessors were needed that could handle the matrix-multiplication calculations at the heart of these computationally demanding tasks. Enter the accelerator.
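To make the point concrete, here is a minimal sketch (not from the article; all names and sizes are illustrative) showing why matrix multiplication dominates neural-network workloads: the forward pass of a single dense layer is one large matrix multiply, whose floating-point operation count grows with the product of the layer dimensions.

```python
import numpy as np

# Illustrative sizes for one dense layer processing one batch of inputs.
rng = np.random.default_rng(0)
batch, features, hidden = 64, 1024, 4096

x = rng.standard_normal((batch, features))   # input activations
w = rng.standard_normal((features, hidden))  # layer weights
b = np.zeros(hidden)                         # bias

# The forward pass: one matrix multiply plus a bias add.
y = x @ w + b

# The multiply costs roughly 2 * batch * features * hidden FLOPs
# (one multiply and one add per accumulated term).
flops = 2 * batch * features * hidden
print(f"~{flops:,} floating-point operations for one layer, one batch")
```

Even this toy layer runs to hundreds of millions of operations per batch, which is why hardware built around fast matrix multiplication became so attractive.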


News Feed

Oracle’s Financing Primes The OpenAI Pump

Software giant Oracle has a vast installed base of enterprise customers, agglomerated over the decades, that gives it the cash flow to do many things.

Oracle’s Financing Primes The OpenAI Pump was written by Timothy Prickett Morgan at The Next Platform.

Brookhaven Unveils AI-Based Approach for Managing High-Volume Particle Physics Data

UPTON, N.Y., Feb. 2, 2026 — Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have developed a novel artificial intelligence (AI)-based method to dramatically tame the flood of data generated by particle detectors at modern accelerators. The new custom-built algorithm uses a neural network to intelligently compress collision data, adapting automatically to […]

The post Brookhaven Unveils AI-Based Approach for Managing High-Volume Particle Physics Data appeared first on HPCwire.

UW-Eau Claire’s HPC Center Fuels Student and Faculty AI Research

Feb. 2, 2026 — Fifty years after Seymour Cray unveiled his Cray-1 supercomputer in Chippewa Falls, technology advancements at the University of Wisconsin-Eau Claire are creating more extraordinary opportunities for students and faculty to conduct deep research using artificial intelligence. From improved drug screening to faster cancer detection and improved crop yields, Blugolds are producing results […]

The post UW-Eau Claire’s HPC Center Fuels Student and Faculty AI Research appeared first on HPCwire.

HPC News Bytes 20260202: Microsoft’s Inference Chip, H200 (Not H20) GPUs for China, Mega AI Data Center Deals

A happy month of February to you! The big players have dominated the HPC-AI news front of late. Here's a fast (8:44) recap of recent developments, including: Microsoft Maia 200 chip for AI inference, Nvidia H200 (not H20) ....

The post HPC News Bytes 20260202: Microsoft’s Inference Chip, H200 (Not H20) GPUs for China, Mega AI Data Center Deals appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

Gartner Takes Another Stab At Forecasting AI Spending

The market researchers at Gartner have extended their forecast out to 2027 and dropped 2024 from the view since it is now more than a year past.

Gartner Takes Another Stab At Forecasting AI Spending was written by Timothy Prickett Morgan at The Next Platform.

Report: AI Scale Pushing Enterprise Infrastructure toward Failure

NEW YORK, Jan. 29, 2026 — Cockroach Labs, the company behind CockroachDB, a cloud-agnostic distributed SQL database, today announced findings from its second annual survey, “The State of AI Infrastructure 2026: Can Systems Withstand AI Scale?” The report reveals a growing concern that AI use is starting to overwhelm the traditional IT systems meant to support it. As […]

The post Report: AI Scale Pushing Enterprise Infrastructure toward Failure appeared first on Inside HPC & AI News | High-Performance Computing & Artificial Intelligence.

TOP500 News





The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, at No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first outside of the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at the Lawrence Livermore National Laboratory, California, USA remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. LLNL also achieved 17.41 Petaflop/s on the HPCG benchmark, making El Capitan No. 1 on that ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
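The reported performance and efficiency figures imply a total power draw, which is easy to check. A back-of-the-envelope sketch (an estimate derived from the numbers above, not an official power figure):

```python
# Power implied by the reported HPL result and energy efficiency:
# power (W) = performance (flop/s) / efficiency (flop/s per watt).
rmax_flops = 1.809e18   # El Capitan HPL result, flop/s
efficiency = 60.9e9     # 60.9 Gigaflops/watt, in flop/s per watt

power_watts = rmax_flops / efficiency
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")
```

This works out to roughly 30 MW, in line with what an Exascale-class machine is generally understood to consume.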

read more »

List Statistics