Sponsored Article

The Influence of HPC-ers: Setting the Standard for What’s “Cool”
Jan. 16, 2025

A look back to supercomputing at the turn of the century

When I first attended the Supercomputing (SC) conferences back in the early 2000s as an IBMer working in High Performance Computing (HPC), it was obvious this conference was intended for serious computer science researchers and industries singularly focused on pushing the boundaries of computing. Linux was still in its infancy. I vividly remember having to recompile kernels with newly released drivers every time a new server came to market, just so I could get the system to PXE boot over the network. But there was one …


The Evolution, Convergence and Cooling of AI & HPC Gear
Nov. 7, 2024

Years ago, when Artificial Intelligence (AI) began to emerge as a technology that could be harnessed as a powerful tool to change the way the world works, organizations began to kick the AI tires by exploring its potential to enhance their research or business. However, to get started with AI, neural networks needed to be created and trained on data sets, and microprocessors were needed that were ideally suited to the matrix-multiplication calculations these computationally demanding tasks require. Enter the accelerator.
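
To make that compute pattern concrete, the short Python/NumPy sketch below runs a single dense-layer forward pass; the matrix multiplication is exactly the operation accelerators are built to speed up. The shapes, variable names, and operation-count estimate are illustrative assumptions, not figures from the article.

    import numpy as np

    # Toy dense-layer forward pass. The matrix multiply (x @ w) is the kind of
    # computationally demanding kernel that GPU-style accelerators target.
    # Shapes below are illustrative assumptions, not taken from the article.
    batch, d_in, d_out = 64, 1024, 4096

    x = np.random.rand(batch, d_in).astype(np.float32)   # input activations
    w = np.random.rand(d_in, d_out).astype(np.float32)   # layer weights
    b = np.zeros(d_out, dtype=np.float32)                # bias

    y = np.maximum(x @ w + b, 0.0)   # matrix multiply plus bias, then a ReLU

    # Rough cost: about 2 * batch * d_in * d_out floating-point operations per
    # layer, which is why deep networks quickly become compute-bound on CPUs.
    flops = 2 * batch * d_in * d_out
    print(f"~{flops / 1e9:.2f} GFLOPs for one layer's forward pass")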


News Feed

DOE Launches Genesis Mission Consortium to Advance AI-Driven Science

The Department of Energy (DOE) has launched the Genesis Mission Consortium – a public-private partnership focused on advancing AI-driven scientific discovery and innovation. This fits well within the broader scope of the Genesis Mission to harness AI and high-performance computing to accelerate scientific progress and strengthen U.S. leadership in advanced technologies. This latest move brings […]


University of Michigan Explains Scope of Planned HPC Facility with Los Alamos

Feb. 11, 2026 — The University of Michigan is partnering with Los Alamos National Laboratory to develop a high-performance computational research facility that will soon provide the resources needed for U-M and collaborators to tackle some of the most challenging problems society faces. The facility will immediately increase capacity for existing U-M research on topics ranging from […]


Attending GTC? Join Us For An Exclusive Roundtable Dinner On AI Data Platforms

AI projects don’t fail because models don’t work or GPUs lack performance.


Cisco Doubles Up The Switch Bandwidth To Take On AI Scale Up And Scale Out

The modern AI datacenter – really, a data galaxy at this point, because AI processing needs have broken well beyond the bounds of a single datacenter, or even multiple datacenters in a region in a few extreme cases – has two pinch points in the network.


IBM Introduces Autonomous Flash Storage with Agentic AI

IBM (NYSE: IBM) today unveiled a new generation of IBM FlashSystem, co-run by agentic AI, designed to support autonomous storage. IBM said the new products offer resilience through ....


DOE Launches Genesis Mission Consortium

The U.S. Department of Energy today announced the launch of the Genesis Mission Consortium, a public-private partnership advancing the Department’s Genesis Mission to harness artificial intelligence to support ....


TOP500 News





The List

11/2025 Highlights

On the 66th edition of the TOP500, El Capitan remains No. 1 and JUPITER Booster becomes the fourth Exascale system.

The JUPITER Booster system at the EuroHPC / Jülich Supercomputing Centre in Germany, at No. 4, submitted a new measurement of 1.000 Exaflop/s on the HPL benchmark. It is the fourth Exascale system on the TOP500 and the first one outside of the USA.

El Capitan, Frontier, and Aurora are still leading the TOP500. All three are installed at DOE laboratories in the USA.

The El Capitan system at the Lawrence Livermore National Laboratory, California, USA, remains the No. 1 system on the TOP500. The HPE Cray EX255a system was remeasured at 1.809 Exaflop/s on the HPL benchmark. LLNL also achieved 17.41 Petaflop/s on the HPCG benchmark, which makes the system No. 1 on this ranking as well.

El Capitan has 11,340,000 cores and is based on AMD 4th generation EPYC processors with 24 cores at 1.8 GHz and AMD Instinct MI300A accelerators. It uses the Cray Slingshot 11 network for data transfer and achieves an energy efficiency of 60.9 Gigaflops/watt.
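
As a back-of-the-envelope check, the published HPL result and efficiency figure together imply the machine's power draw; the minimal Python sketch below shows that arithmetic. The power it prints is derived from the two values quoted above, not a separately published measurement.

    # Implied power draw from the two El Capitan figures quoted above:
    # efficiency (Gigaflops/watt) = HPL performance / power,
    # so power = HPL performance / efficiency.
    hpl_exaflops = 1.809            # Exaflop/s on HPL
    efficiency_gf_per_watt = 60.9   # Gigaflops/watt

    hpl_gigaflops = hpl_exaflops * 1e9          # 1 Exaflop/s = 1e9 Gigaflop/s
    implied_power_watts = hpl_gigaflops / efficiency_gf_per_watt

    print(f"Implied power draw: {implied_power_watts / 1e6:.1f} MW")  # roughly 29.7 MW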


List Statistics