Fugaku Holds Top Spot, Exascale Remains Elusive
June 28, 2021

FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.— The 57th edition of the TOP500 saw little change in the Top10. The only new entry in the Top10 is the Perlmutter system at NERSC at the DOE Lawrence Berkeley National Laboratory. The machine is based on the HPE Cray "Shasta" platform and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. Perlmutter achieved 64.6 Pflop/s, putting the supercomputer at No. 5 on the new list.


TOP500 News

GREEN500: Steady progress, but no big step toward newer technologies.
June 28, 2021

Although the Green500 showed steady progress overall, nothing indicated a big step toward newer technologies.

The system to snag the No. 1 spot on the Green500 was MN-3 from Preferred Networks in Japan. Knocked from the top of the last list by an NVIDIA DGX SuperPOD in the US, MN-3 is back to reclaim its crown. The system relies on the MN-Core chip, an accelerator optimized for matrix arithmetic, alongside a Xeon Platinum 8260M processor. MN-3 achieved a power efficiency of 29.70 gigaflops/watt and holds position 337 on the TOP500.
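The Green500 ranks systems by a single metric: the HPL Rmax divided by the average power drawn during the benchmark run. A minimal sketch of that arithmetic follows; the MN-3 run figures used here are illustrative assumptions chosen to land near the reported number, not official list values.

```python
# Green500 efficiency metric:
# efficiency (Gflops/W) = HPL Rmax (Gflop/s) / average power during the run (W)

def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Convert an Rmax in Tflop/s and a power draw in kW to Gflops/watt."""
    rmax_gflops = rmax_tflops * 1_000   # 1 Tflop/s = 1,000 Gflop/s
    power_watts = power_kw * 1_000      # 1 kW = 1,000 W
    return rmax_gflops / power_watts

# With an assumed ~1,822 Tflop/s run at ~61.3 kW average power, the result
# lands near the 29.70 Gflops/watt reported for MN-3.
print(round(gflops_per_watt(1822.0, 61.3), 2))
```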



News Feed

CRA: Best Practices on Using the Cloud for Computing Research

Nov. 3, 2021 — In July 2021, the Computing Research Association’s (CRA) newly formed Industry Committee (CRA-I) launched a series of 75-minute virtual roundtables to initiate discussion on the various areas of interest of CRA-I’s computing research industry partners. The mission of CRA-I is to convene industry partners on computing research topics of mutual interest and connect those partners […]

The post CRA: Best Practices on Using the Cloud for Computing Research appeared first on HPCwire.

Preparing for Aurora: Bringing Quantum Materials Simulation Code to Exascale Machines

Nov. 3, 2021 — As part of a series aimed at sharing best practices in preparing applications for Aurora, Argonne National Laboratory is highlighting researchers’ efforts to optimize codes to run efficiently on graphics processing units. Quantum Monte Carlo (QMC) methods are ideal candidates for the next generation of material-design tools, which target not only […]

The post Preparing for Aurora: Bringing Quantum Materials Simulation Code to Exascale Machines appeared first on HPCwire.

SciDAC: DOE Funding $30M Opportunity Using HPC for High Energy Physics Research

Today, the U.S. Department of Energy (DOE) announced $30 million for research in computation and simulation techniques and tools “to understand the universe via collaborations that enable effective use of DOE high performance computers.” Scientific Discovery through Advanced Computing (SciDAC) brings together researchers in areas of science and energy with experts in software development, applied mathematics, […]

The post SciDAC: DOE Funding $30M Opportunity Using HPC for High Energy Physics Research appeared first on insideHPC.

The Microcosm Of Global HPC In The Lone Star State

The HPC community spends a lot of time tracking the development of and production use of the flagship machines deployed by the major national and academic labs of the world.

The Microcosm Of Global HPC In The Lone Star State was written by Timothy Prickett Morgan at The Next Platform.

MIT: Forcing ML Models to Avoid Shortcuts (and Use More Data) for Better Predictions

CAMBRIDGE, Mass. — If your Uber driver takes a shortcut, you might get to your destination faster. But if a machine learning model takes a shortcut, it might fail in unexpected ways. In machine learning, a shortcut solution occurs when the model relies on a simple characteristic of a dataset to make a decision, rather […]

The post MIT: Forcing ML Models to Avoid Shortcuts (and Use More Data) for Better Predictions appeared first on insideHPC.

Startup Rips The Switch Out Of High Performance Networks

The rapid movement of data to the cloud, the sharp rise in the amount of east-west traffic and the broadening adoption of modern applications like artificial intelligence (AI) and machine learning are putting stress on traditional networking infrastructures that were designed for a different era and are struggling to meet the demands for better performance, more bandwidth and less latency.

Startup Rips The Switch Out Of High Performance Networks was written by Jeffrey Burt at The Next Platform.

Sponsored Article

PRACE Software Strategy for European Exascale Systems
Sept. 1, 2021

Building on the successful implementation of the Partnership for Advanced Computing in Europe (PRACE), the European Commission (EC) has increased its efforts to develop a world-class supercomputing ecosystem in Europe. The EC, EuroHPC Joint Undertaking (JU) and EU Member States have made significant investments in European petascale and pre-exascale infrastructure, have put exascale supercomputers on the roadmap, and are actively exploring new post-exascale architectures. The return on investment will be directly linked to the productivity of end-users in academia, in industry, and in the public sector. Key to this productivity is an ecosystem of user-oriented software: scientific applications and workflows …


The List

06/2021 Highlights

The only new entry in the Top10 is the Perlmutter system at NERSC at the DOE Lawrence Berkeley National Laboratory. It is based on the HPE Cray "Shasta" platform and is a heterogeneous system with both GPU-accelerated and CPU-only nodes. Perlmutter achieved 64.6 Pflop/s, which puts it at No. 5 on the new list.

Supercomputer Fugaku, a system based on Fujitsu's custom ARM A64FX processor, remains No. 1. It is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, the former home of the K computer. It was co-developed in close partnership by RIKEN and Fujitsu and uses Fujitsu's Tofu D interconnect to transfer data between nodes. Its HPL benchmark score of 442 Pflop/s exceeds that of the No. 2 system, Summit, by 3x. In single or further reduced precision, which is often used in machine learning and AI applications, its peak performance is above 1,000 Pflop/s (= 1 Exaflop/s); because of this, it is often introduced as the first "exascale" supercomputer. Fugaku has already demonstrated this new level of performance on the new HPL-AI benchmark, reaching 2 Exaflops. https://www.r-ccs.riken.jp/en/
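A quick sanity check of the figures above: the 3x claim and the Pflop/s-to-Exaflop/s conversion are plain arithmetic. The only number added here is Summit's June 2021 HPL Rmax of 148.8 Pflop/s.

```python
# Verify the Fugaku-vs-Summit HPL ratio cited in the list highlights.
fugaku_rmax_pflops = 442.0   # Fugaku HPL Rmax, June 2021 list
summit_rmax_pflops = 148.8   # Summit HPL Rmax, June 2021 list

ratio = fugaku_rmax_pflops / summit_rmax_pflops
print(f"Fugaku/Summit HPL ratio: {ratio:.2f}x")   # roughly 3x

# Fugaku's mixed-precision HPL-AI result of ~2 Exaflops, in Pflop/s:
hpl_ai_pflops = 2 * 1_000   # 1 Exaflop/s = 1,000 Pflop/s
print(f"HPL-AI result: {hpl_ai_pflops} Pflop/s")
```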


List Statistics