FRANKFURT, Germany; BERKELEY, Calif.; and KNOXVILLE, Tenn.—The 55th edition of the TOP500 saw some significant additions to the list, spearheaded by a new number one system from Japan. The latest rankings also reflect a steady growth in aggregate performance and power efficiency.
Data centers host many users and applications and have become a competitive advantage for research organizations and manufacturing companies. Keeping the data center intact and healthy is critical, as the operational costs of supercomputers continue to rise, driven by growing scientific computing demands and new security threats. Moreover, malicious users may exploit data center access to misuse compute resources by running prohibited applications, such as cryptocurrency mining, resulting in unexpected downtime and higher operating costs. This week, NVIDIA unveiled the Unified Fabric Manager (UFM) Cyber-AI platform, which minimizes downtime and saves OPEX in InfiniBand data centers by harnessing AI-powered analytics …
July 8, 2020 — The Practice and Experience in Advanced Research Computing (PEARC) Conference seeks nominations for individuals to serve in the role of PEARC Steering Committee Members-at-Large and PEARC 2022 Conference General Chair. Nominations will be accepted from June 15 through July 12, 2020, at 11:59 pm Central Time. Nominations can be sent to […]
STONY BROOK, NY, July 8, 2020 — The College of Engineering and Applied Sciences (CEAS) at Stony Brook University announced it has received a $1.1 million award from the National Offshore Wind Research and Development Consortium (NOWRDC). Fotis Sotiropoulos, Dean of the College of Engineering and Applied Sciences at Stony Brook University, is the lead principal […]
After more than three decades in supercomputing as a strategic marketing and communications executive, Mike Bernhardt has seen the HPC community evolve through the many phases of its existence. A “Perennial” (see below) at the annual SC industry conference, Bernhardt remains fascinated by the connection between leading-edge computation and scientific discovery. “In many ways, it’s […]
Google today introduced the Accelerator-Optimized VM (A2) instance family on Google Compute Engine, based on the NVIDIA Ampere A100 Tensor Core GPU launched in mid-May. Available in alpha and with up to 16 GPUs, A2 VMs are the first A100-based offering in a public cloud, according to Google. At its launch, Nvidia said the A100, built on the company's new Ampere architecture, delivers "the greatest generational leap ever," boosting training and inference computing performance by 20x over its predecessors.
The new top system, Fugaku, turned in a High Performance Linpack (HPL) result of 415.5 petaflops, besting the now second-place Summit system by a factor of 2.8x. Fugaku is powered by Fujitsu's 48-core A64FX SoC, making it the first number one system on the list to be powered by Arm processors. In single or further reduced precision, which is often used in machine learning and AI applications, Fugaku's peak performance is over 1,000 petaflops (1 exaflops). The new system is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.
The most energy-efficient system on the Green500 is MN-3, based on a new server from Preferred Networks. It achieved a record 21.1 gigaflops/watt during its 1.62 petaflops performance run. The system derives its superior power efficiency from the MN-Core chip, an accelerator optimized for matrix arithmetic. It is ranked number 395 on the TOP500 list.
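For scale, the Green500 metric is simply sustained Linpack performance divided by power draw, so the two published figures imply MN-3's power envelope. A quick sketch (the function name is our own, for illustration only):

```python
# Back-of-the-envelope check of the MN-3 Green500 figures:
# efficiency (gigaflops/watt) = sustained HPL performance / power draw,
# so power draw = performance / efficiency.
def power_draw_kw(perf_petaflops: float, efficiency_gflops_per_watt: float) -> float:
    """Infer total power draw in kilowatts from HPL performance and efficiency."""
    perf_gflops = perf_petaflops * 1e6          # 1 petaflop = 1e6 gigaflops
    watts = perf_gflops / efficiency_gflops_per_watt
    return watts / 1e3                          # watts -> kilowatts

# 1.62 petaflops at 21.1 gigaflops/watt implies roughly a 77 kW envelope.
print(round(power_draw_kw(1.62, 21.1), 1))      # → 76.8
```

By comparison, a leadership-class system drawing tens of megawatts makes MN-3's sub-100 kW footprint the source of its record efficiency.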
In second position is the new NVIDIA Selene supercomputer, a DGX A100 SuperPOD powered by the new A100 GPUs. It occupies position seven on the TOP500.