The unchallenged leader remains the IBM BlueGene/L system, installed at the DOE's Lawrence Livermore National Laboratory (LLNL). With 280.6 TFlop/s on the Linpack benchmark, it is still the only system ever to exceed the 100 TFlop/s mark.
Changes in the TOP10 showed three interesting newcomers, all outside the U.S., and one system upgrade.
The No. 3 ASC Purple system at LLNL, also built by IBM but based on pSeries 575 servers, was slightly upgraded and now reaches 75.76 TFlop/s.
The largest system in Europe is the new No. 5 at the Commissariat à l'Énergie Atomique (CEA). It is an Itanium-based NovaScale 5160 system built by the French company Bull with 8704 processors and a Quadrics interconnect.
No. 7 is now occupied by the largest system in Japan, a cluster integrated by NEC based on Sun Fire x64 servers with Opteron processors and an InfiniBand interconnect, installed at the Tokyo Institute of Technology.
For the first time in the history of the TOP500 project (since 1993), the top Japanese system is not manufactured in Japan itself.
The German Forschungszentrum Juelich (FZJ) rose to No. 8 with its new BlueGene system, which is now the second largest system in Europe. It is the largest BlueGene system outside the U.S. and the third largest overall.
The Earth Simulator, built by NEC, which held the No. 1 spot for five lists, has now slipped to No. 10 and will almost certainly be displaced from the TOP10 by November.
The entry level to the list moved up to the 2.026 TFlop/s mark on the Linpack benchmark, compared to 1.646 TFlop/s six months ago.
The last system on the newest list was listed at position 341 in the last TOP500 just six months ago. This represents a moderate turnover rate for the TOP500.
Total accumulated performance has grown to 2.79 PFlop/s, compared to 2.30 PFlop/s six months ago and 1.69 PFlop/s one year ago.
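As a quick sanity check, the aggregate figures above imply roughly 65 percent year-over-year growth in total list performance. A minimal sketch (the `growth_factor` helper and the dictionary layout are illustrative, not taken from the TOP500 data files):

```python
# Aggregate TOP500 Linpack performance, in PFlop/s, as stated in the article.
totals_pflops = {
    "June 2005": 1.69,
    "November 2005": 2.30,
    "June 2006": 2.79,
}

def growth_factor(old: float, new: float) -> float:
    """Return the multiplicative growth from an old total to a new total."""
    return new / old

# Year-over-year and six-month growth of the accumulated performance.
yearly = growth_factor(totals_pflops["June 2005"], totals_pflops["June 2006"])
half_year = growth_factor(totals_pflops["November 2005"], totals_pflops["June 2006"])

print(f"Year-over-year growth: {yearly:.2f}x")   # about 1.65x, i.e. ~65% per year
print(f"Six-month growth:      {half_year:.2f}x")  # about 1.21x
```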
The entry point for the top 100 increased in six months from 3.98 TFlop/s to 4.71 TFlop/s.
A total of 301 systems are now using Intel processors, down from 333 systems six months ago.
Intel’s EM64T processors are well received in the HPC community and increased their share from 81 systems to 118 systems.
The number of Itanium 2-based systems dropped further from 46 to 36.
The second most common processor family is the IBM Power processor (84 systems).
Already, 26 IBM systems are based on the embedded PowerPC 440 chip (25 BlueGene and 1 QCDOC).
Close behind IBM is the AMD Opteron processor with a strong growth rate (81 systems, up from 55 six months ago and 25 a year ago).
Already, 26 Opteron-based systems use the new dual-core chip versions.
Only 8 classic vector systems are still on the list, four from Cray and four from NEC.
365 systems are labeled as clusters, making this the most common architecture in the TOP500.
Just over half of the listed systems (255) use Gigabit Ethernet as the internal system interconnect technology, ahead of Myricom's Myrinet with 87 systems.
At present, IBM and Hewlett-Packard sell the bulk of systems at all performance levels of the TOP500.
IBM remains the clear leader in the TOP500 list with 48.6 percent of systems (up from 43.8 percent) and 54.3 percent of installed performance (up from 52.8 percent).
HP is second with 30.8 percent of systems (down from 33.8 percent) and 17.5 percent of performance (down from 18.8 percent).
No other manufacturer is able to capture more than 5 percent in any category.
The U.S. is clearly the leading consumer of HPC systems with 298 of the 500 systems. The European share decreased from 100 to 83 systems, while the Asian share increased from 66 to 93 systems, which puts Asia ahead of Europe once again.
The number of systems installed in the U.S. varies only slightly, with 298 systems now, 305 six months ago, and 294 one year ago.
The dominant countries in Asia are Japan with 29 systems (up from 21) and China with 28 systems (up from 17).
In Europe, the UK decreased from 41 to 35 systems, while Germany declined further from 24 to only 18 systems. One year ago Germany led with 40 systems compared to the UK's 32.
All changes are from November 2005 to June 2006.