The IBM ASCI White system located at Lawrence Livermore National Laboratory took the No. 1 position in November 2000 with 4.9 teraflop/s Linpack performance. The system was built from 512 nodes, each containing 16 IBM Power3 processors sharing memory. This type of hierarchical architecture, with distributed memory across nodes and shared memory within each node, was becoming increasingly common in HPC systems.
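Such a hierarchy is typically programmed with a hybrid model: message passing between nodes and threading within a node. The sketch below is a minimal, hypothetical hybrid MPI + OpenMP "hello" in C, not taken from any ASCI White code, showing how one MPI rank per node can spawn threads that share the node's memory; rank and thread counts are assumed to be set by the job launcher.

```c
/* Minimal hybrid MPI + OpenMP sketch (illustrative assumption, not from the
 * original text): one MPI rank per node, OpenMP threads sharing memory
 * within the node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request funneled thread support so OpenMP regions can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        /* Threads within a rank share the node's address space. */
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("rank %d of %d, thread %d of %d\n", rank, nranks, tid, nthreads);
    }

    MPI_Finalize();
    return 0;
}
```

In such a setup the launcher would typically place one rank per node (e.g. `mpirun -np <nodes>`) and set `OMP_NUM_THREADS` to the number of processors per node, 16 in the ASCI White case.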
By June 2001, Linpack performance on ASCI White had improved to 7.2 teraflop/s, keeping it in the No. 1 position for two more lists.
Located in a classified area at Lawrence Livermore National Laboratory, ASCI White was housed in over two hundred cabinets, covered a space the size of two basketball courts, and weighed 106 tons. It contained 6 TB of memory and more than 160 TB of IBM TotalStorage 7133 Serial Disk System capacity.
| Rank | System |
|------|--------|
| 1 | Titan - Cray XK7, Opteron 6274 16C 2.200 GHz, Cray Gemini interconnect, NVIDIA K20x |
| 2 | Sequoia - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom |
| 3 | K computer, SPARC64 VIIIfx 2.0 GHz, Tofu interconnect |
| 4 | Mira - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom |
| 5 | JUQUEEN - BlueGene/Q, Power BQC 16C 1.600 GHz, Custom Interconnect |
| 6 | SuperMUC - iDataPlex DX360M4, Xeon E5-2680 8C 2.70 GHz, Infiniband FDR |
| 7 | Stampede - PowerEdge C8220, Xeon E5-2680 8C 2.700 GHz, Infiniband FDR, Intel Xeon Phi |
| 8 | Tianhe-1A - NUDT YH MPP, Xeon X5670 6C 2.93 GHz, NVIDIA 2050 |
| 9 | Fermi - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom |
| 10 | DARPA Trial Subset - Power 775, POWER7 8C 3.836 GHz, Custom Interconnect |