The Sun Microsystems-built Tsubame 1.0 began operation in 2006, achieving 85 TFLOPS of performance; it was the most powerful supercomputer in Japan and Asia at the time.[1][2][3] The system consisted of 655 InfiniBand-connected nodes, each with eight dual-core AMD Opteron 880 and 885 CPUs and 32 GB of memory.[4][5] Tsubame 1.0 also included 600 ClearSpeed X620 Advance accelerator cards.[6]
Tsubame 1.2
In 2008, Tsubame was upgraded with 170 Nvidia Tesla S1070 rack-mounted server units, adding a total of 680 Tesla T10 GPU processors for GPGPU computing.[1] This increased performance to 170 TFLOPS, making it at the time the second most powerful supercomputer in Japan and the 29th in the world.
Tsubame 2.0
Tsubame 2.0 was built in 2010 by HP and NEC as a replacement for Tsubame 1.0.[2][7] With a peak of 2,288 TFLOPS, it was ranked 5th in the world in June 2011.[8][9] It had 1,400 nodes using six-core Xeon 5600 and eight-core Xeon 7500 processors. The system also included 4,200 Nvidia Tesla M2050 GPGPU compute modules. In total the system had 80.6 TB of DRAM, in addition to 12.7 TB of GDDR memory on the GPU devices.[10] Tsubame 2.0 ranked 4th in the world and 1st in Japan on the November 2010 TOP500 list, the system's highest ranking to that point.[3]
Tsubame 2.5
Tsubame 2.0 was further upgraded to Tsubame 2.5 in 2014, replacing all of the Nvidia Tesla M2050 GPGPU compute modules with Kepler-based Nvidia Tesla K20X compute modules.[11][12] This yielded 17.1 PFLOPS of single-precision performance.
Tsubame-KFC
Tsubame-KFC added oil-based liquid cooling to reduce power consumption.[13][14] This allowed the system to achieve the world's best power efficiency of 4.5 gigaflops per watt.[15][16][17]
Tsubame 3.0
In February 2017, Tokyo Institute of Technology announced it would add a new system, Tsubame 3.0.[18][19] It was developed with SGI, with a focus on artificial intelligence and a target of 12.2 PFLOPS of double-precision performance. The design was reported to use 2,160 Nvidia Tesla P100 GPGPU modules in addition to Intel Xeon E5-2680 v4 processors.
Tsubame 3.0 ranked 13th at 8,125 TFLOPS on the November 2017 TOP500 supercomputer list.[20] It ranked 1st on the June 2017 Green500 energy-efficiency list at 14.110 GFLOPS per watt.[21]
Tsubame 4.0
Tsubame 4.0 became operational in June 2024.[22] It was developed with HPE and consists of 240 nodes, each with two 96-core AMD EPYC 9654 processors, 768 GiB of DDR5-4800 RAM, and four Nvidia H100 GPUs. Its double-precision performance of 66.8 PFLOPS is roughly 5.5 times that of Tsubame 3.0.[23][22] On the November 2024 TOP500 list, Tsubame 4.0 ranked 36th in the world and 5th in Japan.[24] Tsubame 4.0 is the first system in the series to be operated by the Institute of Science Tokyo, following its establishment in October 2024.