According to Google, its artificial intelligence supercomputer built on the TPU (Tensor Processing Unit) outperforms systems based on NVIDIA's A100 chip in both speed and energy efficiency.
Google has recently shared details about the supercomputers it uses to train AI models, claiming that its systems are faster and more energy-efficient than comparable ones from NVIDIA. Google designed a custom chip, the Tensor Processing Unit (TPU), which it uses for more than 90% of its AI training work. AI training involves feeding data through models to improve their performance at tasks such as generating images or answering questions in natural language.
Google recently published a research paper detailing the fourth generation of its Tensor Processing Unit (TPU) chip. The paper also describes how Google linked more than 4,000 of these chips into a single supercomputer, using custom optical switches to connect the individual machines.
The competition to enhance connections between machines has become critical for companies manufacturing AI supercomputers. This is because large language models, the backbone of technologies like Google’s Bard or OpenAI’s ChatGPT, have grown substantially, rendering them too large to fit on a single chip.
According to Google, its supercomputers are designed so that the connections between chips can be reconfigured on the fly. This helps the system route around failing components and lets engineers tune performance quickly and efficiently.
See For Yourself How Google’s New TPU v4 Chips Stand Against NVIDIA’s A100 Chips
In the image: The MLPerf Training v1.0 benchmark results show that Google’s TPU v4 outperforms all non-Google submissions in speed, including those from NVIDIA, regardless of system size. The comparison is normalized by overall training time, so higher bars represent better performance.
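As a rough illustration of how such a normalized, "higher is better" comparison works, the sketch below converts raw training times into normalized scores. The system names and timings are made-up placeholders, not actual MLPerf figures:

```python
# Hypothetical training times (minutes) for the same benchmark task.
# These numbers are illustrative placeholders, NOT real MLPerf results.
training_times = {
    "TPU v4 cluster": 20.0,
    "A100 cluster": 30.0,
}

# Normalize against the slowest system so that a higher score means
# faster training -- matching the "higher bars are better" convention.
slowest = max(training_times.values())
normalized = {name: slowest / t for name, t in training_times.items()}

for name, score in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}x")
```

With these placeholder numbers, the faster system scores 1.50x and the baseline 1.00x; the actual MLPerf chart applies the same idea to measured training times.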
Although Google only recently released information about its supercomputer, the system has been running internally since 2020. The comparison above, carried out in July 2021, reflects this.
What Is Google Saying?
Google reports that, for systems of comparable size, its supercomputer outperforms a system based on NVIDIA’s A100 chip, which was on the market at the same time as the fourth-generation TPU. According to Google, its supercomputer is up to 1.7 times faster and 1.9 times more energy-efficient than the A100-based system.
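To make those multipliers concrete, the sketch below applies Google's reported 1.7x speed and 1.9x efficiency figures to a hypothetical baseline training run. The baseline time and energy values are invented placeholders purely for illustration:

```python
# Multipliers reported by Google for TPU v4 vs. a comparable A100 system.
SPEEDUP = 1.7           # up to 1.7x faster
EFFICIENCY_GAIN = 1.9   # 1.9x more energy-efficient

# Hypothetical baseline figures for an A100-based training run
# (placeholders, not measured values).
baseline_hours = 100.0
baseline_energy_kwh = 500.0

tpu_hours = baseline_hours / SPEEDUP
tpu_energy_kwh = baseline_energy_kwh / EFFICIENCY_GAIN

print(f"Training time: {tpu_hours:.1f} h vs. {baseline_hours:.1f} h")
print(f"Energy used:   {tpu_energy_kwh:.1f} kWh vs. {baseline_energy_kwh:.1f} kWh")
```

Under these assumptions, a 100-hour, 500 kWh A100 run would shrink to roughly 58.8 hours and 263.2 kWh on the TPU v4 system, if Google's multipliers held for that workload.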
The company also said it did not directly compare its fourth-generation TPU with NVIDIA’s latest flagship, the H100, because the H100 came to market after Google’s chip and is built on newer technology.
For more of such latest tech news, listicles, troubleshooting guides, and tips & tricks related to Windows, Android, iOS, and macOS, follow us on Facebook, Instagram, Twitter, YouTube, and Pinterest.