Nvidia and Foxconn are building Taiwan’s largest supercomputer using Nvidia Blackwell chips.
The project, the Hon Hai Kaohsiung Super Computing Center, revealed Tuesday at Hon Hai Tech Day, will be built around Nvidia’s Blackwell graphics processing unit (GPU) architecture and feature the GB200 NVL72 platform, with a total of 64 racks and 4,608 Tensor Core GPUs.
With expected AI performance of more than 90 exaflops, the machine would easily rank as the fastest in Taiwan.
Foxconn plans to use the supercomputer, once operational, to power breakthroughs in cancer research, large language model development and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.
Foxconn’s “three-platform strategy” focuses on smart manufacturing, smart cities and electric vehicles. The new supercomputer will play a pivotal role in supporting Foxconn’s ongoing efforts in digital twins, robotic automation and smart urban infrastructure, bringing AI-assisted services to urban areas like Kaohsiung.
Construction has started on the new supercomputer, which will be housed in Kaohsiung, Taiwan. The first phase is expected to be operational by mid-2025, with full deployment targeted for 2026.
The project will integrate Nvidia technologies such as the Nvidia Omniverse and Isaac robotics platforms for AI and digital twins, helping to transform manufacturing processes.
“Powered by Nvidia’s Blackwell platform, Foxconn’s new AI supercomputer is one of the most powerful in the world, representing a significant leap forward in AI computing and efficiency,” said Foxconn vice president James Wu in a statement.
The GB200 NVL72 is a state-of-the-art data center platform optimized for AI and accelerated computing.
Each rack features 36 Nvidia Grace CPUs and 72 Nvidia Blackwell GPUs connected via Nvidia’s NVLink technology, delivering 130 TB/s of bandwidth.
Nvidia NVLink Switch allows the 72-GPU system to function as a single, unified GPU, making it well suited to training large AI models and executing real-time inference on trillion-parameter models.
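For readers who want to sanity-check the headline numbers, the sketch below simply multiplies out the per-rack figures cited in this article (64 racks, 72 Blackwell GPUs and 36 Grace CPUs per GB200 NVL72 rack). The derived Grace CPU total is plain arithmetic from those figures, not a number Foxconn or Nvidia has announced.

```python
# Back-of-the-envelope check of the cluster figures cited above.
# Per-rack counts are taken from this article's description of the
# GB200 NVL72 platform; this is not an official Nvidia spec sheet.

RACKS = 64
GPUS_PER_RACK = 72   # Blackwell GPUs per GB200 NVL72 rack
CPUS_PER_RACK = 36   # Grace CPUs per GB200 NVL72 rack

total_gpus = RACKS * GPUS_PER_RACK   # 64 * 72 = 4,608 Tensor Core GPUs
total_cpus = RACKS * CPUS_PER_RACK   # 64 * 36 = 2,304 (derived, not announced)

print(f"Total GPUs: {total_gpus:,}")  # 4,608 -- matches the figure cited above
print(f"Total CPUs: {total_cpus:,}")  # 2,304
```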
Taiwan-based Foxconn, officially known as Hon Hai Precision Industry Co., is the world’s largest electronics manufacturer, producing a wide range of products, from smartphones to servers, for the world’s top technology brands. Foxconn is building digital twins of its factories using Nvidia Omniverse, and it was also one of the first companies to use Nvidia NIM microservices in the development of domain-specific large language models (LLMs), which are embedded into a variety of internal systems and processes in its AI factories for smart manufacturing, smart electric vehicles and smart cities.