Broadcom Inc on Tuesday released a new chip for wiring together supercomputers for artificial intelligence (AI) work using networking technology that is already in wide use.
Broadcom is a major supplier of chips for Ethernet switches, which are the primary way computers inside conventional data centres are connected to one another.
But the rise of AI applications such as OpenAI's ChatGPT and Alphabet Inc's Bard has presented new challenges for the networks inside data centres. To respond to questions with human-like answers, such systems must be trained on huge amounts of data.
That job is far too big for any single chip to handle. Instead, it must be split across thousands of chips called graphics processing units (GPUs), which then have to function like one giant computer, working on the task for weeks or even months at a time. That makes the speed at which the individual chips can communicate with one another important.
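To illustrate why that communication speed matters (an illustration, not something drawn from Broadcom's announcement): in data-parallel training, each GPU computes gradients on its own slice of the data, and those gradients must be averaged across all chips at every training step. The Python sketch below simulates that "all-reduce" exchange for a handful of workers; the worker count, gradient size, and link bandwidth are hypothetical numbers chosen only to show how the volume of data moved per step depends on the network.

```python
import numpy as np

# Toy illustration of the gradient "all-reduce" step in data-parallel training.
# Worker count, gradient size, and link bandwidth are hypothetical values used
# only to show why chip-to-chip communication speed matters.
NUM_WORKERS = 8          # real AI clusters use thousands of GPUs
GRAD_SIZE = 1_000_000    # gradient elements per step (real models: billions)
LINK_GBPS = 400          # assumed per-link bandwidth in gigabits per second

rng = np.random.default_rng(0)

# Each worker computes gradients on its own shard of the training data.
local_grads = [rng.standard_normal(GRAD_SIZE) for _ in range(NUM_WORKERS)]

# All-reduce: every worker must end up with the average of everyone's gradients.
averaged = np.mean(local_grads, axis=0)

# Rough lower bound on communication time for a ring all-reduce:
# each worker sends and receives about 2 * (N - 1) / N of the gradient bytes.
bytes_moved = 2 * (NUM_WORKERS - 1) / NUM_WORKERS * GRAD_SIZE * 4  # float32
seconds = bytes_moved * 8 / (LINK_GBPS * 1e9)
print(f"average gradient norm: {np.linalg.norm(averaged):.2f}")
print(f"per-step communication time at {LINK_GBPS} Gb/s: {seconds * 1e3:.3f} ms")
```

Because this exchange repeats at every step over weeks of training, a slower interconnect directly stretches the total time a cluster needs to finish the job.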
The new chip, called Jericho3-AI, can connect up to 32,000 GPU chips together. It will compete with another supercomputer networking technology called InfiniBand.
The biggest maker of InfiniBand gear is now Nvidia Corp, which purchased InfiniBand leader Mellanox for $6.9 billion in 2019.
Nvidia is also the market leader in GPUs. While systems that pair Nvidia GPUs with Mellanox networking are among the fastest supercomputers in the world, many companies are reluctant to give up Ethernet, which is sold by a variety of vendors, and buy both their GPUs and networking gear from the same supplier, said Ram Velaga, senior vice-president and general manager of Broadcom's core switching group.
"Ethernet, you can get it from multiple vendors - there's a lot of competition," Velaga said. "If we don't come out with the best Ethernet switch, somebody else will. InfiniBand is a proprietary, single-source, vertically integrated kind of a solution."