The AI Boom Drives Innovation in Chip Networking
The surge in demand for artificial intelligence is pushing the boundaries of data transfer technology, as conventional electronic interconnects struggle to keep pace with the bandwidth requirements of modern AI workloads. This has sparked a wave of innovation focused on accelerating data throughput within and between computer systems.
Historically, networking technology was a relatively stable field, primarily concerned with efficiently switching packets of data. However, the computationally intensive nature of AI has dramatically altered this landscape. “Now, because of AI, it’s having to move fairly robust workloads, and that’s why you’re seeing innovation around speed,” explains Ben Bajarin, CEO of the research firm Creative Strategies.
Nvidia recognized this shift early on, making strategic acquisitions to bolster its networking capabilities. In 2020, the company acquired Mellanox Technologies for nearly $7 billion, gaining access to high-speed networking solutions for servers and data centers. Shortly after, Nvidia also purchased Cumulus Networks, enhancing its Linux-based software system for computer networking, betting that clustered GPUs would become considerably more powerful within data center environments.
While Nvidia focuses on vertically integrated GPU systems, Broadcom has emerged as a key player in custom chip accelerators and high-speed networking. The company collaborates with major players like Google, Meta, and OpenAI, developing chips specifically for data centers. Broadcom is also a leader in silicon photonics and is preparing to launch the Thor Ultra networking chip, designed to optimize data flow between AI systems and the broader data center infrastructure, as reported by Reuters.
Further demonstrating the industry’s focus, ARM, a semiconductor design giant, recently announced its acquisition of DreamBig for $265 million. DreamBig specializes in AI chiplets – modular circuits designed for integration into larger systems – in partnership with Samsung. ARM CEO Rene Haas highlighted the acquisition’s importance for both “scale-up and scale-out networking,” referring to efficient data transfer within and between chip clusters and racks.
An especially innovative approach is being pioneered by Lightmatter, which is developing silicon photonics to link chips together. CEO Nick Harris notes that AI’s computing power demands are doubling every three months, exceeding the pace predicted by Moore’s Law. Lightmatter’s technology uses light-based interconnects, creating a 3D stack of silicon that the company claims represents the “world’s fastest photonic engine for AI chips.” The startup has secured over $500 million in funding, reaching a valuation of $4.4 billion in the past two years, reflecting investor confidence in this novel approach to overcoming data transfer bottlenecks.
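To put that growth claim in rough perspective, the short Python sketch below compares a three-month doubling cadence against a conventional Moore’s Law doubling period of about 24 months; the 24-month figure and the two-year comparison window are illustrative assumptions, not numbers taken from the article.

```python
# Back-of-envelope comparison of compute-demand growth rates.
# Assumptions (not from the article): Moore's Law ~ doubling every 24 months;
# the AI-demand figure cited by Lightmatter's CEO is a doubling every 3 months.

HORIZON_MONTHS = 24  # compare growth over a two-year window


def growth_factor(doubling_period_months: float, horizon_months: float) -> float:
    """Multiplicative growth over `horizon_months` given a doubling period."""
    return 2 ** (horizon_months / doubling_period_months)


moore_pace = growth_factor(24, HORIZON_MONTHS)      # ~2x over two years
ai_demand_pace = growth_factor(3, HORIZON_MONTHS)   # ~256x over two years

print(f"Moore's Law pace:   {moore_pace:.0f}x over {HORIZON_MONTHS} months")
print(f"3-month doubling: {ai_demand_pace:.0f}x over {HORIZON_MONTHS} months")
```

Under those assumptions, a three-month doubling compounds to roughly 256x growth over two years versus about 2x at Moore’s Law pace, which is the gap driving the interconnect investments described above.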