Semiconductor firm Marvell Technology, Inc. (NASDAQ: MRVL) has entered into a strategic collaboration with AI computing firm NVIDIA to deliver customized solutions for next-generation artificial intelligence infrastructure. The partnership integrates Marvell’s custom cloud platform silicon with NVIDIA’s new NVLink Fusion technology, with the aim of empowering hyperscale cloud providers to build highly scalable and flexible AI data centers.
NVIDIA’s NVLink Fusion is a cutting-edge interconnect solution designed to integrate custom accelerator (XPU) silicon with NVIDIA’s networking and rack-scale architecture. The core of NVLink Fusion is a high-performance chiplet that enables up to 1.8 terabytes per second of bidirectional bandwidth, allowing rapid data movement between custom chips and NVIDIA’s ecosystem of AI infrastructure.
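To put the quoted 1.8 TB/s of bidirectional bandwidth in perspective, the back-of-envelope sketch below estimates how quickly a large model's weights could be streamed across such a link. The model size, numeric precision, and even per-direction split are illustrative assumptions for this example only, not figures from Marvell or NVIDIA.

```python
# Illustrative only: rough transfer-time estimate at NVLink Fusion's quoted
# 1.8 TB/s of bidirectional bandwidth. Model size and bandwidth split are
# assumptions made for this sketch, not vendor specifications.

BIDIRECTIONAL_TBPS = 1.8                       # quoted aggregate bandwidth, TB/s
PER_DIRECTION_TBPS = BIDIRECTIONAL_TBPS / 2    # assume an even split each way

model_params = 70e9                            # hypothetical 70B-parameter model
bytes_per_param = 2                            # FP16/BF16 weights
model_bytes = model_params * bytes_per_param   # ~140 GB of weights

seconds = model_bytes / (PER_DIRECTION_TBPS * 1e12)
print(f"~{seconds:.3f} s to stream {model_bytes / 1e9:.0f} GB one way")
# Prints roughly 0.16 s, illustrating why interconnect bandwidth is a gating
# factor when moving data between custom accelerators and the rest of an
# AI data-center fabric.
```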
By aligning with Marvell’s custom platform strategy, which emphasizes co-design with hyperscalers and leverages advanced process technologies, the collaboration offers a compelling option for companies seeking to optimize AI infrastructure for large-scale model training and inference tasks—particularly in emerging domains like agentic AI, where outputs are driven by reasoning as well as learned data.
“Marvell and NVIDIA are working together to advance AI factory integration,” said Nick Kucharewski, Senior Vice President and General Manager of Marvell’s Cloud Platform Business Unit. “Through this collaboration, we offer customers the flexibility to rapidly deploy scalable AI infrastructure with the bandwidth, performance, and reliability required to support advanced AI models.”
Marvell’s custom silicon capabilities span a vast range of technologies critical for high-performance computing. These include electrical and optical SerDes, advanced packaging, die-to-die interconnects, silicon photonics, system-on-chip fabrics, PCIe Gen 7 interfaces, and co-packaged optics—all of which can now be tailored to work seamlessly with NVIDIA’s NVLink Fusion system.
Shar Narasimhan, Director of Accelerated Computing at NVIDIA, underscored the broader implications of the collaboration: “The computing landscape is being reshaped as AI is no longer just an application—it’s becoming foundational to modern data centers. NVLink Fusion extends NVIDIA’s open platform to partners like Marvell, enabling hyperscalers to scale out AI factories to millions of GPUs, using custom silicon and our end-to-end networking.”
The joint offering enables cloud providers to integrate proprietary accelerators into NVIDIA’s robust AI ecosystem while preserving their unique architectural investments. This reduces time to deployment and enhances performance optimization at scale—an increasingly critical factor as generative AI and autonomous systems continue to demand massive compute and interconnect capabilities.