
Nvidia is expediting the industry's shift toward the next wave of data center technologies. The company plans to introduce co-packaged optics (CPO) in its Feynman architecture, slated for release in 2028, marking its first deployment of the technology.
While CPO adoption was initially anticipated closer to the early 2030s, the rapid growth of artificial intelligence workloads has prompted Nvidia to revise its roadmap and pull this timeline forward significantly.
CPO is designed to resolve a critical bottleneck in contemporary AI data centers: limits on data-transmission speed and distance between accelerators, CPUs, and the wider network infrastructure.
Replacing conventional copper interconnects with optical links enables higher throughput, lower latency, and easier cluster scaling, particularly where nodes are separated by long distances.
According to insider information, the Feynman architecture will also be Nvidia's first design to incorporate 3D die stacking, a crucial step for increasing chip density and overall performance. Intel may serve as a manufacturing and packaging partner, potentially supplying advanced technologies such as EMIB.
Furthermore, Nvidia appears to be developing a proprietary high-speed memory solution that diverges from existing market standards. The company may adopt a custom variant of HBM, such as HBM4E or HBM5, to secure additional performance gains.
Beyond the next-generation GPUs, Nvidia has also confirmed the name of its forthcoming server CPU: Rosa. The chip will succeed Vera and will be an integral part of the Feynman platform.