
On Thursday, Intel and Google unveiled a multi-year collaboration under which Google will continue to deploy Intel Xeon platforms for its forthcoming generation of AI and cloud computing infrastructure. These platforms will use not only upcoming Xeon processors from Intel but also custom Infrastructure Processing Units (IPUs) jointly developed by Intel and Google. The announcement comes amid an accelerating trend toward bespoke Arm-based processors for AI workloads. “Scaling AI requires more than just accelerators—it demands balanced systems. CPUs and IPUs are vital in delivering the performance, efficiency, and flexibility that modern AI workloads necessitate,” remarked Intel CEO Pat Gelsinger.
Google currently uses 5th Gen Intel Xeon and Intel Xeon 6 processors for diverse applications, ranging from orchestrating large-scale AI training to latency-sensitive inference and general-purpose computing. For instance, Intel’s latest Xeon offerings power the C4 and N4 instances. While Google’s own Axion processors, built on Armv9, give the cloud giant greater control and cost-efficiency, numerous workloads in Google’s data centers either must retain x86 backward compatibility or simply demand the peak single-thread performance of Intel Xeon chips. This dependency is projected to persist in the years ahead, forming the basis for the agreement.
In a push to make Intel Xeon platforms more potent and suitable for their hyperscale data centers, Google is also co-developing custom IPUs with Intel to offload networking, storage, and security functions from the main processors. Ultimately, the Intel Xeon platforms will integrate the x86 architecture’s high single-thread capability with custom infrastructure processing, thereby boosting their competitiveness within Google’s highly customized environments. “CPUs and infrastructure acceleration remain foundational to AI systems—from training orchestration through inference and deployment,” stated Amin Vahdat, Senior Vice President and Chief Technologist for AI Infrastructure at Google.
The announcement arrives as hyperscalers and AI platform developers intensify the deployment of their own custom processors based on the Arm instruction set. Just last week, Counterpoint Research issued a note projecting that 90% of AI servers running on custom silicon will rely on the Arm architecture, leaving x86 and RISC-V to account for roughly 10%. The Intel and Google statement clearly signals that Xeon processors, bolstered by custom IPUs, will remain central to AI and other demanding workloads for years to come, though this was widely expected.
Intel Xeon processors have powered cloud infrastructure since its inception in the 2000s and served Google’s servers even before that, so x86 in general, and Xeon specifically, are not departing Google’s data centers anytime soon. Nonetheless, this announcement emphatically affirms the continued relevance of Intel Xeon chips, and a declaration of this nature coming from Google—a company that has integrated specialized custom accelerators across nearly all its services for years—carries significant weight. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap assures us we can continue to meet the evolving demands for performance and efficiency in our workloads,” Vahdat added.