
A big question looms over the tech industry: how long will its massive investments in AI infrastructure actually last?
Tech giants are spending hundreds of billions of dollars on artificial intelligence infrastructure, primarily data centers and the chips that power them. It's an investment they say will lay the foundation for AI to completely reshape our economy, our jobs and even our personal interactions.
This year alone, technology companies are projected to spend $400 billion on AI-related capital expenditures.
Much of that money is going toward chips that lose value over time and will eventually need replacing, a recurring cost on company balance sheets. And for AI-dependent firms, the question of how often they must upgrade or swap out advanced chips is vital, especially given rising doubts about whether AI will generate returns large enough, and fast enough, to cover both existing outlays and future infrastructure costs.
This fuels worries about an AI bubble: concerns that the enthusiasm and spending around AI outstrip its actual value. Those concerns come as the “Magnificent Seven” tech stocks account for roughly 35% of the S&P 500’s value, raising questions about what an AI downturn could mean for the broader economy.
“The degree to which all this accumulation turns into a bubble partly hinges on the service life of these investments,” observed Tim DeStefano, an associate professor at Georgetown’s McDonough School of Business.
Chip Life Cycles
It remains unclear how long top-of-the-line graphics processing units (GPUs), the chips most commonly used to train and run AI, will stay usable.
Several tech experts told CNN they estimate AI chips can be used to train large language models for anywhere from 18 months to three years. But the chips could continue serving less demanding functions for several more years after that, they added.
By comparison, the central processing units (CPUs) in conventional data centers that don't handle AI workloads are typically replaced every five to seven years, experts said.
This is partly because training AI models puts chips under substantial strain and heat, causing them to wear out faster. Roughly 9% of GPUs fail within a year, compared with about 5% of CPUs, according to David Bader, a data science professor at the New Jersey Institute of Technology.
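As a rough back-of-the-envelope sketch (our own arithmetic, not Bader's, and it assumes the quoted annual failure rates compound independently from year to year), that gap in failure rates adds up quickly across a large fleet:

```python
# Sketch: compound the annual failure rates quoted above over a chip's working life,
# assuming failures are independent from year to year (a simplifying assumption).
def surviving_share(annual_failure_rate: float, years: int) -> float:
    """Fraction of the original fleet still working after `years` years."""
    return (1 - annual_failure_rate) ** years

for label, rate, years in [("GPUs (9%/yr over 3 yrs)", 0.09, 3),
                           ("CPUs (5%/yr over 7 yrs)", 0.05, 7)]:
    print(f"{label}: ~{surviving_share(rate, years):.0%} still running")
# GPUs (9%/yr over 3 yrs): ~75% still running
# CPUs (5%/yr over 7 yrs): ~70% still running
```

In other words, even before any question of obsolescence, a meaningful slice of an AI fleet simply breaks.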
Successive generations of AI chips also improve quickly in performance and efficiency, so it may not make financial sense to keep running AI workloads on older chips, even if they still work.
Bader estimates GPUs can be used to train AI models for 18 to 24 months. But he noted that older chips can still handle tasks such as answering AI queries, a process known as inference, for about five years, extending their value.
Nvidia, the leading provider of AI chips, states its CUDA software ecosystem allows clients to update the software of existing chips, potentially delaying the need to upgrade to the absolute newest product.
Nvidia CFO Colette Kress mentioned in the company’s latest earnings call last month that GPUs “shipped six years ago are still running at full utilization” thanks to its CUDA system.
But regardless of whether the chips last two years or six, tech companies still face the same dilemma: “Where does the revenue come from that allows you to rebuild at this scale?” questioned Mihir Kshirsagar, director of the technology policy clinic at Princeton’s Center for Information Technology Policy.
What Does This Imply for Artificial Intelligence?
The faster chips deteriorate, the more pressure companies will feel to generate AI returns to fund their replacement.
Long-term demand for AI remains unclear, especially given reports this year that most companies adopting the technology have yet to see benefits in their financial reports. Corporate clients are expected to be major revenue streams for AI firms, but these companies are still figuring out how to employ the technology to generate income or cut costs, DeStefano said.
“There is demand for generative AI from individual users… but it’s not enough for the large AI companies to recoup their investment expenditures,” he stated.
Michael Burry, the noted investor behind “The Big Short,” recently cautioned about an artificial intelligence bubble. His argument partly rests on the prediction that tech firms are overvaluing the useful lifespan of their chip investments, which could ultimately affect their profitability.
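A simple straight-line depreciation sketch shows why the lifespan assumption matters so much for reported profits (the $100 billion purchase below is purely illustrative, not a figure from the article):

```python
# Illustrative only: straight-line depreciation of a hypothetical $100B chip purchase.
# A shorter assumed useful life spreads the same cost over fewer years,
# producing a larger annual expense that comes straight out of reported profit.
CHIP_CAPEX_BILLIONS = 100  # hypothetical spending, not an actual company figure

for useful_life_years in (3, 5, 6):
    annual_depreciation = CHIP_CAPEX_BILLIONS / useful_life_years
    print(f"{useful_life_years}-year life: ~${annual_depreciation:.1f}B expensed per year")
# 3-year life: ~$33.3B expensed per year
# 5-year life: ~$20.0B expensed per year
# 6-year life: ~$16.7B expensed per year
```

If chips wear out or become obsolete faster than the depreciation schedule assumes, the earnings companies report today would overstate how profitable their AI buildouts really are.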
AI leaders are also beginning to address this issue more openly.
Microsoft CEO Satya Nadella stated in a podcast interview last month that the company has begun staggering infrastructure outlays so that its data center chips do not become obsolete simultaneously.
[Photo: Heat-exchange fans help keep computer equipment cool at the Microsoft data center in Mount Pleasant, Wisconsin, on September 18, 2025.]
And OpenAI CFO Sarah Friar expressed concern last month, saying the company’s role as a leading AI model producer depends on whether the most advanced chips last “three years, four, five years, or even longer.”
If this lifecycle is shorter, she suggested that the company might require the U.S. government to “backstop” the debt it is acquiring to fund its aggressive infrastructure commitments. (OpenAI swiftly attempted to walk back the comment, stating it is not seeking a government backstop.)
In prior market bubbles, infrastructure built during the hype and left idle after the bust was still usable years later. The fiber-optic cables laid during the late-1990s dot-com bubble, for instance, now form the backbone of the modern internet.
But the AI bubble—if real—will be a different scenario, said Paul Kedrosky, managing partner at investment firm SK Ventures. He argued that AI-based data centers will not retain the same utility over time without continuous investment in newer chips. And the ramifications could extend far beyond the balance sheets and stock values of tech giants.
“We are not only constructing these data centers, but [tech firms] are endeavoring to build power plants to support it all,” Kshirsagar remarked. “If the economics don’t materialize, very serious public questions will emerge.”