Multibillion-Dollar Data Centers Are Dominating the Globe

When Sam Altman remarked a year ago that his Roman Empire is the actual Roman Empire, he meant it. Just as the Romans gradually built an empire spanning three continents and one-ninth of the Earth's circumference, Altman and his team are now establishing vast estates of their own: not agricultural land, but AI data centers.
Tech leaders like Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, and Oracle co-founder Larry Ellison are fully committed to the notion that these new facilities, packed with IT infrastructure, represent the future of the American (and likely global) economy. Data centers, however, aren't a new concept. In the early days of computing, enormous, power-hungry mainframes were housed in climate-controlled rooms and linked to terminals via coaxial cables. The consumer internet boom of the late 1990s ushered in a fresh era of infrastructure, with massive buildings springing up around Washington, DC, filled with racks of computers built to store and process data for tech companies.
By the next decade, “the cloud” had become the internet's more flexible backbone. Storage costs were falling, and companies like Amazon capitalized on the shift. While large data centers continued to grow, tech companies moved from a mix of on-site servers and rented data center racks to a patchwork of virtualized environments. (“What is the cloud?” a fairly savvy family member asked me in the mid-2010s, “and why am I paying for 17 different subscriptions to it?”)
Meanwhile, tech firms were accumulating vast amounts of data—data that users willingly shared online, in workplace environments, and through mobile applications. Companies started uncovering innovative ways to analyze and organize this “Big Data,” claiming it would transform lives. In many respects, it did, and it was clear where this was heading.
Currently, the tech sector is experiencing a surge of interest in generative AI, which demands new levels of computing power. Big Data now looks mundane; the largest data centers are purpose-built for AI. Those AI data centers need faster, more efficient chips, and chipmakers like Nvidia and AMD have been loudly proclaiming their commitment to AI. The industry has entered an extraordinary era of capital investment in AI infrastructure, a buildout large enough to help prop up US GDP growth. These huge, fast-moving deals can feel like they were sketched out at a cocktail party, fueled by gigawatts and enthusiasm, while the rest of us try to keep track of the actual contracts and finances.
OpenAI, Microsoft, Nvidia, Oracle, and SoftBank have struck some of the largest agreements. This year, an earlier supercomputing initiative between OpenAI and Microsoft, known as Stargate, grew into a major US AI infrastructure venture. (President Donald Trump called it the largest AI infrastructure project in history, a claim that may not be much of an exaggeration.) Altman, Ellison, and SoftBank CEO Masayoshi Son all joined the deal, committing $100 billion up front, with plans to invest up to $500 billion in Stargate in the years ahead. The facilities would run on Nvidia GPUs. Then, in July, OpenAI and Oracle announced a further Stargate partnership, with SoftBank notably absent, centered on 4.5 gigawatts of capacity and an estimated 100,000 jobs.
