Crews in Abilene, Texas, are turning an 875-acre site of red clay into a data-center campus that will draw 1.2 GW once all eight “AI factory” halls are online, a load conservatively equivalent to 750,000 homes. The project is part of Stargate’s headline-grabbing $500 billion build-out.
Bloomberg’s hard-hat tour captured Crusoe CEO Chase Lochmiller describing the project as part of a broader AI infrastructure boom that represents “the largest capital investment in infrastructure in human history.” The Abilene site is among the world’s largest data centers, designed to house up to 400,000 Nvidia GB200 (Blackwell) GPUs across the halls. The facility will have a total power capacity of 1.2 GW, with on-site power generation and renewable energy sources, including West Texas wind farms, supporting its operations. Meanwhile, xAI’s Colossus supercomputer in Memphis currently operates with 200,000 Nvidia GPUs, with plans to scale up to 1 million GPUs in the future.
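The headline numbers above can be sanity-checked with simple arithmetic. This sketch divides the reported 1.2 GW capacity by the GPU and home counts from the article; the interpretation that the per-GPU figure includes cooling and networking overhead, and the comparison to a rough ~1.2 kW average U.S. household draw, are assumptions, not claims from the reporting.

```python
# Illustrative arithmetic on the figures reported above.
SITE_POWER_W = 1.2e9   # 1.2 GW total campus capacity
GPUS = 400_000         # up to 400,000 GB200 GPUs
HOMES = 750_000        # homes cited in the article

# All-in power per GPU (assumed to include cooling/networking overhead)
watts_per_gpu = SITE_POWER_W / GPUS

# Per-home load implied by the "750,000 homes" comparison
implied_home_load_w = SITE_POWER_W / HOMES

print(f"{watts_per_gpu:.0f} W per GPU, all-in")          # 3000 W
print(f"{implied_home_load_w:.0f} W per home implied")   # 1600 W
```

The implied 1.6 kW per home sits above a typical ~1.2 kW U.S. average, which is consistent with the article's "conservatively" hedge.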
How Stargate is fundamentally changing data center design
Stargate throws out the standard data-center playbook, and cooling is the most visible shift. Packing tens of thousands of Nvidia Blackwell GPUs into close proximity generates heat far beyond what standard air-cooled systems can handle, so instead of open-loop cooling towers that can evaporate millions of gallons a year, the Abilene campus pipes a chilled liquid mix directly onto the GPU cold plates and circulates it in a sealed, closed-loop system. Crusoe describes the scheme as “zero-water-evaporation”: after an initial charge on the order of a million gallons, as noted in the Bloomberg video, the loop needs only minor top-ups to cover maintenance losses. Because liquid carries heat far more effectively than air, the closed-loop approach also enables much denser rack configurations.
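A rough physics estimate shows why evaporative cooling becomes untenable at this scale. The sketch below assumes, as a worst-case upper bound, that the entire 1.2 GW of heat were rejected by evaporating water (real open-loop towers reject some heat by other means, and no real site runs flat-out year-round); the only constant used is water's latent heat of vaporization, about 2.26 MJ/kg.

```python
# Upper-bound estimate of evaporative water use at 1.2 GW of heat rejection.
SITE_POWER_W = 1.2e9            # 1.2 GW of heat to reject
LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water
GALLONS_PER_KG = 0.264          # 1 kg of water is ~1 L, ~0.264 gal

kg_per_second = SITE_POWER_W / LATENT_HEAT_J_PER_KG
gallons_per_day = kg_per_second * GALLONS_PER_KG * 86_400

print(f"~{gallons_per_day / 1e6:.1f} million gallons/day (worst case)")
```

Even discounting heavily from this ceiling, open-loop evaporation at gigawatt scale would dwarf the closed loop's one-time fill of roughly a million gallons.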
Architecturally, Stargate’s campus is designed like a factory for AI computation: multiple identical buildings form a module-based “megacampus” built for easy replication, per Bloomberg. As Larry Ellison of Oracle said at a White House press conference earlier this year, “Each building is a half a million square feet.” The Abilene facility was strategically sited in West Texas, near abundant wind resources and away from population centers. This architectural consistency across the campus likely simplifies construction logistics, site preparation, and potentially even cooling-system deployment.
Grid and energy implications
Power is a central challenge for these AI megacenters. While the Abilene Stargate site alone is slated to demand about 1.2 gigawatts (GW) of electrical capacity once fully outfitted, this is just the beginning: OpenAI and its partners envision more campuses across the U.S., each supporting on the order of 1 GW or more. OpenAI has issued requests for proposals (RFPs) for sites in 16 states: Arizona, California, Florida, Louisiana, Maryland, Nevada, New York, Ohio, Oregon, Pennsylvania, Texas, Utah, Virginia, Washington, West Virginia, and Wisconsin.
ERCOT, the Texas grid operator, has taken note. Siting the first Stargate in West Texas was strategic: the region offers abundant renewable energy and available land. To ensure reliable supply despite wind’s variability, the project will combine large on-site battery storage to soak up excess wind power and discharge during lulls, a dedicated on-campus solar farm for daytime loads, and 360 MW of on-site natural gas turbines for firm backup power. New high-voltage transmission infrastructure and substation upgrades are also being planned. ERCOT’s challenge will be delivering more than 1 GW of capacity to a single location without destabilizing the local grid.
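The interplay of wind, storage, and firm backup can be illustrated with a toy hourly dispatch model. This is a sketch of the general technique, not the project's actual control scheme: the load and the 360 MW gas figure come from the reporting, while the battery capacity and inverter rating are invented for illustration, and any remaining shortfall is assumed to fall to the grid.

```python
# Toy single-hour dispatch: wind first, then battery, then gas, then grid.
LOAD_MW = 1200          # constant campus load (from the article)
GAS_MW = 360            # on-site gas turbine capacity (from the article)
BATT_CAP_MWH = 2000     # assumed battery energy capacity
BATT_POWER_MW = 500     # assumed battery power rating

def dispatch(wind_mw, soc_mwh):
    """Return (grid_import_mw, new_soc_mwh) for one hour."""
    deficit = LOAD_MW - wind_mw
    if deficit <= 0:
        # Surplus wind charges the battery, limited by power and headroom
        charge = min(-deficit, BATT_POWER_MW, BATT_CAP_MWH - soc_mwh)
        return 0, soc_mwh + charge
    # Shortfall: discharge battery first, then run gas, then import
    discharge = min(deficit, BATT_POWER_MW, soc_mwh)
    deficit -= discharge
    gas = min(deficit, GAS_MW)
    deficit -= gas
    return deficit, soc_mwh - discharge

# A windy-but-short hour: 500 MW of wind, battery half full
print(dispatch(500, 1000))   # → (0, 500): battery + gas cover the gap
```

With zero wind and an empty battery, the same function shows an 840 MW grid draw even with the gas turbines at full tilt, which is exactly the single-point-of-delivery problem ERCOT faces.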
Looking beyond Texas, the full Stargate rollout raises the question of how U.S. grids (not just ERCOT but also PJM, MISO, etc.) will handle clusters of AI data centers. There is talk of co-locating some of these campuses near existing power stations or on federal land. For instance, the U.S. Department of Energy has explored siting energy-hungry AI centers alongside new power generation on DOE lands, as Datacenter Frontier has reported. Future sites could explore small modular reactors (SMRs) and carbon-capture technologies, though both remain in development; most SMRs are unlikely to be commercially operational until the late 2020s or early 2030s.