What Hyperscale Tenants Actually Require: A Developer's Checklist
The specifications hyperscale tenants demand are non-negotiable — and most development teams underestimate how many sites fail to qualify.
Hyperscale tenants — the Microsofts, Amazons, Googles, and Metas of the world — have elevated the minimum bar for data center development to a level that disqualifies most candidate sites before a formal RFP is issued. Understanding these requirements is not optional for developers targeting this market. It is the prerequisite.
This is a practitioner-level checklist for development teams evaluating sites against hyperscale criteria.
Power: The First and Most Decisive Filter
Hyperscale requirements start at power and often end there.
Most hyperscale leases begin at 20–50 MW per building, with campus-level commitments frequently exceeding 200 MW. The largest hyperscaler campuses — particularly those tied to AI training workloads — are now scoping 500 MW and above across multiple phases.
What developers need to confirm before any site goes under contract:
Utility reserve margin at the nearest substation or transmission node. Utilities in constrained markets (Northern Virginia, Phoenix, Chicago, Silicon Valley) often cannot commit delivery timelines for 50+ MW sites without a multi-year queue.
Interconnection queue position for any new substation development. The queue can add 3–6 years of lead time in regulated markets. Sites that require new transmission infrastructure are effectively off the table for near-term hyperscale delivery.
Transmission voltage. Hyperscalers target 230 kV or higher for large deployments. 138 kV delivery is workable for sub-100 MW facilities but adds conversion cost.
Power redundancy topology. The minimum standard is N+1 utility feed configuration. Hyperscalers with critical AI workloads increasingly specify 2N (full redundancy at every level).
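The criteria above amount to a pass/fail screen that can run before a site ever reaches formal diligence. The sketch below is a hypothetical first-pass filter; the field names, thresholds, and 3-year queue cutoff are illustrative assumptions drawn from the ranges in this checklist, not any hyperscaler's actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class SitePower:
    reserve_margin_mw: float  # utility capacity headroom at the nearest node
    queue_years: float        # interconnection queue lead time
    transmission_kv: int      # delivery voltage
    topology: str             # "N", "N+1", or "2N"

def power_disqualifiers(site: SitePower, target_mw: float = 50.0) -> list[str]:
    """Return the reasons a site fails the first-pass power screen."""
    issues = []
    if site.reserve_margin_mw < target_mw:
        issues.append(f"reserve margin {site.reserve_margin_mw} MW below {target_mw} MW target")
    if site.queue_years > 3:
        issues.append(f"{site.queue_years}-year queue exceeds near-term delivery window")
    if site.transmission_kv < 230 and target_mw >= 100:
        issues.append(f"{site.transmission_kv} kV delivery too low for a 100+ MW deployment")
    if site.topology not in ("N+1", "2N"):
        issues.append(f"{site.topology} topology is below the N+1 minimum")
    return issues

# A 138 kV site with a 4-year queue, screened against a 50 MW requirement:
site = SitePower(reserve_margin_mw=35, queue_years=4, transmission_kv=138, topology="N+1")
print(power_disqualifiers(site))
```

A filter like this is deliberately conservative: any non-empty result means the site needs a utility conversation before contract, not after.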
PUE targets have tightened significantly. Hyperscalers now specify 1.2–1.3 PUE as a design target; anything above 1.4 is not competitive for new builds. Facilities in hot climates need air-side economization or liquid cooling systems capable of meeting this threshold year-round.
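PUE is simply total facility power divided by IT equipment power, so the 1.2–1.3 band translates directly into an overhead budget. The numbers below are illustrative, not drawn from any specific facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# A 50 MW IT load carrying 12 MW of cooling and electrical overhead:
print(round(pue(62_000, 50_000), 2))  # 1.24 -- inside the 1.2-1.3 design band
# The same IT load carrying 22 MW of overhead:
print(round(pue(72_000, 50_000), 2))  # 1.44 -- not competitive for new builds
```

Read the other way: at a 1.25 design PUE, every 50 MW of IT load obliges the developer to secure roughly 62.5 MW of utility capacity.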
Cooling: From Air to Liquid
The shift toward GPU-dense AI training clusters has accelerated the adoption of direct liquid cooling (DLC) and immersion cooling. A hyperscale tenant specifying AI training workloads will expect:
Rack power density of 30–100+ kW per rack depending on GPU generation. Traditional air-cooled rows top out around 10–15 kW per rack.
Rear-door heat exchangers or in-row cooling as the baseline specification for high-density zones.
On-site cooling infrastructure capable of handling partial load failures without relying on outside air economization alone.
Water consumption disclosures upfront. Major hyperscalers have internal sustainability mandates that impose caps on water usage effectiveness (WUE). Sites in water-stressed regions face scrutiny.
Developers who design for a 10 kW per-rack average density will lose hyperscale RFPs to competitors designing for 30+ kW. The specs published five years ago are no longer the benchmark.
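The density gap compounds quickly at zone scale. A minimal back-of-envelope comparison, assuming a hypothetical 200-rack high-density zone and per-rack figures from the ranges above:

```python
RACKS = 200  # assumed zone size, for illustration only

def zone_heat_mw(kw_per_rack: float, racks: int = RACKS) -> float:
    """Total IT heat rejection the cooling plant must absorb, in MW."""
    return kw_per_rack * racks / 1000

air_cooled = zone_heat_mw(12)   # traditional air-cooled row
gpu_dlc = zone_heat_mw(60)      # GPU cluster on direct liquid cooling
print(air_cooled, gpu_dlc)      # 2.4 MW vs 12.0 MW for the same footprint
```

Five times the heat in the same white space is why rear-door exchangers and DLC loops have to be in the base design rather than retrofitted.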
Connectivity: Fiber First, Redundancy Always
Power gets most of the attention, but connectivity failures are a separate disqualifier.
Hyperscale tenants require:
Diverse fiber entry points into the building from physically separate paths. A single conduit serving both primary and backup fiber routes will not pass due diligence.
Multiple carrier options at the site, including access to dark fiber. Tenants with global networks want route optionality — not a single-carrier dependency.
Low-latency interconnection to major internet exchanges (IX). Site selection models weight latency to end users and to peering partners.
Meet-me room (MMR) capacity sized for the anticipated carrier count. MMR design is frequently underspecified in initial build plans.
For AI inference workloads specifically, proximity to population centers starts to matter again — unlike training workloads, which are location-agnostic. Developers building for inference need to model end-user latency explicitly.
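A first-order latency model is enough for early screening: light in fiber covers roughly 200 km per millisecond, so round-trip time scales with route length plus equipment overhead. The overhead constant below is an assumption; real routes rarely follow great circles, so treat the result as a lower bound.

```python
FIBER_KM_PER_MS = 200.0  # approximate speed of light in glass

def rtt_ms(route_km: float, overhead_ms: float = 2.0) -> float:
    """Estimated round-trip time over a fiber route of the given length.

    overhead_ms is an assumed allowance for switching and routing hops.
    """
    return 2 * route_km / FIBER_KM_PER_MS + overhead_ms

print(round(rtt_ms(50), 1))    # 2.5 ms -- metro-edge site close to users
print(round(rtt_ms(1200), 1))  # 14.0 ms -- remote site suited to training, not inference
```

Training campuses can chase cheap power a thousand kilometers from users; inference sites generally cannot, and a model like this makes the tradeoff explicit per candidate site.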
Physical and Security Requirements
These are table stakes for hyperscale, but frequently overlooked in early site planning:
Floor load capacity. Minimum 250 lbs/sq ft for battery backup and UPS systems; higher for immersion tank deployments.
Floor-to-ceiling height. 14–16 feet clear in the white space; more for facilities using overhead cable management.
Setbacks and blast radius. Hyperscalers specify minimum distances between buildings and perimeter fences, adjacent structures, and utilities.
Physical security layers. Vehicle barriers, access-controlled zones segmented by tenant space, 24/7 staffed security, biometric access at minimum.
Single-tenant flexibility. Many hyperscalers will not share a building with other tenants. Multi-tenant design assumptions can kill a deal.
Permitting and Entitlement: The Hidden Constraint
Power, cooling, and connectivity get the attention. Permitting kills more hyperscale timelines than any of them.
Key considerations:
Critical facility exemptions. Some jurisdictions fast-track data centers under critical infrastructure designations; others don't. This difference can mean 12–18 months in the permitting timeline.
Environmental review triggers. Large power draws and water consumption can trigger state-level environmental impact assessments, particularly in California, New York, and the Pacific Northwest.
Local moratoriums. Several counties in Northern Virginia and parts of Arizona have imposed or threatened development moratoriums in response to community opposition. Developers need to assess political risk at the county and municipal level, not just state.
The AI Workload Shift Changes the Checklist
Traditional data center specs were designed around cloud compute workloads: moderate density, predictable power draw, minimal thermal variation. AI training and inference workloads break most of those assumptions.
Developers entering the hyperscale market now need to treat GPU-dense AI infrastructure as the design baseline, not an edge case. That means:
Designing mechanical systems for 40–100 kW rack density from day one
Sizing power infrastructure for full-campus load from the start, not phased increments
Planning for cooling water infrastructure even on sites where it is not immediately required
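Sizing power for full-campus load from the start can be sketched the same way: scale the GPU-rack IT load by the design PUE and request that capacity up front. The building count, rack count, and density below are illustrative assumptions; the 1.25 PUE ties back to the 1.2–1.3 design target earlier in this checklist.

```python
def campus_load_mw(racks: int, kw_per_rack: float, design_pue: float = 1.25) -> float:
    """Utility capacity to request up front: IT load scaled by design PUE."""
    it_mw = racks * kw_per_rack / 1000
    return it_mw * design_pue

# Four buildings x 1,000 racks at 60 kW per rack:
print(round(campus_load_mw(4_000, 60), 1))  # 300.0 MW -- the day-one utility request
```

Requesting 300 MW at day one rather than 75 MW per phase is exactly the behavior the interconnection-queue constraints in the power section force: phased requests re-enter the queue each time.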
The developers who win hyperscale commitments in 2026 and beyond are the ones who internalized these requirements two years ago. For those still calibrating, this checklist is the starting point.