
Understanding the Interconnection Queue: What Every Data Center Developer Needs to Know

The interconnection queue has become the single biggest variable in data center development timelines. This guide covers queue mechanics, cost drivers, how to monitor substation headroom, and where AI is being applied to track queue risk.

by Build Team · March 11, 2026 · 8 min read


The interconnection queue is now the single biggest variable in data center development timelines, and most developers are still underestimating it.

The power is available. The land is entitled. The fiber is mapped. And the project still stalls for 36 months because of the interconnection queue.

For data center developers, the grid is no longer a background assumption. It is the critical path. Understanding how utility interconnection queues work, how long they take, what drives cost, and how to track your position has become a core competency for anyone building at scale.

What the Interconnection Queue Actually Is

When a new large load connects to the electric grid, it doesn't just flip a switch. It enters a formal queue managed by the relevant regional transmission organization (RTO) or utility, which then runs a series of studies to determine how the new load affects grid stability and what infrastructure upgrades are required.

The studies progress in phases:

  • Phase 1 (Feasibility Study): High-level impact screen. Typically 3-6 months.

  • Phase 2 (System Impact Study): Detailed analysis of transmission constraints. Often 6-18 months.

  • Phase 3 (Facilities Study): Specific upgrades identified and costed. Another 6-12 months.

By the time the studies complete and a Generator Interconnection Agreement (or equivalent load interconnection agreement) is executed, it is not unusual to have spent 24-36 months in the queue, and that is before a single transformer is ordered.
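The cumulative effect of these study phases can be sketched with a simple range calculation. This is a minimal illustration; the per-phase durations are the typical ranges cited above, not guarantees for any particular RTO:

```python
# Typical study-phase durations in months (low, high), per the ranges above.
PHASES = {
    "feasibility": (3, 6),
    "system_impact": (6, 18),
    "facilities": (6, 12),
}

def queue_timeline_months(phases=PHASES):
    """Sum per-phase ranges into a best/worst-case total, in months."""
    low = sum(lo for lo, _ in phases.values())
    high = sum(hi for _, hi in phases.values())
    return low, high

low, high = queue_timeline_months()
print(f"Studies alone: {low}-{high} months")  # Studies alone: 15-36 months
```

Note that the 15-36 month spread covers the studies only; agreement negotiation and execution sit on top of it, which is how projects reach the 24-36 month figure before equipment is even ordered.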

The Queue Backlog Problem

Demand for new grid interconnection has surged since 2022. According to Lawrence Berkeley National Laboratory's 2025 Queued Up report, more than 2,600 GW of generation and storage capacity sat in the national interconnection queue as of mid-2025, up from under 500 GW in 2017. Load interconnection requests, dominated by data centers, EV charging and electrification, are growing at a similar rate.

The practical effect: queues are backing up. MISO, PJM and CAISO have all implemented queue reform processes in the past two years, moving from first-come, first-served to cluster study approaches. This improves grid planning efficiency but creates new complexity for developers tracking project position.

For a data center developer today, entering a queue in PJM or ERCOT in 2026 realistically means no interconnection agreement before 2028 in most cases. Projects in constrained load pockets (Northern Virginia, Phoenix, the Chicago suburbs) face even longer timelines.

What Drives Interconnection Cost

Interconnection cost is the other variable developers systematically underestimate. Two projects in the same substation service area can face dramatically different upgrade bills depending on what triggered their study.

Key cost drivers:

Transmission constraints. If your project loads a constrained transmission line, you may be allocated a share of the cost to upgrade or build relief capacity. These costs can run to tens of millions of dollars and are difficult to forecast early in the study process.

Substation capacity. Substations have physical limits. A developer requesting a 100 MW service connection at a substation already running at 80% capacity will trigger bus upgrades, transformer replacements or new switching gear. Typical costs: $5M-$25M depending on voltage class and region.

Network upgrade allocations. Under some tariffs, large load additions are allocated a share of upstream network upgrades that their project "caused". The allocation methodology varies by RTO and has been subject to ongoing FERC rulemaking.

Queue position dynamics. If a project ahead of you in the queue withdraws, your upgrade costs may change significantly. Projects behind you can also affect your study results if they enter before your study closes.

This variability is why interconnection cost is often the single largest contingent line item in a data center pro forma.
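One way to treat that contingent line item is to model it as a distribution rather than a point estimate. The sketch below runs a small Monte Carlo over the cost drivers described above; only the $5M-$25M substation range comes from the text, and every other range and probability is an illustrative assumption:

```python
import random

def sample_interconnection_cost(rng):
    """One Monte Carlo draw of total upgrade cost, in $M.

    Ranges are illustrative assumptions; only the $5M-$25M substation
    range comes from the cost drivers discussed above.
    """
    substation = rng.uniform(5, 25)          # bus/transformer upgrades
    # Transmission relief is contingent: assume a 40% chance of being
    # allocated a share, with a wide cost range (assumption).
    transmission = rng.uniform(10, 60) if rng.random() < 0.4 else 0.0
    network = rng.uniform(0, 15)             # upstream allocation (assumption)
    return substation + transmission + network

rng = random.Random(42)
draws = sorted(sample_interconnection_cost(rng) for _ in range(10_000))
p10, p50, p90 = (draws[int(len(draws) * q)] for q in (0.10, 0.50, 0.90))
print(f"P10/P50/P90: ${p10:.0f}M / ${p50:.0f}M / ${p90:.0f}M")
```

Even a toy model like this makes the pro forma conversation concrete: the question shifts from "what will interconnection cost?" to "what contingency covers the P90 case?"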

The Queue Monitoring Imperative

Most development teams track interconnection as a milestone, not a market. That is the wrong frame.

The queue is a market. Positions are filed, withdrawn and reassigned constantly. Substation capacity is a finite resource that developers are competing for in real time. The team that knows which substations have available headroom, which are queued to capacity and which have near-term upgrade completions scheduled has a structural siting advantage.

What to monitor:

  • RTO queue filings and withdrawals: most RTOs publish queue data, but it requires regular parsing

  • Substation load flow data: available via utility and RTO databases, but often in raw form

  • FERC docket activity: tariff changes and rulemaking affect queue dynamics with 6-18 month lead times

  • Utility capital plans: IOU 10-year capital plans disclose substation upgrades, line reconductoring and new transmission investment

Historically, this required a dedicated power engineer and weeks of manual database work. AI-assisted tools are changing that, automating the extraction and synthesis of queue data from MISO, PJM, ERCOT, WECC and utility-specific systems into structured site scoring models.
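The core of that automation is unglamorous: normalize each feed into a common shape and compute headroom per substation. The sketch below shows the idea against a hypothetical CSV export; the column names and figures are invented for illustration, and real RTO feeds differ in format and field names:

```python
import csv
import io

# Hypothetical queue export; real RTO feeds vary in format and fields.
QUEUE_CSV = """substation,capacity_mw,committed_mw,queued_mw
Sub_A,300,240,80
Sub_B,500,350,60
Sub_C,200,195,40
"""

def headroom_report(raw, requested_mw):
    """Flag substations whose remaining capacity covers a requested load,
    counting both committed and already-queued megawatts against capacity."""
    rows = csv.DictReader(io.StringIO(raw))
    out = []
    for r in rows:
        free = (float(r["capacity_mw"]) - float(r["committed_mw"])
                - float(r["queued_mw"]))
        out.append((r["substation"], free, free >= requested_mw))
    return out

for name, free, ok in headroom_report(QUEUE_CSV, requested_mw=50):
    print(f"{name}: {free:+.0f} MW headroom {'OK' if ok else 'constrained'}")
```

The key detail is counting queued megawatts, not just committed load, against capacity: a substation can look open on paper while being fully spoken for by projects ahead of you in the queue.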

How AI Is Helping Development Teams Track the Queue

The core challenge is data heterogeneity. Every RTO publishes queue data in a different format. Utility interconnection databases are inconsistent. FERC dockets require manual parsing. Aggregating a coherent picture of grid availability for a target market has traditionally required months of specialist work.

AI is useful here in three specific ways:

1. Queue data aggregation and parsing. AI agents can pull, clean and structure interconnection queue data from multiple RTO sources on a recurring basis, flagging substations with available headroom and changes in queue position.

2. Cost estimation modeling. By combining network topology data, existing load growth curves and historical study outcomes, AI models can generate probabilistic ranges for interconnection cost earlier in the site evaluation process, narrowing uncertainty before committing to land.

3. Timeline projection. Based on queue depth, study phase, historical study durations for a given RTO and current FERC rulemaking status, AI can generate realistic interconnection timeline distributions. Not point estimates, but ranges with a credible basis.
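A timeline distribution of this kind can be sketched in a few lines. This is purely illustrative: the base study duration is drawn uniformly from the phase ranges discussed earlier, and each project ahead in the queue adds a small random delay whose scale is an assumption, not an RTO statistic:

```python
import random

def timeline_distribution(queue_depth, base_months=(15, 36),
                          per_project_delay=0.5, n=5000, seed=7):
    """Sample interconnection-timeline outcomes in months.

    Illustrative only: base study duration is uniform over the phase
    ranges above; each project ahead in the queue adds an exponentially
    distributed delay (the per-project mean is an assumption).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        base = rng.uniform(*base_months)
        congestion = sum(rng.expovariate(1 / per_project_delay)
                        for _ in range(queue_depth))
        samples.append(base + congestion)
    samples.sort()
    return (samples[len(samples) // 10],
            samples[len(samples) // 2],
            samples[9 * len(samples) // 10])

p10, p50, p90 = timeline_distribution(queue_depth=12)
print(f"P10/P50/P90 timeline: {p10:.0f} / {p50:.0f} / {p90:.0f} months")
```

The output is a range with a stated basis, which is the point: a P10/P50/P90 band tied to queue depth and study history is defensible in an investment committee in a way a single broker estimate is not.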

These capabilities are being deployed by the most sophisticated data center development teams today. The teams still relying on broker calls and single-point power availability estimates are operating with a systematically incomplete picture.

Developer Checklist

Before any serious site commitment in a new market, a data center developer should be able to answer:

  • Which substations in the target area have available capacity, and at what voltage?

  • What is the current queue depth at those substations?

  • What network upgrades are planned in the next 3-5 years that could increase or constrain capacity?

  • What is the realistic interconnection cost range, not just the optimistic case?

  • What is the realistic interconnection timeline, including study phases, under current queue conditions?

If you cannot answer these questions with data, you are pricing risk by assumption. In the current grid environment, that is an expensive habit.

The interconnection queue is the leverage point that separates disciplined data center developers from expensive surprises. It rewards the teams who treat it as an active market to monitor, not a checkbox to close.