The OpenADR Alliance produced a well-specified protocol for transmitting demand-response signals between virtual top nodes (VTNs) and virtual end nodes (VENs). What it deliberately left unspecified — asset selection order, partial response handling, curtailment cascade logic — is where most real-world DR program failures originate.
OpenADR 2.0b is a communication protocol, not a dispatch algorithm. The specification defines: event payload structure (oadrEvent objects with duration, signals, and target metadata), VTN-to-VEN delivery using push and pull modes over HTTPS, registration and opt-in/opt-out mechanisms for VENs, and reporting structures for demand-response event acknowledgments and telemetry.
The specification is silent on: which enrolled assets to curtail first when multiple assets could satisfy the MW reduction target, how to handle VENs that acknowledge an event but deliver only partial load reduction, what to do when cumulative curtailment reaches the target before all signaled assets have responded, and how to sequence a second curtailment event if the first event's assets are still in a recovery period.
These omissions are intentional — the Alliance designed OpenADR as a general-purpose interoperability standard, not as a utility-specific dispatch logic layer. That logic belongs to the VTN implementation. The problem is that most VTN software vendors either leave dispatch sequencing to the utility to configure or implement a naive first-in/first-out queue that performs poorly in real operations.
Consider a demand-response program with 40 enrolled commercial HVAC assets representing a combined curtailable capacity of 12 MW. When a curtailment event is triggered, naive dispatch sends signals to all 40 assets simultaneously, expecting them to collectively reduce load by the required amount.
The problems with this approach compound quickly. First, simultaneous dispatch creates a rebound effect: when all 40 assets exit curtailment at the same time (typically 30–60 minutes after event end), load spikes by 10–15% above baseline as HVAC systems recover temperature setpoints. Second, simultaneous dispatch obscures individual asset performance — you can't distinguish between assets that delivered their contracted MW reduction and assets that partially responded or failed silently. Third, ISO/RTO curtailment event settlements require per-asset performance data; aggregate dispatch logs don't satisfy most RTO audit requirements.
Staged dispatch — sending curtailment signals to assets in priority order, stopping once accumulated response meets the MW target — addresses all three problems. It also reduces participant fatigue on high-frequency DR programs where assets are curtailed more than 20 times per year.
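The core loop of staged dispatch fits in a few lines. The following is an illustrative Python sketch, not a production dispatch engine; the `Asset` structure and its fields are hypothetical stand-ins for whatever your VTN's asset registry actually exposes:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    ven_id: str          # VEN identifier (hypothetical field name)
    expected_mw: float   # contracted curtailable capacity
    priority: int        # lower value = dispatched first

def staged_dispatch(assets, target_mw):
    """Signal assets in priority order, stopping once the accumulated
    expected response meets the MW target. Remaining assets are held
    in reserve rather than signaled."""
    dispatched, accumulated = [], 0.0
    for asset in sorted(assets, key=lambda a: a.priority):
        if accumulated >= target_mw:
            break  # target covered; preserve the rest for later events
        dispatched.append(asset.ven_id)
        accumulated += asset.expected_mw
    return dispatched, accumulated
```

With a 4.5 MW target and assets contributing 2, 3, and 4 MW in priority order, only the first two receive signals; the third stays in reserve.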
Effective dispatch sequencing requires a priority model for each enrolled asset. The factors that belong in that model are not arbitrary; they reflect the operational and contractual reality of each asset's participation: contracted curtailable MW, historical delivery rate, typical response delay, recovery period length, and the fraction of contracted annual event hours already consumed.
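One simple way to combine such factors is a weighted score. The weights and term shapes below are illustrative placeholders to be tuned per program, not recommendations:

```python
def priority_score(delivery_rate, response_delay_min,
                   events_used, events_contracted,
                   w_delivery=0.5, w_delay=0.3, w_budget=0.2):
    """Illustrative weighted priority score (higher = dispatch first).

    delivery_rate      -- historical MW delivered / MW contracted (0..1)
    response_delay_min -- typical minutes from signal to load reduction
    events_used/contracted -- annual event-hour budget consumption
    """
    # Penalize slow responders; 30 min is an assumed normalization point.
    delay_term = max(0.0, 1.0 - response_delay_min / 30.0)
    # Favor assets with event-hour budget remaining.
    budget_term = 1.0 - events_used / events_contracted
    return (w_delivery * delivery_rate
            + w_delay * delay_term
            + w_budget * budget_term)
```

An asset with a perfect delivery record, instant response, and an untouched event budget scores 1.0; consuming the event-hour budget pulls its score down so the engine spreads dispatches across the pool.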
OpenADR 2.0b includes reporting structures for telemetry (oadrReport) that allow VENs to report actual load reduction in near-real-time. Most implementations ignore this capability and rely on post-event settlement data. This is an operational mistake.
Real-time telemetry from SCADA or smart meter polling — typically available at 5-minute resolution for commercial accounts with interval metering — enables two critical dispatch adaptations. First, if an asset acknowledges the curtailment event but telemetry shows no load reduction after 10 minutes, the dispatch engine can send a backup signal to the next asset in the priority queue without waiting for the event period to end. Second, if aggregate curtailment is tracking above the MW target, the engine can hold back lower-priority assets from receiving signals at all, preserving their curtailment capacity for subsequent events in the same day.
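A sketch of that shortfall check, assuming `telemetry_mw` holds per-VEN observed reductions ten minutes into the event and `reserve` is the held-back tail of the priority queue; both structures are hypothetical:

```python
def backup_dispatch(signaled, telemetry_mw, reserve, target_mw):
    """Mid-event check: compare delivered MW against the target and
    pull backup assets from the held-back reserve to cover shortfall.

    signaled     -- VEN ids already sent curtailment signals
    telemetry_mw -- ven_id -> observed MW reduction so far
    reserve      -- list of (ven_id, expected_mw), in priority order
    """
    delivered = sum(telemetry_mw.get(v, 0.0) for v in signaled)
    shortfall = target_mw - delivered
    backups = []
    for ven_id, expected_mw in reserve:
        if shortfall <= 0:
            break  # target covered; keep remaining reserve unsignaled
        backups.append(ven_id)
        shortfall -= expected_mw
    return backups, delivered
```

If one of two signaled assets delivers nothing, the shortfall pulls exactly enough reserve assets to cover the gap; if aggregate response is already tracking at or above target, no reserve asset is touched.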
This last point matters for utilities operating under CAISO's Demand Response Auction Mechanism (DRAM) or PJM's Emergency Load Response Program, where assets are obligated to a specific number of curtailment events per year and over-dispatching erodes participant willingness to re-enroll.
ISO/RTO summer peaking events frequently require multiple curtailment events within the same operating day. A system designed around single-event logic will exhaust available DR capacity on the first event and have nothing left when a second event is triggered three hours later.
Cascade logic requires tracking the state of each enrolled asset across the full operating day: not just whether it is currently in a curtailment event, but when its recovery period ends, what fraction of its contracted annual event hours have been consumed, and what its thermal state is given recent curtailment history. This state must persist between dispatch decisions, which means it cannot live only in the VTN event queue — it requires a separate asset state model updated by telemetry at every SCADA polling cycle.
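A minimal version of that asset state model might look like the following; the field names and the 40-hour contract default are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class AssetState:
    """Per-asset state persisted across the operating day, updated
    at every telemetry polling cycle."""
    ven_id: str
    in_event: bool = False
    recovery_ends: Optional[datetime] = None  # end of post-event recovery
    event_hours_used: float = 0.0
    event_hours_contracted: float = 40.0      # placeholder annual budget

    def available(self, now: datetime) -> bool:
        """Eligible for a new curtailment event right now?"""
        if self.in_event:
            return False
        if self.recovery_ends is not None and now < self.recovery_ends:
            return False  # still recovering from the previous event
        return self.event_hours_used < self.event_hours_contracted
```

A second event triggered three hours after the first can then filter the pool with `available(now)` instead of re-signaling assets that are mid-recovery or out of contracted hours.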
As we cover in our article on SCADA data quality problems that break forecast models, the reliability of this telemetry feed is not guaranteed. A DR dispatch engine that assumes telemetry is current, without validating data age and quality, will make incorrect dispatch decisions during exactly the grid stress events when correct decisions matter most.
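A simple guard against acting on bad readings, assuming 5-minute polling and treating anything older than two cycles as stale; both thresholds and the plausibility bound are illustrative:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(minutes=10)  # two 5-minute polling cycles (assumed)

def usable_reading(value_mw, timestamp, now, max_plausible_mw=50.0):
    """Reject stale or physically implausible telemetry before it
    feeds a dispatch decision."""
    if now - timestamp > MAX_AGE:
        return False  # stale: asset may have changed state since
    if value_mw < 0 or value_mw > max_plausible_mw:
        return False  # outside the plausible range for this account
    return True
```

Readings that fail the check should fall through to a conservative default (for example, treating the asset as a non-responder) rather than being silently trusted.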
The dispatch trigger itself — the decision to initiate a curtailment event — should flow from a short-interval load forecast, not from real-time deviation alone. Waiting until actual load exceeds the threshold means dispatching into a problem that already exists; the DR response delay (typically 10–30 minutes for HVAC assets) means the imbalance will worsen before the response arrives.
A 15-minute-ahead load forecast enables pre-emptive dispatch: sending curtailment signals when the forecast projects exceedance, rather than when exceedance has already occurred. The technical requirement is a forecast with sufficient confidence interval resolution to distinguish between "load will likely exceed threshold" and "load might exceed threshold" — because spurious dispatch events erode participant trust and increase opt-out rates over time.
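One way to encode that distinction, assuming the forecast service exposes lower and upper band values for the 15-minute-ahead horizon (here hypothetically named `p10_mw` and `p90_mw`):

```python
def dispatch_decision(p10_mw, p90_mw, threshold_mw):
    """Gate pre-emptive dispatch on the forecast band, not the point
    forecast: fire only when even the low end clears the threshold.

    p10_mw / p90_mw -- lower / upper bounds of the forecast interval
    """
    if p10_mw > threshold_mw:
        return "DISPATCH"  # exceedance likely even at the band's low end
    if p90_mw > threshold_mw:
        return "WATCH"     # exceedance merely possible; don't burn events
    return "HOLD"
```

The "WATCH" state is where spurious dispatch is avoided: the event is prepared but not triggered until the band tightens above the threshold.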
If you're running a DR program on OpenADR 2.0b today, the questions worth auditing are: Does your VTN have configurable dispatch ordering, or does it signal all assets simultaneously? Are you ingesting real-time telemetry during events or relying on post-settlement data? Do you track asset curtailment history at the interval level, or only per-event? And critically — what is your average response delivery rate, measured as aggregate MW reduced divided by aggregate MW contracted across all events in the past 12 months?
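The delivery-rate metric itself is straightforward to compute from per-event settlement records; the tuple format below is an assumption about how those records might be stored:

```python
def delivery_rate(events):
    """Aggregate MW reduced divided by aggregate MW contracted.

    events -- list of (mw_reduced, mw_contracted) per curtailment event
    """
    reduced = sum(r for r, _ in events)
    contracted = sum(c for _, c in events)
    return reduced / contracted if contracted else 0.0
```

A program that contracted 8 MW across two events but delivered 5 MW sits at 0.625, below the 75% line discussed next.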
A response delivery rate below 75% is a sign of either poor asset prioritization, inadequate telemetry feedback, or a participant pool that has learned the program doesn't audit individual performance. All three are addressable with dispatch logic changes, but none of them are addressable by changing the OpenADR protocol version.
Configure asset priority weights, telemetry feedback thresholds, and cascade rules in the operator console — no custom development required.