Model-free Approaches for Real-time Distribution System Operation: A Comparison of Feedback Optimization and Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

With the proliferation of distributed energy resources, real-time control becomes critical to ensure that voltage and loading limits are maintained in power distribution systems. The lack of an accurate grid model and load data, however, renders traditional model-based optimization inapplicable in this context. To overcome this limitation, this paper presents and compares two model-free, forecast-free approaches for real-time distribution system operation: Lyapunov optimization-based online feedback optimization (OFO) and deep reinforcement learning (DRL). Simulation studies performed on a 97-node low-voltage system suggest that OFO significantly outperforms DRL, curtailing 24% less PV energy (relative to the total possible generation) over a test week, while enforcing distribution grid limits and requiring minimal training effort.
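The abstract's central idea — steering controllable PV setpoints using voltage measurements instead of a grid model — can be illustrated with a generic penalty-based feedback-optimization loop. This is only a sketch: the paper's Lyapunov optimization-based formulation is not reproduced here, and the toy 5-unit linear voltage model, the sensitivity estimate `S_est`, and all numerical values are illustrative assumptions (the actual study uses a 97-node low-voltage system).

```python
import numpy as np

# Hypothetical toy setup: a small random linear voltage model stands in
# for the real feeder. The controller never uses S_true; it only sees
# measured voltages, plus a crude sensitivity estimate S_est.
rng = np.random.default_rng(0)
n = 5                                       # number of curtailable PV units (assumption)
S_true = 0.02 * (1.0 + rng.random((n, n)))  # "true" voltage sensitivities (unknown to OFO)
S_est = 0.02 * np.ones((n, n))              # rough sensitivity estimate used by the controller
v_max = 1.05                                # upper voltage limit (p.u.)
p_avail = np.full(n, 4.0)                   # available PV power per unit (illustrative)

def measure_voltage(p):
    """Plant response: voltages resulting from injections p (stands in for real measurements)."""
    return 1.0 + S_true @ p

# Feedback-optimization loop: maximize PV injection (i.e. minimize
# curtailment) with a penalty on measured voltage-limit violations,
# projecting the setpoints back into their admissible range each step.
p = np.zeros(n)                             # PV setpoints, start fully curtailed
alpha, mu = 0.02, 5000.0                    # step size and penalty weight (assumptions)
for _ in range(300):
    v = measure_voltage(p)                  # measurement feedback replaces a grid model
    viol = np.maximum(0.0, v - v_max)       # per-node voltage-limit violations
    grad = -np.ones(n) + mu * (S_est.T @ viol)
    p = np.clip(p - alpha * grad, 0.0, p_avail)

v = measure_voltage(p)
print("injections:", np.round(p, 3))
print("max voltage (p.u.):", round(float(v.max()), 4))
```

The appeal of this pattern, and a plausible reason for the low training effort reported for OFO in the abstract, is that only a coarse sensitivity estimate is needed: the measurement feedback in each iteration corrects for model mismatch, so no accurate grid model, load data, or offline learning phase is required.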
Original language: English
Title of host publication: IEEE PowerTech 2025
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 6
Publication status: Accepted/In press - 2025
Event: 2025 IEEE PowerTech Kiel - Kiel, Germany
Duration: 29 Jun 2025 - 3 Jul 2025

Conference

Conference: 2025 IEEE PowerTech Kiel
Country/Territory: Germany
City: Kiel
Period: 29/06/25 - 3/07/25
