Model-free Approaches for Real-time Distribution System Operation: A Comparison of Feedback Optimization and Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer review


Abstract

With the proliferation of distributed energy resources, real-time control becomes critical to ensure that voltage and loading limits are maintained in power distribution systems. The lack of an accurate grid model and load data, however, renders traditional model-based optimization inapplicable in this context. To overcome this limitation, this paper presents and compares two model-free and forecast-free approaches for real-time distribution system operation: Lyapunov optimization-based online feedback optimization (OFO) and deep reinforcement learning (DRL). Simulation studies on a 97-node low-voltage system suggest that OFO significantly outperforms DRL, curtailing 24% less PV energy over a test week (relative to the total possible generation) while enforcing distribution grid limits and requiring minimal training effort.
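
The abstract describes the OFO approach only at a high level. As a rough illustration of how a measurement-driven feedback optimization loop for PV curtailment can look, the sketch below implements a generic penalty-based projected-gradient controller in Python. The inverter count, sensitivity matrix S, voltage limit, step size, and penalty weight are all placeholder assumptions for illustration; this is not the paper's Lyapunov optimization-based algorithm or its parameterization.

```python
import numpy as np

# Illustrative sketch only: a generic penalty-based OFO controller for PV
# curtailment under an upper voltage limit. All numbers (sensitivity matrix S,
# available PV power, step size, penalty weight) are placeholder assumptions,
# not values or the exact algorithm from the paper.

n_pv = 4                                # number of controllable PV inverters (assumed)
v_max = 1.05                            # upper voltage limit in p.u.
p_avail = np.full(n_pv, 5.0)            # available PV power per inverter in kW (assumed)
S = 0.004 * np.ones((n_pv, n_pv)) + 0.008 * np.eye(n_pv)  # assumed voltage sensitivities dv/dp

alpha = 0.03                            # controller step size (assumed)
w = 5e4                                 # penalty weight on voltage violations (assumed)


def measure_voltages(p):
    """Stand-in for grid voltage measurements; here a linear toy model."""
    return 1.0 + S @ p


def ofo_step(p, v_meas):
    """One projected-gradient iteration driven by measured voltages:
    maximize PV injection, penalize voltages above v_max."""
    violation = np.maximum(v_meas - v_max, 0.0)
    grad = -np.ones(n_pv) + w * (S.T @ violation)  # d/dp of (-sum(p) + w/2*||violation||^2)
    # Project the update onto the box 0 <= p <= available power.
    return np.clip(p - alpha * grad, 0.0, p_avail)


p_set = p_avail.copy()                  # start uncurtailed
for _ in range(100):
    v = measure_voltages(p_set)         # feedback comes from measurements, not a model solve
    p_set = ofo_step(p_set, v)

print("setpoints [kW]:", np.round(p_set, 2))
print("max voltage [p.u.]:", round(float(measure_voltages(p_set).max()), 4))
# Note: the soft penalty only approximately enforces v <= v_max; the paper's
# Lyapunov-based formulation handles the grid limits differently.
```

The key feature of such a controller is that each iteration reacts to measured voltages rather than to the output of a grid model, which is why no accurate network model or load forecast is required.
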
Original language: English
Title: IEEE PowerTech 2025
Publisher: Institute of Electrical and Electronics Engineers
Number of pages: 6
Status: Accepted/In press - 2025
Event: 2025 IEEE PowerTech Kiel - Kiel, Germany
Duration: 29 Jun 2025 - 3 Jul 2025

Conference

Conference: 2025 IEEE PowerTech Kiel
Country/Territory: Germany
City: Kiel
Period: 29/06/25 - 3/07/25

