On-line building energy optimization using deep reinforcement learning

E. Mocanu (Corresponding author), D.C. Mocanu, P.H. Nguyen, A. Liotta, M.E. Webber, M. Gibescu, J.G. Slootweg

Research output: Contribution to journal › Article › Academic › peer-review

19 Citations (Scopus)
58 Downloads (Pure)

Abstract

Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which have been extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
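For illustration only (this is not the authors' code), the sketch below shows one common way to extend a deep Q-network so it can select actions for several building loads simultaneously: a shared trunk with one output head per controllable device, with the greedy joint action taken as the per-head argmax. All names, layer sizes, state features, and the binary off/on action space are assumptions made for this example; the paper's exact architecture, state definition, and reward may differ.

    import torch
    import torch.nn as nn

    class MultiActionQNetwork(nn.Module):
        """Q-network with one output head per controllable device.

        Each head scores that device's discrete actions (here: off/on),
        so actions for every device are obtained in a single forward
        pass -- one way to realize "multiple actions simultaneously".
        """

        def __init__(self, state_dim: int, n_devices: int, n_actions: int = 2):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(state_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
            )
            self.heads = nn.ModuleList(
                [nn.Linear(128, n_actions) for _ in range(n_devices)]
            )

        def forward(self, state: torch.Tensor) -> torch.Tensor:
            h = self.trunk(state)
            # (batch, n_devices, n_actions): Q-values per device and action
            return torch.stack([head(h) for head in self.heads], dim=1)

    # Hypothetical 8-feature state: hour of day, price, PV output, EV charge, ...
    qnet = MultiActionQNetwork(state_dim=8, n_devices=3)
    state = torch.randn(1, 8)
    joint_action = qnet(state).argmax(dim=-1)  # shape (1, 3): one action per device

Taking the argmax per head keeps the joint action space linear in the number of devices, rather than exponential as it would be if every combination of device settings were enumerated as a single discrete action.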

Original language: English
Article number: 8356086
Pages (from-to): 3698-3708
Number of pages: 11
Journal: IEEE Transactions on Smart Grid
Volume: 10
Issue number: 4
DOI: 10.1109/TSG.2018.2834219
Publication status: Published - Jul 2019

Fingerprint

  • Reinforcement learning
  • Advanced metering infrastructures
  • Energy management systems
  • Electric vehicles
  • Power generation
  • Electricity
  • Scheduling
  • Feedback
  • Planning
  • Deep learning

Keywords

  • Buildings
  • Deep neural networks
  • Deep reinforcement learning
  • Demand response
  • Energy consumption
  • Learning (artificial intelligence)
  • Machine learning
  • Minimization
  • Optimization
  • Smart grids
  • Strategic optimization

Cite this

Mocanu, E.; Mocanu, D.C.; Nguyen, P.H.; Liotta, A.; Webber, M.E.; Gibescu, M.; Slootweg, J.G. / On-line building energy optimization using deep reinforcement learning. In: IEEE Transactions on Smart Grid. 2019; Vol. 10, No. 4, pp. 3698-3708.
@article{7a5b75e46833497ea0ce4b77353a7a03,
title = "On-line building energy optimization using deep reinforcement learning",
abstract = "Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which have been extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.",
keywords = "Buildings, Deep neural networks, Deep reinforcement learning, Demand response, Energy consumption, Learning (artificial intelligence), Machine learning, Minimization, Optimization, Smart grids, Strategic optimization",
author = "E. Mocanu and D.C. Mocanu and P.H. Nguyen and A. Liotta and M.E. Webber and M. Gibescu and J.G. Slootweg",
year = "2019",
month = jul,
doi = "10.1109/TSG.2018.2834219",
language = "English",
volume = "10",
pages = "3698--3708",
journal = "IEEE Transactions on Smart Grid",
issn = "1949-3053",
publisher = "Institute of Electrical and Electronics Engineers",
number = "4",

}

On-line building energy optimization using deep reinforcement learning. / Mocanu, E. (Corresponding author); Mocanu, D.C.; Nguyen, P.H.; Liotta, A.; Webber, M.E.; Gibescu, M.; Slootweg, J.G.

In: IEEE Transactions on Smart Grid, Vol. 10, No. 4, 8356086, 07.2019, pp. 3698-3708.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - On-line building energy optimization using deep reinforcement learning

AU - Mocanu, E.

AU - Mocanu, D.C.

AU - Nguyen, P.H.

AU - Liotta, A.

AU - Webber, M.E.

AU - Gibescu, M.

AU - Slootweg, J.G.

PY - 2019/7

Y1 - 2019/7

N2 - Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which have been extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.

AB - Unprecedented volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit the planning and operation of future power systems and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using deep reinforcement learning, a hybrid class of methods that combines reinforcement learning with deep learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, deep Q-learning and deep policy gradient, both of which have been extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles, and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.

KW - Buildings

KW - Deep neural networks

KW - Deep reinforcement learning

KW - Demand response

KW - Energy consumption

KW - Learning (artificial intelligence)

KW - Machine learning

KW - Minimization

KW - Optimization

KW - Smart grids

KW - Strategic optimization

UR - http://www.scopus.com/inward/record.url?scp=85046827366&partnerID=8YFLogxK

U2 - 10.1109/TSG.2018.2834219

DO - 10.1109/TSG.2018.2834219

M3 - Article

AN - SCOPUS:85046827366

VL - 10

SP - 3698

EP - 3708

JO - IEEE Transactions on Smart Grid

JF - IEEE Transactions on Smart Grid

SN - 1949-3053

IS - 4

M1 - 8356086

ER -