LUP Student Papers

LUND UNIVERSITY LIBRARIES

Physics-Informed Reinforcement Learning Feasibility Study for Building Energy Optimization

Carrillo Sala, Antoni (2024)
Department of Automatic Control
Abstract
Abstract
Buildings worldwide account for 30% of energy consumption, and Heating, Ventilation and Air Conditioning (HVAC) represents roughly 38% of a building’s consumption. Therefore, energy savings are crucial for sustainability. The complexity of buildings, with diverse physical domains and large-scale components, presents challenges to achieving energy-efficient operation. Implementing high-performance controls is effective but takes time and requires qualified experts. Reinforcement learning (RL) offers adaptability but demands extensive data, making it difficult to scale to large systems. RL is extensively used in model-free environments, such as video games; in control applications, however, the problem is more challenging, since the controller must also guarantee stability and robustness of the system. This project explores Physics-Informed RL (PIRL) for building energy optimization, focusing on the supervisory control level. Information from physical models is selected to accelerate learning, and the impact of reinforcement learning on a building’s cooling system is studied. Key questions include selecting appropriate information from physical models, determining data requirements, and exploiting the building system architecture for the scalability of PIRL. Dynamic models developed in the Modelica language with an open-source building library are used in the thesis. Numerical experiments are then performed to evaluate the scaling potential of PIRL. One goal is to understand and apply software-in-the-loop methods using the PIRL methodology and Carrier Automated Logic building control software. It is shown that physics information helps to reduce training time and that it is possible to save energy using PIRL in comparison with the baseline controller.
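As a rough illustration of the idea the abstract describes, one common way to inject physics information into an RL pipeline is to augment the agent's raw sensor observation with a one-step-ahead prediction from a simplified physical model. The sketch below uses a first-order 1R1C (resistance-capacitance) zone model for this; the function names, parameter values, and the 1R1C model itself are illustrative assumptions, not the thesis's actual method.

```python
def rc_zone_prediction(t_zone, t_out, q_cool, dt=300.0, R=0.05, C=1.0e6):
    """Predict the next zone temperature [degC] with a 1R1C model.

    dT/dt = (t_out - t_zone)/(R*C) - q_cool/C
    R: thermal resistance [K/W], C: thermal capacitance [J/K],
    q_cool: cooling power [W], dt: time step [s].
    (Illustrative parameter values, not from the thesis.)
    """
    dTdt = (t_out - t_zone) / (R * C) - q_cool / C
    return t_zone + dt * dTdt

def physics_informed_observation(t_zone, t_out, q_cool):
    """Raw sensor readings plus the physics model's prediction.

    Feeding the prediction to the agent alongside the raw signals is one
    way "information from physical models" can accelerate learning.
    """
    t_pred = rc_zone_prediction(t_zone, t_out, q_cool)
    return [t_zone, t_out, q_cool, t_pred]

# Example: a warm day with active cooling; the model predicts the zone
# temperature will drop, and the agent sees that prediction directly.
obs = physics_informed_observation(t_zone=24.0, t_out=30.0, q_cool=2000.0)
```

A supervisory RL agent would then choose setpoints from this augmented observation instead of the raw sensors alone.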
author: Carrillo Sala, Antoni
supervisor:
organization:
year: 2024
type: H3 - Professional qualifications (4 Years - )
subject:
report number: TFRT-6246
other publication id: 0280-5316
language: English
id: 9174404
date added to LUP: 2024-09-16 08:47:34
date last changed: 2024-09-16 08:47:34
@misc{9174404,
  abstract     = {{Buildings worldwide account for 30% of energy consumption, and Heating, Ventilation and Air Conditioning (HVAC) represents roughly 38% of a building’s consumption. Therefore, energy savings are crucial for sustainability. The complexity of buildings, with diverse physical domains and large-scale components, presents challenges to achieving energy-efficient operation. Implementing high-performance controls is effective but takes time and requires qualified experts. Reinforcement learning (RL) offers adaptability but demands extensive data, making it difficult to scale to large systems. RL is extensively used in model-free environments, such as video games; in control applications, however, the problem is more challenging, since the controller must also guarantee stability and robustness of the system. This project explores Physics-Informed RL (PIRL) for building energy optimization, focusing on the supervisory control level. Information from physical models is selected to accelerate learning, and the impact of reinforcement learning on a building’s cooling system is studied. Key questions include selecting appropriate information from physical models, determining data requirements, and exploiting the building system architecture for the scalability of PIRL. Dynamic models developed in the Modelica language with an open-source building library are used in the thesis. Numerical experiments are then performed to evaluate the scaling potential of PIRL. One goal is to understand and apply software-in-the-loop methods using the PIRL methodology and Carrier Automated Logic building control software. It is shown that physics information helps to reduce training time and that it is possible to save energy using PIRL in comparison with the baseline controller.}},
  author       = {{Carrillo Sala, Antoni}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Physics-Informed Reinforcement Learning Feasibility Study for Building Energy Optimization}},
  year         = {{2024}},
}