Lund University Publications

Demand response for residential appliances using multi-agent reinforcement learning with price and solar power uncertainty

Shantanu, Kumar; Choudhary, Niraj Kumar; Singh, Nitin and Kumar, Krishna (2025) In Energy Reports 14. p. 3725-3737
Abstract

The electricity market exhibits significant uncertainty arising from rapid fluctuations in prices, variations in load demand, and the intermittent nature of renewable energy resources. Effectively managing residential energy under these dynamic conditions is a challenging task. Demand Response (DR) offers a practical solution by enabling the flexible scheduling of energy consumption in response to changing market signals. This paper proposes a residential energy management framework formulated as a multi-agent decision-making problem, where each household appliance is modelled as an autonomous agent that selects optimal actions while maintaining user comfort. The optimal control policy for each agent is learned using a Deep Q-Learning (DQN) algorithm, which efficiently handles the large state–action space inherent in household scheduling. To address key sources of uncertainty, a Long Short-Term Memory (LSTM) network is employed for short-term electricity price forecasting, while a beta probability density function models the stochastic nature of solar power generation. The proposed system is evaluated under two pricing mechanisms, Real-Time Pricing (RTP) and Time-of-Use (ToU), across three distinct customer preference scenarios reflecting different trade-offs between comfort and cost. Simulation results demonstrate the algorithm’s capability to reduce electricity expenses while respecting user comfort constraints. Specifically, the daily electricity bills achieved under RTP and ToU are $6.04/$6.20 for Case 1, $4.62/$4.68 for Case 2, and $2.72/$2.82 for Case 3. Energy consumption remains similar for the first two cases, whereas Case 3 shows a lower demand under RTP, indicating that real-time pricing provides superior cost efficiency compared to ToU tariffs. These findings highlight the potential of integrating deep reinforcement learning with advanced forecasting techniques for resilient, cost-effective residential energy management in modern smart grids.
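As an illustration of the scheduling approach described in the abstract, below is a minimal per-appliance Deep Q-Learning agent sketched in Python with PyTorch. The state layout (time slot, forecast price, solar estimate, appliance status), the binary on/off action set, and all hyperparameters are assumptions for illustration, not the paper's configuration.

import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps an appliance state to a Q-value per discrete action."""
    def __init__(self, state_dim=4, n_actions=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s):
        return self.net(s)

class ApplianceAgent:
    """One autonomous agent per appliance, trained with experience replay."""
    def __init__(self, state_dim=4, n_actions=2, gamma=0.95, eps=0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.target = QNetwork(state_dim, n_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state):
        # Epsilon-greedy choice over the discrete schedule actions.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state)).argmax())

    def remember(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))

    def update(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, done = map(torch.tensor, zip(*batch))
        q_sa = self.q(s.float()).gather(1, a.view(-1, 1)).squeeze(1)
        with torch.no_grad():
            q_next = self.target(s2.float()).max(1).values
            tgt = r.float() + self.gamma * q_next * (1 - done.float())
        loss = nn.functional.mse_loss(q_sa, tgt)
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        # In practice the target network is re-synced every few hundred steps.

The reward signal would combine the forecast cost of the energy consumed with a comfort penalty, consistent with the cost-comfort trade-off the abstract describes.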

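The short-term price forecaster can likewise be sketched with a single-layer LSTM, assuming hourly prices windowed into 24-step input sequences; the layer sizes, window length, and synthetic training data below are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Predicts the next-hour electricity price from the previous 24 hours."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, 24, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # last hidden state -> next price

# One training step on synthetic data; real market prices would replace this.
model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 24, 1)   # 16 windows of 24 hourly prices
y = torch.randn(16, 1)       # next-hour price targets
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()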
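Finally, the stochastic solar model: a beta probability density function over irradiance normalized to [0, 1], scaled by the panel rating. The shape parameters and PV capacity below are placeholders; in practice the shape parameters would be fitted to historical irradiance data.

import numpy as np
from scipy.stats import beta

a, b = 2.0, 5.0        # placeholder shape parameters (fitted from data in practice)
pv_capacity_kw = 3.0   # assumed rooftop PV rating

# Draw normalized irradiance for each hour and scale to PV output.
rng = np.random.default_rng(0)
irradiance = beta.rvs(a, b, size=24, random_state=rng)   # values in [0, 1]
pv_output_kw = pv_capacity_kw * irradiance

# Expected hourly PV output, using E[X] = a / (a + b) for the beta distribution.
print(beta.mean(a, b) * pv_capacity_kw)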
Please use this url to cite or link to this publication:
author
Shantanu, Kumar; Choudhary, Niraj Kumar; Singh, Nitin and Kumar, Krishna
organization
publishing date
2025
type
Contribution to journal
publication status
published
subject
keywords
Deep Q-learning, Demand response, Electricity price forecasting, Multi-agent system
in
Energy Reports
volume
14
pages
13 pages
publisher
Elsevier
external identifiers
  • scopus:105021135955
ISSN
2352-4847
DOI
10.1016/j.egyr.2025.10.047
language
English
LU publication?
yes
id
890cd1d0-5482-458d-aacb-b99b863ca911
date added to LUP
2025-12-08 14:58:29
date last changed
2025-12-08 14:59:25
@article{890cd1d0-5482-458d-aacb-b99b863ca911,
  abstract     = {{The electricity market exhibits significant uncertainty arising from rapid fluctuations in prices, variations in load demand, and the intermittent nature of renewable energy resources. Effectively managing residential energy under these dynamic conditions is a challenging task. Demand Response (DR) offers a practical solution by enabling the flexible scheduling of energy consumption in response to changing market signals. This paper proposes a residential energy management framework formulated as a multi-agent decision-making problem, where each household appliance is modelled as an autonomous agent that selects optimal actions while maintaining user comfort. The optimal control policy for each agent is learned using a Deep Q-Learning (DQN) algorithm, which efficiently handles the large state–action space inherent in household scheduling. To address key sources of uncertainty, a Long Short-Term Memory (LSTM) network is employed for short-term electricity price forecasting, while a beta probability density function models the stochastic nature of solar power generation. The proposed system is evaluated under two pricing mechanisms, Real-Time Pricing (RTP) and Time-of-Use (ToU), across three distinct customer preference scenarios reflecting different trade-offs between comfort and cost. Simulation results demonstrate the algorithm’s capability to reduce electricity expenses while respecting user comfort constraints. Specifically, the daily electricity bills achieved under RTP and ToU are $6.04/$6.20 for Case 1, $4.62/$4.68 for Case 2, and $2.72/$2.82 for Case 3. Energy consumption remains similar for the first two cases, whereas Case 3 shows a lower demand under RTP, indicating that real-time pricing provides superior cost efficiency compared to ToU tariffs. These findings highlight the potential of integrating deep reinforcement learning with advanced forecasting techniques for resilient, cost-effective residential energy management in modern smart grids.}},
  author       = {{Shantanu, Kumar and Choudhary, Niraj Kumar and Singh, Nitin and Kumar, Krishna}},
  issn         = {{2352-4847}},
  keywords     = {{Deep Q-learning; Demand response; Electricity price forecasting; Multi-agent system}},
  language     = {{eng}},
  pages        = {{3725--3737}},
  publisher    = {{Elsevier}},
  journal      = {{Energy Reports}},
  title        = {{Demand response for residential appliances using multi-agent reinforcement learning with price and solar power uncertainty}},
  url          = {{http://dx.doi.org/10.1016/j.egyr.2025.10.047}},
  doi          = {{10.1016/j.egyr.2025.10.047}},
  volume       = {{14}},
  year         = {{2025}},
}