Publisher: IEEE Transactions on Smart Grid, Vol. 12, No. 1
Published on: 08/06/2020
DOI: https://doi.org/10.1109/TSG.2020.3014055
Abstract:
Buildings, as major energy consumers, can provide great untapped demand response (DR) resources for grid services. However, their participation in DR programs remains low in practice. One major impediment to popularizing DR in buildings is the lack of cost-effective automation systems that can be widely adopted. Existing optimization-based smart building control algorithms incur high costs for both building-specific modeling and on-demand computing resources. To tackle these issues, this paper proposes a cost-effective edge-cloud integrated solution using reinforcement learning (RL). Besides RL’s ability to solve sequential optimal decision-making problems, its adaptability to easy-to-obtain building models and its offline learning capability are likely to reduce the controller’s implementation cost. Using a surrogate building model learned automatically from building operation data, an RL agent learns an optimal control policy on cloud infrastructure, and the policy is then distributed to edge devices for execution. Simulation results demonstrate the control efficacy and the learning efficiency in buildings of different sizes. A preliminary cost analysis on a 4-zone commercial building shows that the annual cost of optimal policy training is only 2.25% of the DR incentive received. These results suggest an approach with a higher return on investment for buildings participating in DR programs.
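To make the edge-cloud workflow described in the abstract concrete, the sketch below trains a tabular Q-learning agent against a toy surrogate building model (a "cloud" phase) and then exports the resulting policy as a lookup table for "edge" execution. Everything here is an illustrative assumption rather than the paper's actual method: the single-zone linear thermal dynamics, the coefficients, the reward shape, the DR price window, and the choice of Q-learning are all placeholders for the learned surrogate model and RL algorithm the paper uses.

```python
# Minimal sketch of the edge-cloud RL workflow; all models, coefficients,
# and hyperparameters are illustrative assumptions, not the paper's own.
import numpy as np

rng = np.random.default_rng(0)

# --- Surrogate building model (assumed single-zone linear thermal dynamics) ---
# T[k+1] = a*T[k] + b*u[k] + c*T_out; in the paper's pipeline, a surrogate
# model like this would be fit automatically from building operation data.
a, b, c = 0.9, -1.0, 0.1

def surrogate_step(T, u, T_out):
    return a * T + b * u + c * T_out

# --- Discretization for a tabular Q-learning agent (illustrative choice) ---
temps = np.linspace(18.0, 28.0, 21)           # state grid: zone temperature (C)
actions = np.array([0.0, 0.5, 1.0])           # action grid: cooling level

def state_index(T):
    return int(np.clip(np.searchsorted(temps, T), 0, len(temps) - 1))

def reward(T, u, price):
    comfort = -abs(T - 22.0)                  # penalty for setpoint deviation
    energy = -price * u                       # energy cost under a DR price signal
    return comfort + energy

# --- "Cloud" phase: train the control policy against the surrogate model ---
Q = np.zeros((len(temps), len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    T = rng.uniform(20.0, 26.0)
    for k in range(48):                       # 48 control steps per episode
        s = state_index(T)
        ai = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        price = 1.0 if 16 <= k < 20 else 0.2  # assumed DR event window
        T_next = surrogate_step(T, actions[ai], T_out=30.0)
        r = reward(T_next, actions[ai], price)
        Q[s, ai] += alpha * (r + gamma * Q[state_index(T_next)].max() - Q[s, ai])
        T = T_next

# --- "Edge" phase: export the trained policy; execution is a cheap lookup ---
policy = Q.argmax(axis=1)                     # one action index per temperature bin
np.save("policy.npy", policy)                 # artifact distributed to edge devices

T_measured = 24.3                             # e.g., a current zone temperature
u_cmd = actions[policy[state_index(T_measured)]]
print(f"edge control action: cooling level {u_cmd}")
```

The split mirrors the cost argument in the abstract: the expensive learning loop runs once on cloud infrastructure against a cheap surrogate model, while the edge device only performs a table lookup at each control step, so no on-demand optimization is needed during operation.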
Keywords:
Buildings, Computational modeling, Optimal control, Cloud computing, Learning (artificial intelligence), Training, Load management