A Deep Reinforcement Learning Approach to Using Whole Building Energy Model for HVAC Optimal Control
PhD Proposal by Zhiang Zhang, PhD-BPD Candidate
Friday, 20 April 2018 | 8:30am
MMCH 415 IW Large Conference Room
Advisory committee:
Professor Khee Poh Lam (Chair)
Associate Professor Mario Berges
Associate Professor Gianni Di Caro
Assistant Professor Adrian Chong
Abstract:
Heating, ventilation, and air-conditioning (HVAC) systems are the major source of energy consumption in non-residential buildings. While significant improvements have been achieved in system design and equipment efficiency, simple, static rule-based control strategies are still widely used for the supervisory-level control of HVAC systems. Significant energy savings could be achieved if optimal control strategies were used instead. However, even though optimal control has strong theoretical support, its practical feasibility in the HVAC systems of multi-zone non-residential buildings remains limited. Most existing studies apply only to single-zone buildings and test the optimal control strategies only in computer simulations. The following challenges prevent the easy implementation of optimal control strategies in HVAC systems:
A low-order model is usually required by classical optimal control algorithms (such as model predictive control), but creating such a model for the HVAC system of a multi-zone building is difficult. Therefore, most existing studies apply these algorithms only to residential houses and single rooms. There are successful cases applying the algorithms to multi-zone buildings, but either the buildings must be simplified to permit low-order modeling or complicated modeling approaches must be developed, which reduces the practicality of the control methods.
The whole building energy model (BEM) is potentially a suitable modeling method for the HVAC systems of multi-zone buildings, but its high model complexity and slow computational speed limit its use in optimal control. Existing BEM-based optimal control methods lack sophisticated optimization capability, so they can be applied only to simple control problems. Moreover, no real-life deployment of a BEM-based method has been reported.
“Model-free” reinforcement learning (RL) is an ideal candidate for using BEM in optimal control, but conventional RL methods require expert knowledge to design the algorithm (for example, hand-crafted state discretization; see the sketch after this list), which limits their scalability. Therefore, most reported studies focus on simple buildings only.
The recently proposed deep reinforcement learning (DRL) can effectively address these limitations of conventional RL. However, its application to HVAC optimal control is still at an infant stage, and no substantial implementation case studies have been reported.
Occupant-oriented control features, including real-time occupancy prediction and thermal sensation feedback, are usually not included in existing studies of HVAC optimal control.
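To illustrate the expert-knowledge burden of conventional RL noted above, the following minimal Python sketch shows tabular Q-learning for zone setpoint control. The discretization bins, action set, and reward convention here are illustrative assumptions only, not part of the proposal.

# Minimal sketch of conventional tabular Q-learning for HVAC setpoint control.
# The bins, actions, and learning parameters are illustrative assumptions.
import numpy as np

# Expert-designed state discretization: outdoor temp, zone temp, hour of day.
OUTDOOR_BINS = np.arange(-10, 40, 5)      # deg C
ZONE_BINS = np.arange(18, 28, 1)          # deg C
HOUR_BINS = np.arange(0, 24, 3)           # hour of day
ACTIONS = [20.0, 21.0, 22.0, 23.0, 24.0]  # candidate setpoints, deg C

def discretize(outdoor_t, zone_t, hour):
    """Map a continuous observation to a discrete state index tuple."""
    return (np.digitize(outdoor_t, OUTDOOR_BINS),
            np.digitize(zone_t, ZONE_BINS),
            np.digitize(hour, HOUR_BINS))

q_table = np.zeros((len(OUTDOOR_BINS) + 1,
                    len(ZONE_BINS) + 1,
                    len(HOUR_BINS) + 1,
                    len(ACTIONS)))

def q_learning_step(state, action_idx, reward, next_state,
                    alpha=0.1, gamma=0.99):
    """One-step update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * q_table[next_state].max()
    q_table[state + (action_idx,)] += alpha * (td_target - q_table[state + (action_idx,)])

Every new building or HVAC configuration requires its own hand-picked bins and reward design, which is exactly the scalability limit described above; DRL replaces the lookup table with a neural network that learns its own state representation.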
This thesis aims to develop an HVAC optimal control method that can be easily applied in non-residential multi-zone buildings. Therefore, a BEM-based optimal control method using DRL (DRL-BEMOC) will be proposed. The proposed method will accommodate occupant-oriented control features. A CPU-based DRL algorithm, asynchronous advantage actor-critic (A3C), will be used to train the artificial intelligence agent to learn HVAC optimal control policies. DRL-BEMOC will be deployed in two real-life multi-zone non-residential buildings in Pittsburgh, PA, USA, one with a radiant system and the other with an all-air system. A structured solution to the delayed reward problem of DRL, which may arise when controlling buildings with slow thermal response, will be proposed. A practical implementation guideline for DRL-BEMOC will then be developed based on the experience of the real-life deployments.
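For readers unfamiliar with the algorithm, the sketch below shows, in simplified Python/PyTorch form, what a single advantage actor-critic worker looks like. It assumes a hypothetical gym-style environment (env) wrapping a BEM simulation; the network sizes, reward signal, and rollout length are placeholders. The actual A3C algorithm runs many such workers asynchronously on CPU threads against parallel BEM simulations, each pushing gradients to a shared global network; that machinery is omitted here. This is a rough illustration, not the proposal's implementation.

# Sketch of one advantage actor-critic worker. The `env` interface
# (reset/step wrapping a BEM simulation) is a hypothetical assumption.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ActorCritic(nn.Module):
    """Shared trunk with a policy (actor) head and a value (critic) head."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # action logits
        self.value = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy(h), self.value(h)

def run_rollout_and_update(env, net, optimizer, gamma=0.99, max_steps=96):
    """Collect one rollout from the BEM environment and do one actor-critic update."""
    obs = torch.as_tensor(env.reset(), dtype=torch.float32)
    log_probs, values, rewards = [], [], []
    for _ in range(max_steps):
        logits, value = net(obs)
        dist = Categorical(logits=logits)
        action = dist.sample()
        # Reward would combine energy use and occupant comfort terms.
        next_obs, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        values.append(value.squeeze())
        rewards.append(reward)
        obs = torch.as_tensor(next_obs, dtype=torch.float32)
        if done:
            break
    # Discounted returns computed backwards over the rollout.
    returns, R = [], 0.0
    for r in reversed(rewards):
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns, dtype=torch.float32)
    values = torch.stack(values)
    advantages = returns - values.detach()
    policy_loss = -(torch.stack(log_probs) * advantages).sum()
    value_loss = nn.functional.mse_loss(values, returns)
    optimizer.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    optimizer.step()

In this setup the BEM plays the role of the training environment, so the policy can be learned offline from simulation before the supervisory control strategy is applied to the real building.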