Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control

Abstract

Controlling heating, ventilation and air-conditioning (HVAC) systems is crucial to improving demand-side energy efficiency. At the same time, the thermodynamics of buildings and uncertainties regarding human activities make effective management challenging. While the concept of model-free reinforcement learning demonstrates various advantages over existing strategies, the literature relies heavily on value-based methods that can hardly handle complex HVAC systems. This paper conducts experiments to evaluate four actor-critic algorithms in a simulated data centre. The performance is evaluated on the algorithms' ability to maintain thermal stability while increasing energy efficiency, and on their adaptability to weather dynamics. Because of its enormous significance for practical use, special attention is paid to data efficiency. Compared with a model-based controller implemented in EnergyPlus, all the applied algorithms reduce energy consumption by at least 10% while simultaneously keeping the hourly average temperature in the desired range. Robustness tests with different reward functions and weather conditions verify these results. With further training, we also observe a smaller trade-off between thermal stability and energy reduction. Notably, the Soft Actor Critic algorithm achieves stable performance with ten times less data than the on-policy methods. In this regard, we recommend using it in future experiments, due to both its interesting theoretical properties and its data efficiency.
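To make the comfort-versus-energy trade-off described in the abstract concrete, here is a minimal sketch of the kind of reward function such HVAC agents optimise. The function name, comfort band and weights are illustrative assumptions, not the paper's actual reward design.

```python
# Illustrative sketch (not from the paper): a reward that penalises
# energy use and penalises temperatures outside a desired comfort band.
# The band [22 C, 26 C] and the weights are assumed for illustration.

def hvac_reward(temp_c, energy_kwh, low=22.0, high=26.0,
                energy_weight=0.1, comfort_weight=1.0):
    """Reward = -(energy cost) - (penalty for leaving the comfort band)."""
    # Distance (in degrees C) outside the desired temperature range.
    violation = max(low - temp_c, 0.0) + max(temp_c - high, 0.0)
    return -energy_weight * energy_kwh - comfort_weight * violation

# Inside the band, only energy is penalised.
print(hvac_reward(24.0, 5.0))   # -0.5
# Outside the band, the comfort penalty dominates.
print(hvac_reward(28.0, 5.0))   # -2.5
```

Tuning the two weights shifts the trade-off the abstract refers to: a larger `comfort_weight` favours thermal stability, a larger `energy_weight` favours consumption reduction.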


Similar resources

Autonomous HVAC Control, A Reinforcement Learning Approach

Recent high-profile developments of autonomous learning thermostats by companies such as Nest Labs and Honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future. However, the specific learning approaches and methodologies utilised by these devices have never been made public. In fact little inf...


Reinforcement Learning: Model-free

Simply put, reinforcement learning (RL) is a term used to indicate a large family of different algorithms that all share two key properties. First, the objective of RL is to learn appropriate behavior through trial-and-error experience in a task. Second, in RL, the feedback available to the learning agent is restricted to a reward signal that indicates how well the agent is behaving, but does ...
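The two properties above can be sketched with tabular Q-learning, a classic model-free method: behaviour is learned purely by trial and error, and the only feedback is a scalar reward. The 5-state chain environment is an illustrative toy, not from the article.

```python
import random

N_STATES, GOAL = 5, 4          # states 0..4; reward given only at state 4
ACTIONS = (-1, +1)             # move left or right along the chain
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Environment transition: the agent never sees this model directly."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)   # reward signal is the only feedback

random.seed(0)
for _ in range(200):                           # episodes of trial and error
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Q-learning update: values are learned from rewards alone;
        # no model of the environment is ever built.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy moves right (+1) from every non-goal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

The agent discovers the rightward policy without ever being told the transition rules, which is exactly the "model-free" property the snippet describes.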


Reinforcement Learning for Continuous Stochastic Control Problems

This paper is concerned with the problem of Reinforcement Learning (RL) for continuous state space and time stochastic control problems. We state the Hamilton-Jacobi-Bellman equation satisfied by the value function and use a Finite-Difference method for designing a convergent approximation scheme. Then we propose a RL algorithm based on this scheme and prove its convergence to the optimal sol...
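For reference, the Hamilton-Jacobi-Bellman equation the snippet refers to takes the following standard form for a discounted continuous-time stochastic control problem (the notation here is the textbook convention, assumed rather than taken from that paper):

```latex
% HJB equation for a controlled diffusion dx = f(x,u)\,dt + \sigma(x)\,dW
% with reward rate r(x,u) and discount rate \rho:
\rho V(x) = \max_{u}\Big[\, r(x,u) + f(x,u)^{\top}\nabla V(x)
    + \tfrac{1}{2}\,\operatorname{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\big) \Big]
```

A finite-difference scheme, as mentioned in the snippet, discretises the gradient and Hessian terms on a grid to obtain a convergent approximation of the value function V.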


Benchmarking Deep Reinforcement Learning for Continuous Control

Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuou...


Depth Control of Model-Free AUVs via Reinforcement Learning

In this paper, we consider depth control problems of an autonomous underwater vehicle (AUV) for tracking the desired depth trajectories. Due to the unknown dynamical model of the AUV, the problems cannot be solved by most model-based controllers. To this end, we formulate the depth control problems of the AUV as continuous-state, continuous-action Markov decision processes (MDPs) under un...



Journal

Journal title: Applied Energy

Year: 2021

ISSN: 0306-2619, 1872-9118

DOI: https://doi.org/10.1016/j.apenergy.2021.117164