Americans spend over 90% of their time indoors, where air quality significantly impacts health, productivity, well-being, and learning. At the same time, residential and commercial buildings are among the largest sources of carbon dioxide and other greenhouse gas emissions, accounting for 74% of U.S. electricity use and more than one-third of total U.S. emissions in 2021.
Therefore, AI for building system control represents a great opportunity to advance both public health and sustainability goals. However, optimizing jointly for indoor air quality and building energy efficiency involves a challenging tradeoff. For example, since Spring 2020, UC San Diego has operated its buildings at maximum ventilation rates during working hours, increasing their energy consumption to 2 to 2.5 times the nominal level. Such an operating strategy is clearly unsustainable, given skyrocketing electricity costs, excess carbon emissions, and the reduced lifetime and integrity of the mechanical systems. Yet reducing ventilation rates risks poor indoor air quality and increased respiratory infections.
It is therefore imperative to develop an integrated control framework that ensures a comfortable and healthy indoor environment while minimizing energy consumption.
Energy-efficient and Healthy Buildings: A Differentiable PDE Approach
Our PDE-based building modeling and control framework consists of two tasks [1]:
In the building model learning task, the goal is to estimate the unknown building parameters that govern the fluid dynamics. The airflow velocity field is modeled by the Navier-Stokes equations, and the dynamics of CO2 concentration and temperature are modeled with convection-diffusion PDEs; a representative form of the governing equations is sketched after the task descriptions below.
In the building control task, the goal is to minimize energy consumption while ensuring thermal comfort and air quality, by optimizing the supply airflow rate and supply air temperature set-points. We formulate both the model learning and control tasks as PDE-constrained optimization problems, which we solve via the adjoint method (see the toy differentiable-simulation sketch below).
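For concreteness, a representative form of the governing equations is sketched below. The exact formulation, boundary conditions, and coefficients follow [1]; the diffusivity k and occupant source term S here are generic placeholders.

```latex
% Incompressible Navier-Stokes for the airflow velocity field u (density rho,
% pressure p, kinematic viscosity nu), plus convection-diffusion transport of
% the CO2 concentration field C(x, t); temperature obeys an analogous PDE.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\tfrac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},
\qquad \nabla \cdot \mathbf{u} = 0,
\qquad
\frac{\partial C}{\partial t} + \mathbf{u} \cdot \nabla C
  = \nabla \cdot (k \nabla C) + S
```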
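To make the adjoint-based optimization concrete, here is a minimal sketch, not the implementation of [1]: it rolls out a toy 1-D convection-diffusion model of CO2 and differentiates an energy-plus-discomfort cost with respect to the supply airflow velocity using JAX reverse-mode autodiff, which yields the same gradient the adjoint method computes. All coefficients, the cubic fan-power model, and the comfort threshold are illustrative assumptions.

```python
# Toy PDE-constrained control via differentiable simulation (NOT the code of [1]).
# Reverse-mode autodiff through the rollout gives the adjoint-method gradient
# of the cost with respect to the supply airflow velocity u.
import jax
import jax.numpy as jnp

N = 64                       # grid points on a unit-length domain
dx = 1.0 / N
dt, steps = 1e-3, 1000       # explicit Euler; CFL-stable for the u values reached
k = 1e-3                     # diffusivity (illustrative)
source = 1.0                 # CO2 generation by occupants (illustrative)
c_supply = 0.4               # CO2 level of fresh supply air (normalized)

def rollout_cost(u):
    """Simulate CO2 transport at supply velocity u; return energy + discomfort."""
    c = jnp.ones(N)                                    # initial CO2 field
    def step(c, _):
        c_up = jnp.concatenate([jnp.array([c_supply]), c[:-1]])
        adv = -u * (c - c_up) / dx                     # first-order upwind convection
        dif = k * (jnp.roll(c, 1) - 2.0 * c + jnp.roll(c, -1)) / dx**2
        return c + dt * (adv + dif + source), None
    c, _ = jax.lax.scan(step, c, None, length=steps)
    energy = u**3                                      # fan power ~ flow rate cubed
    discomfort = jnp.mean(jnp.maximum(c - 1.2, 0.0) ** 2)
    return energy + 10.0 * discomfort

grad_fn = jax.jit(jax.grad(rollout_cost))
u = jnp.array(0.5)                                     # initial airflow set-point
for _ in range(100):                                   # plain gradient descent
    u = u - 0.01 * grad_fn(u)
print(f"optimized supply velocity: {float(u):.3f}")
```

In the full 3-D setting, the adjoint PDEs are derived and solved explicitly; differentiating through a discretized solver, as above, is the discrete analogue of that computation.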
Compared to existing control methods, such as rule-based building control policies and model predictive control (MPC) with learned ODE models, our PDE-based approach achieves a significant reduction in energy consumption while satisfying occupants' comfort and health constraints at all times. Compared to the rule-based maximum-airflow policy, our method achieves a 52.6% reduction in energy consumption. It also saves 36.4% and 10.3% of energy compared to MPC with learned ODE models and model-based RL with learned ODE models, respectively.
Data-driven Operator Learning to Accelerate Building PDE Modeling and Control
In follow-up work [2], we develop data-driven operator learning methods that serve as fast surrogates for the building PDE dynamics, accelerating both model evaluation and ventilation control.
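As a rough illustration of the idea (not the architecture of [2]), the sketch below fits a small neural network to map the current CO2 field and airflow set-point to the next CO2 field, so a controller can query the cheap surrogate instead of the full PDE solver. The network shape, residual parameterization, and training-data interface are all assumptions.

```python
# Minimal operator-learning sketch (illustrative; NOT the model of [2]):
# fit a surrogate G_theta: (CO2 field c, control u) -> next CO2 field,
# trained on (c, u, c_next) pairs generated by the full PDE solver.
import jax
import jax.numpy as jnp

N = 64                                            # grid size of the CO2 field
key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = (
    0.1 * jax.random.normal(key1, (N + 1, 128)),  # input layer weights
    0.1 * jax.random.normal(key2, (128, N)),      # output layer weights
)

def surrogate(params, c, u):
    """One-step prediction of the CO2 field as a residual update."""
    x = jnp.concatenate([c, jnp.atleast_1d(u)])
    return c + jnp.tanh(x @ params[0]) @ params[1]

def loss(params, c_batch, u_batch, c_next_batch):
    pred = jax.vmap(surrogate, in_axes=(None, 0, 0))(params, c_batch, u_batch)
    return jnp.mean((pred - c_next_batch) ** 2)

@jax.jit
def sgd_step(params, batch, lr=1e-3):
    grads = jax.grad(loss)(params, *batch)
    return tuple(p - lr * g for p, g in zip(params, grads))

# Train by looping sgd_step over solver-generated batches; at control time,
# the optimizer queries `surrogate` instead of the expensive PDE rollout.
```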
Open-source Simulator and Dataset for Benchmarking Building Learning & Control Methods
One challenge in building control research is the lack of a benchmark simulation environment for developing and evaluating different RL algorithms with realistic building models. In recent work, we proposed “BEAR” [3], a physics-principled Building Environment for Control and Reinforcement Learning. The platform allows researchers to benchmark both model-based and model-free controllers on a broad collection of standard building models in Python, without co-simulation with external building simulators. BEAR is available at https://github.com/chz056/BEAR
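A minimal Gym-style interaction loop is sketched below to show how such an environment is typically driven. The import path, the environment constructor and its arguments, and the reward convention are placeholders rather than BEAR's actual API, which is documented in the repository.

```python
# Hypothetical Gym-style usage sketch for a building RL environment.
# NOTE: `BuildingEnv` and its constructor arguments are placeholders; see
# https://github.com/chz056/BEAR for the actual class names and options.
from bear import BuildingEnv                 # hypothetical import path

env = BuildingEnv(building="OfficeSmall",    # hypothetical arguments
                  weather="Hot_Dry",
                  location="Tucson")
obs = env.reset()
episode_energy = 0.0
for _ in range(24 * 12):                     # e.g., one day at 5-minute steps
    action = env.action_space.sample()       # replace with an RL/MPC controller
    obs, reward, done, info = env.step(action)
    episode_energy += -reward                # assuming reward penalizes energy use
    if done:
        break
```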
We have also released BEAR-Data [4], real-world building operation data from the UC San Diego campus, available at https://ucsdsmartbuilding.github.io/DATA.html
Last but not least, together with a research team at Caltech, we have incorporated the building simulator, along with simulators for data centers, EV charging, energy storage, and electricity generation, into SustainGym, a suite of RL environments for sustainable energy system control [5].
References:
[1] Yuexin Bian, Xiaohan Fu, Rajesh K. Gupta, and Yuanyuan Shi, "Ventilation and Temperature Control for Energy-efficient and Healthy Buildings: A Differentiable PDE Approach", Applied Energy, Volume 372, 2024.
[2] Yuexin Bian, Oliver Schmidt, and Yuanyuan Shi, "Data-driven Operator Learning for Energy-efficient Building Ventilation Control", under review, 2025.
[3] Chi Zhang, Yuanyuan Shi, and Yize Chen, "BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning", ACM International Conference on Future Energy Systems (ACM e-Energy), 2023.
[4] Yuexin Bian, Xiaohan Fu, Bo Liu, Rohith Rachala, Rajesh K. Gupta, and Yuanyuan Shi, "BEAR-Data: Analysis and Applications of an Open Multizone Building Dataset", ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (ACM BuildSys), 2023.
[5] Christopher Yeh, Victor Li, Rajeev Datta, Julio Arroyo, Nicolas Christianson, Chi Zhang, Yize Chen, Mohammad Mehdi Hosseini, Azarang Golmohammadi, Yuanyuan Shi, Yisong Yue, and Adam Wierman, "SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems", Conference on Neural Information Processing Systems (NeurIPS), 2023.