Research

For the most up-to-date list of publications, please see my Google Scholar page.

All Publications

Journal Papers

[13] M. Krstic, L. Bhan, and Y. Shi, “Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs,” accepted to Automatica, early access, 2024.

[12] L. Bhan, Y. Shi, and M. Krstic, “Neural operators for bypassing gain and control computations in PDE backstepping,” accepted to IEEE Transactions on Automatic Control, early access, 2024.

[11] J. Feng, Y. Shi, G. Qu, S. Low, A. Anandkumar, and A. Wierman, “Stability Constrained Reinforcement Learning for Real-Time Voltage Control in Distribution Systems,” IEEE Transactions on Control of Network Systems, 2023.

[10] Y. Shi, and B. Xu, “End-to-End Demand Response Model Identification and Baseline Estimation with Deep Learning,” IET Renewable Power Generation, 2023.

[9] Y. Bian, N. Zheng, Y. Zheng, B. Xu, and Y. Shi, “Predicting Strategic Energy Storage Behaviors,” IEEE Transactions on Smart Grid, 2023.

[8] Y. Zhang, S. Dey, and Y. Shi, “Optimal pricing to manage EV charging power in bilevel power-transportation networks,” IEEE Transactions on Smart Grid, 2023.

[7] J. Feng, W. Cui, J. Cortes, and Y. Shi, “Bridging Transient and Steady-State Performance in Voltage Control: A Reinforcement Learning Approach with Safe Gradient Flow,” IEEE Control Systems Letters, 2023.

[6] M. Qi, Y. Shi, Y. Qi, C. Ma, R. Yuan, D. Wu, and M. Z. Shen, “A Practical End-to-End Inventory Management Model with Deep Learning,” Management Science, to appear.

[5] Y. Chen, Y. Shi, and B. Zhang, “Data-driven optimal voltage regulation using input convex neural networks,” Electric Power Systems Research, vol. 189, 2020.

[4] Y. Shi, B. Xu, Y. Tan, D. Kirschen, and B. Zhang, “Optimal battery control under cycle aging mechanisms in pay for performance settings,” IEEE Transactions on Automatic Control, vol. 64(6), pp. 2324-2339, 2018.

[3] B. Xu, Y. Shi, D. Kirschen, and B. Zhang, “Optimal battery participation in frequency regulation markets,” IEEE Transactions on Power Systems, vol. 33(6), pp. 6715-6725, 2018.

[2] Y. Shi, B. Xu, D. Wang, and B. Zhang, “Using battery storage for peak shaving and frequency regulation: Joint optimization for superlinear gains,” IEEE Transactions on Power Systems, vol. 33(3), pp. 2882-2894, 2018.

[1] L. Zhou, Y. Shi, J. Wang, and P. Yang, “A balanced heuristic mechanism for multirobot task allocation of intelligent warehouses,” Mathematical Problems in Engineering, 2014.

Conference Papers

[21] L. Bhan, Y. Shi, and M. Krstic, “Operator Learning for Nonlinear Adaptive Control,” Learning for Dynamics & Control Conference (L4DC), 2023.

[20] N. Zheng, X. Liu, B. Xu, and Y. Shi, “Energy Storage Price Arbitrage via Opportunity Value Function Prediction,” IEEE Power & Energy Society General Meeting (PESGM), 2023.

[19] K. Cheng, Y. Chen, and Y. Shi, “GridViz: a Toolkit for Interactive and Multi-Modal Power Grid Data Visualization,” IEEE Power & Energy Society General Meeting (PESGM), 2023.

[18] C. Zhang, Y. Shi, and Y. Chen, “BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2023.

[17] Y. Shi, Z. Li, H. Yu, D. Steeves, A. Anandkumar, and M. Krstic, “Machine Learning Accelerated PDE Backstepping Observers,” IEEE Conference on Decision and Control (CDC), 2022.

[16] K. Cheng, Y. Bian, Y. Shi, and Y. Chen, “Carbon-Aware EV Charging,” IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), 2022.

[15] Y. Chen, Y. Shi, D. Arnold, and S. Peisert, “SAVER: Safe Learning-Based Controller for Real-Time Voltage Regulation,” IEEE Power & Energy Society General Meeting (PESGM), 2022.

[14] C. Yeh, J. Yu, Y. Shi, and A. Wierman, “Robust online voltage control with an unknown grid topology,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2022.

[13] Y. Bian, N. Zheng, Y. Zheng, B. Xu, and Y. Shi, “Demand response model identification and behavior forecast with OptNet: a gradient-based approach,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2022.

[12] Y. Shi, G. Qu, S. Low, A. Anandkumar, and A. Wierman, “Stability Constrained Reinforcement Learning for Real-Time Voltage Control,” American Control Conference (ACC), 2022.

[11] S. Han, H. Wang, S. Su, Y. Shi, and F. Miao, “Stable and Efficient Shapley Value-Based Reward Reallocation for Multi-Agent Reinforcement Learning of Autonomous Vehicles,” IEEE International Conference on Robotics and Automation (ICRA), 2022.

[10] Y. Huang, H. Zhang, Y. Shi, Z. Kolter, and A. Anandkumar, “Training certifiably robust neural networks with efficient local Lipschitz bounds,” Advances in Neural Information Processing Systems (NeurIPS), 2021.

[9] G. Qu, Y. Shi, S. Lale, A. Anandkumar, and A. Wierman, “Stable online control of linear time-varying systems,” Learning for Dynamics and Control (L4DC), 2021.

[8] L. Zheng, Y. Shi, L. Ratliff, and B. Zhang, “Safe reinforcement learning of control-affine systems with vertex networks,” Learning for Dynamics and Control (L4DC), 2021.

[7] Y. Shi, and B. Zhang, “Multi-agent reinforcement learning in Cournot games,” IEEE Conference on Decision and Control (CDC), 2020.

[6] D. Mankowitz, N. Levine, R. Jeong, Y. Shi, J. Kay, A. Abdolmaleki, J. Springenberg, T. Mann, T. Hester, M. Riedmiller, “Robust Reinforcement Learning for Continuous Control with Model Misspecification,” International Conference on Learning Representations (ICLR), 2020.

[5] Y. Chen, Y. Shi, and B. Zhang, “Optimal Control Via Neural Networks: A Convex Approach,” International Conference on Learning Representations (ICLR), 2019.

[4] Y. Shi, B. Xu, Y. Tan, and B. Zhang, “A convex cycle-based degradation model for battery energy storage planning and operation,” American Control Conference (ACC), 2018.

[3] B. Xu, Y. Shi, D. Kirschen, and B. Zhang, “Optimal regulation response of batteries under cycle aging mechanisms,” IEEE Conference on Decision and Control (CDC), 2017.

[2] Y. Chen, Y. Shi, and B. Zhang, “Modeling and optimization of complex building energy systems with deep neural networks,” Asilomar Conference on Signals, Systems, and Computers, 2017.

[1] Y. Shi, B. Xu, B. Zhang, and D. Wang, “Leveraging energy storage to optimize data center electricity cost in emerging power markets,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2016.

Workshop Presentations

A. Pan, Y. Lee, H. Zhang, Y. Chen, and Y. Shi, “Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training,” in Reinforcement Learning for Real Life Workshop, International Conference on Machine Learning (ICML RL4RL), 2021.

Y. Shi, K. Xiao, D.J. Mankowitz, R. Jeong, N. Levine, S. Gowal, T. Mann, and T. Hester, “Data-Driven Robust Reinforcement Learning for Continuous Control,” in Safety and Robustness in Decision Making Workshop, Neural Information Processing Systems (NeurIPS SRDM), 2019.

K. Xiao, S. Gowal, T. Hester, R. Jeong, D.J. Mankowitz, Y. Shi, and T.W. Weng, “Learning Neural Dynamics Simulators With Adversarial Specification Training,” in Safety and Robustness in Decision Making Workshop, Neural Information Processing Systems (NeurIPS SRDM), 2019.