Research
For an up-to-date list of publications, please see my Google Scholar page.
All Publications
Preprints
Yujia Huang, Ivan Dario Jimenez Rodriguez, Huan Zhang, Yuanyuan Shi, and Yisong Yue, "FI-ODE: Certified and Robust Forward Invariance in Neural ODEs", arXiv:2210.16940. [PDF]
Wenqi Cui, Yan Jiang, Baosen Zhang, and Yuanyuan Shi, "Structured Neural-PI Control for Networked Systems: Stability and Steady-State Optimality Guarantees", arXiv:2206.00261. [PDF]
Sahin Lale, Yuanyuan Shi, Guannan Qu, Kamyar Azizzadenesheli, Adam Wierman, and Anima Anandkumar, "KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems", arXiv:2206.01704. [PDF]
Jeffrey Ma, Alistair Letcher, Florian Schäfer, Yuanyuan Shi, and Anima Anandkumar, "Polymatrix Competitive Gradient Descent", arXiv:2111.08565. [PDF]
Journal Papers
Published:
L. Zhou, Y. Shi, J. Wang, and P. Yang, “A balanced heuristic mechanism for multirobot task allocation of intelligent warehouses,” Mathematical Problems in Engineering, 2014.
Y. Shi, B. Xu, D. Wang, and B. Zhang, “Using battery storage for peak shaving and frequency regulation: Joint optimization for superlinear gains,” IEEE Transactions on Power Systems, vol. 33(3), pp. 2882-2894, 2018.
B. Xu, Y. Shi, D. Kirschen, and B. Zhang, “Optimal battery participation in frequency regulation markets,” IEEE Transactions on Power Systems, vol. 33(6), pp. 6715-6725, 2018.
Y. Shi, B. Xu, Y. Tan, D. Kirschen, and B. Zhang, “Optimal battery control under cycle aging mechanisms in pay for performance settings,” IEEE Transactions on Automatic Control, vol. 64(6), pp. 2324-2339, 2019.
Y. Chen, Y. Shi, and B. Zhang, “Data-driven optimal voltage regulation using input convex neural networks,” Electric Power Systems Research, vol. 189, 2020.
M. Qi, Y. Shi, Y. Qi, C. Ma, R. Yuan, D. Wu, and M. Z. Shen, “A Practical End-to-End Inventory Management Model with Deep Learning,” Management Science, to appear.
Under Review & Revisions:
J. Feng, W. Cui, J. Cortes, and Y. Shi, “Bridging Transient and Steady-State Performance in Voltage Control: A Reinforcement Learning Approach with Safe Gradient Flow,” IEEE Control Systems Letters, under review.
M. Krstic, L. Bhan, and Y. Shi, “Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs,” Automatica, under review.
L. Bhan, Y. Shi, and M. Krstic, “Neural operators for bypassing gain and control computations in PDE backstepping,” IEEE Transactions on Automatic Control, under review.
J. Feng, Y. Shi, G. Qu, S. Low, A. Anandkumar, and A. Wierman, “Stability Constrained Reinforcement Learning for Real-Time Voltage Control in Distribution Systems,” IEEE Transactions on Control of Network Systems, under review.
Y. Shi and B. Xu, “End-to-End Demand Response Model Identification and Baseline Estimation with Deep Learning,” IET Renewable Power Generation, under review.
Y. Bian, N. Zheng, Y. Zheng, B. Xu, and Y. Shi, “Predicting Strategic Energy Storage Behaviors,” IEEE Transactions on Smart Grid, under review.
Y. Zhang, S. Dey, and Y. Shi, “Optimal pricing to manage EV charging power in bilevel power-transportation networks,” IEEE Transactions on Smart Grid, under review.
Conference Papers
Published:
Y. Shi, B. Xu, B. Zhang, and D. Wang, “Leveraging energy storage to optimize data center electricity cost in emerging power markets,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2016.
Y. Chen, Y. Shi, and B. Zhang, “Modeling and optimization of complex building energy systems with deep neural networks,” Asilomar Conference on Signals, Systems, and Computers, 2017.
B. Xu, Y. Shi, D. Kirschen, and B. Zhang, “Optimal regulation response of batteries under cycle aging mechanisms,” IEEE Conference on Decision and Control, 2017.
Y. Shi, B. Xu, Y. Tan, and B. Zhang, “A convex cycle-based degradation model for battery energy storage planning and operation,” American Control Conference (ACC), 2018.
Y. Chen, Y. Shi, and B. Zhang, “Optimal Control Via Neural Networks: A Convex Approach,” International Conference on Learning Representations (ICLR), 2019.
Y. Shi, K. Xiao, D.J. Mankowitz, R. Jeong, N. Levine, S. Gowal, T. Mann, and T. Hester, “Data-Driven Robust Reinforcement Learning for Continuous Control,” in Safety and Robustness in Decision Making Workshop, Neural Information Processing Systems (NeurIPS SRDM), 2019.
K. Xiao, S. Gowal, T. Hester, R. Jeong, D.J. Mankowitz, Y. Shi, and T.W. Weng, “Learning Neural Dynamics Simulators With Adversarial Specification Training,” in Safety and Robustness in Decision Making Workshop, Neural Information Processing Systems (NeurIPS SRDM), 2019.
D. Mankowitz, N. Levine, R. Jeong, Y. Shi, J. Kay, A. Abdolmaleki, J. Springenberg, T. Mann, T. Hester, and M. Riedmiller, “Robust Reinforcement Learning for Continuous Control with Model Misspecification,” International Conference on Learning Representations (ICLR), 2020.
Y. Shi and B. Zhang, “Multi-agent reinforcement learning in Cournot games,” IEEE Conference on Decision and Control (CDC), 2020.
L. Zheng, Y. Shi, L. Ratliff, and B. Zhang, “Safe reinforcement learning of control-affine systems with vertex networks,” Learning for Dynamics and Control (L4DC), 2021.
G. Qu, Y. Shi, S. Lale, A. Anandkumar, and A. Wierman, “Stable online control of linear time-varying systems,” Learning for Dynamics and Control (L4DC), 2021.
A. Pan, Y. Lee, H. Zhang, Y. Chen, and Y. Shi, “Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training,” in Reinforcement Learning for Real Life Workshop, International Conference on Machine Learning (ICML RL4RL), 2021.
Y. Huang, H. Zhang, Y. Shi, Z. Kolter, and A. Anandkumar, “Training certifiably robust neural networks with efficient local Lipschitz bounds,” Advances in Neural Information Processing Systems (NeurIPS), 2021.
S. Han, H. Wang, S. Su, Y. Shi, and F. Miao, “Stable and Efficient Shapley Value-Based Reward Reallocation for Multi-Agent Reinforcement Learning of Autonomous Vehicles,” IEEE International Conference on Robotics and Automation (ICRA), 2022.
Y. Shi, G. Qu, S. Low, A. Anandkumar, and A. Wierman, “Stability Constrained Reinforcement Learning for Real-Time Voltage Control,” American Control Conference (ACC), 2022.
Y. Bian, N. Zheng, Y. Zheng, B. Xu, and Y. Shi, “Demand response model identification and behavior forecast with OptNet: a gradient-based approach,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2022.
C. Yeh, J. Yu, Y. Shi, and A. Wierman, “Robust online voltage control with an unknown grid topology,” ACM International Conference on Future Energy Systems (ACM e-Energy), 2022.
Y. Chen, Y. Shi, D. Arnold, and S. Peisert, “SAVER: Safe Learning-Based Controller for Real-Time Voltage Regulation,” IEEE Power & Energy Society General Meeting (PESGM), 2022.
K. Cheng, Y. Bian, Y. Shi, and Y. Chen, “Carbon-Aware EV Charging,” IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), 2022.
Y. Shi, Z. Li, H. Yu, D. Steeves, A. Anandkumar, and M. Krstic, “Machine Learning Accelerated PDE Backstepping Observers,” IEEE Conference on Decision and Control (CDC), 2022.
Accepted for Publication (Not Yet Presented):
L. Bhan, Y. Shi, and M. Krstic, “Operator Learning for Nonlinear Adaptive Control,” accepted at Learning for Dynamics and Control (L4DC), 2023.
N. Zheng, X. Liu, B. Xu, and Y. Shi, “Energy Storage Price Arbitrage via Opportunity Value Function Prediction,” accepted at IEEE Power & Energy Society General Meeting (PESGM), 2023.
K. Cheng, Y. Chen, and Y. Shi, “GridViz: A Toolkit for Interactive and Multi-Modal Power Grid Data Visualization,” accepted at IEEE Power & Energy Society General Meeting (PESGM), 2023.
C. Zhang, Y. Shi, and Y. Chen, “BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning,” accepted at ACM International Conference on Future Energy Systems (ACM e-Energy), 2023.