Biography
I am an Assistant Professor in the Electrical and Computer Engineering Department at UCSD. I am also affiliated with the Center for Machine Intelligence, Computing & Security and the Center for Energy Research. My research interests broadly lie in machine learning, dynamical systems and control, and sustainability. My lab focuses on various aspects of creating intelligent systems, with an emphasis on principled learning and control algorithms for sustainable energy and power systems.
Before joining UCSD, I was a postdoctoral fellow in the Computing and Mathematical Sciences Department at Caltech from 2020 to 2021, working with Adam Wierman and Anima Anandkumar. I obtained my Ph.D. from the Electrical and Computer Engineering Department at the University of Washington, advised by Baosen Zhang.
Recent Talks
Demand Response Model Identification and Behavior Forecast
Robust Online Voltage Control with Mistake Guarantee
News
2022/10: Invited talks at Allerton 2022 and INFORMS 2022 on "Stability-Constrained Reinforcement Learning for Distribution System Voltage Control". Check out our slides here.
2022/10: Invited talk at the Pacific Northwest National Laboratory about Learning and Control for Sustainable Energy Systems.
2022/07: Papers accepted to the IEEE Conference on Decision and Control 2022 and IEEE SmartGridComm 2022, preprints available soon.
2022/06: Our work "Demand Response Model Identification and Behavior Forecast with OptNet: A Gradient-based Approach" is presented at the ACM International Conference on Future Energy Systems (ACM e-Energy).
2022/06: Our paper "Robust Online Voltage Control with an Unknown Grid Topology" is presented at the ACM International Conference on Future Energy Systems (ACM e-Energy) and selected as a Best Paper Finalist.
2022/03: Our paper: “Stable and Efficient Shapley Value-Based Reward Reallocation for Multi-Agent Reinforcement Learning of Autonomous Vehicles” is to be presented at ICRA 2022.
2022/03: Our paper on "Stability Constrained Reinforcement Learning for Real-Time Voltage Control" is accepted to the American Control Conference (ACC) 2022. We incorporate constraints from Lyapunov theory into reinforcement learning to ensure the stability of the learned policy, and demonstrate its performance for voltage control in power systems!
2021/12: Our paper: “Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds” is presented at NeurIPS 2021.
2021/05: Our paper: “Safe Reinforcement Learning of Control-affine Systems with Vertex Networks” is presented at the 3rd Annual Learning for Dynamics & Control conference (L4DC).
2021/03: Our paper: “Stable Online Control of Linear Time-varying Systems” is presented at the 3rd Annual Learning for Dynamics & Control conference (L4DC).
2021/01: Our paper "A Practical End-to-End Inventory Management Model with Deep Learning" is accepted to Management Science.
I am co-organizing Control Meets Learning, a virtual seminar series on the intersection of control and learning. Check out our website for more details and recordings.
Prospective Students and Postdocs
We are looking for highly motivated and self-driven Ph.D. students and postdoctoral candidates with a strong mathematical background and a solid foundation in machine learning, control, and energy systems. Both theoretical and empirical research are carried out in the group, and students who can build bridges between the two, as well as between different disciplines, will be a good fit here. A small number of positions are also available for master's and undergraduate research.
Contact
Email: yyshi@eng.ucsd.edu
Office: Jacobs Hall 4801
Lab: Jacobs Hall 1507
Copyright © Yuanyuan Shi. All rights reserved.