Prof. Yuanyuan Shi
Email: yyshi@ucsd.edu
Office Hour: Monday 2-3pm, FAH 3009
Daniela Rojas
Email: d2rojas@ucsd.edu
Office Hour: Wednesday 5-6pm, FAH 2009
Rich Pai
Email: cpai@ucsd.edu
Office Hour: Thursday 1-2pm, FAH 2009
Class Time and Location: MW 3:30pm-4:50pm, EBU1 Room#2315
Course Introduction & Schedule
This new interdisciplinary graduate course bridges power systems with modern AI and data science. Students will learn both the fundamentals of power systems and advanced AI techniques; the Winter 2026 offering focuses on how Agentic AI and Large Language Models (LLMs) can transform the operation, optimization, and control of future smart grids.
Power System Topics
Basics of Power Systems
Demand Response and Load Flexibility
Renewable Power and Integration
Economic Dispatch and Unit Commitment
Power Flows
Introduction to Electricity Markets
Introduction to Power System Control
Textbook: Power Systems: Fundamental Concepts and the Transition to Sustainability, Daniel Kirschen
Agentic AI and LLM Topics
LLM Fundamentals I – Transformers & Tokens
Vaswani, A. et al. (2017). Attention Is All You Need. NeurIPS 30. https://arxiv.org/abs/1706.03762
Devlin, J. et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL. https://arxiv.org/abs/1810.04805
Kudo, T. & Richardson, J. (2018). SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing. EMNLP (System Demonstrations). https://arxiv.org/abs/1808.06226
Holtzman, A. et al. (2019). The Curious Case of Neural Text Degeneration. ICLR. https://arxiv.org/abs/1904.09751
LLM Fundamentals II – Scaling Laws & Pre-training
Brown, T. et al. (2020). Language Models Are Few-Shot Learners (GPT-3). NeurIPS 33. https://arxiv.org/abs/2005.14165
Kaplan, J. et al. (2020). Scaling Laws for Neural Language Models. arXiv preprint. https://arxiv.org/abs/2001.08361
Raffel, C. et al. (2020). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (T5). JMLR 21. https://arxiv.org/abs/1910.10683
Prompt Engineering, Embeddings & Retrieval-Augmented Generation (RAG)
Wei, J. et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in LLMs. NeurIPS 35. https://arxiv.org/abs/2201.11903
Wang, X. et al. (2022). Self-Consistency Improves Chain-of-Thought Reasoning in Language Models. ICLR 2023. https://arxiv.org/abs/2203.11171
Lewis, P. et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP. NeurIPS 33. https://arxiv.org/abs/2005.11401
Fine-tuning
Hu, E. et al. (2021). LoRA: Low-Rank Adaptation of Large Language Models. ICLR 2022. https://arxiv.org/abs/2106.09685
Ding, N. et al. (2024). Parameter-Efficient Fine-Tuning: A Survey. IEEE TPAMI (early access). https://arxiv.org/abs/2403.14608
Alignment & RL
Ouyang, L. et al. (2022). Training Language Models to Follow Instructions with Human Feedback (InstructGPT). NeurIPS 35. https://arxiv.org/abs/2203.02155
Rafailov, R. et al. (2023). Direct Preference Optimization: Your Language Model Is Secretly a Reward Model. NeurIPS 2023. https://arxiv.org/abs/2305.18290
Agents I – Planning, Tool Use & API Calling
Yao, S. et al. (2022). ReAct: Synergizing Reasoning and Acting in LLMs. ICLR 2023. https://arxiv.org/abs/2210.03629
Schick, T. et al. (2023). Toolformer: Language Models Can Teach Themselves to Use Tools. NeurIPS 2023. https://arxiv.org/abs/2302.04761
Erdoğan, Y. et al. (2025). Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks. ICML 2025. https://arxiv.org/abs/2503.09572
Agents II – Memory, Reflection & Evaluation
Shinn, N. et al. (2023). Reflexion: Language Agents with Verbal Reinforcement Learning. NeurIPS 2023. https://arxiv.org/abs/2303.11366
Park, J. S. et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior. UIST 2023. https://arxiv.org/abs/2304.03442