Operator Learning for Control

Let machine learning learn the complete maps from models to model-based designs, automate the deployment of those maps in control algorithms, and reduce the algorithms' computational burden.

To deploy modern control systems successfully, in areas such as energy, transportation, and robotics, we need algorithms that operate in real time with stability and robustness guarantees. However, many algorithms in control theory require solving challenging differential and integral equations at significant computational cost, which makes applying them in the real world difficult.

In a series of recent works, we introduce a framework that eliminates the computation of controller gain functions in PDE control. We learn the nonlinear operator from the plant parameters to the control gains with neural operators, i.e., neural networks that learn mappings between function spaces, and we provide global exponential closed-loop stability guarantees under the NN approximation of the feedback gains. In existing PDE backstepping, finding the gain kernel requires one offline solution of an integral equation. The neural operator (NO) approach we propose instead learns the mapping from the functional coefficients of the plant PDE to the kernel function, using a sufficiently large number of offline numerical solutions of the kernel integral equation, computed for a large enough number of different functional coefficients of the PDE model. We prove the existence of a neural operator approximation, with arbitrarily high accuracy, of the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions.
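To make the offline-training, online-deployment split concrete, below is a minimal sketch, not the authors' implementation, of learning the map from a sampled plant coefficient function to a gain function with a DeepONet-style neural operator in PyTorch. The grid size, network widths, and the placeholder fake_dataset (which stands in for the offline numerical kernel solver) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the papers' code): a DeepONet-style neural
# operator that maps a sampled plant coefficient lambda(x) to a gain function
# k(1, y) on the same grid. Grid size, widths, and data are assumptions.
import torch
import torch.nn as nn

N = 64  # grid points discretizing both lambda(x) and the gain k(1, y)

class DeepONet(nn.Module):
    """Branch net encodes the sampled coefficient; trunk net encodes the query point y."""
    def __init__(self, n_sensors=N, width=128, p=64):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, p),
        )
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, p),
        )

    def forward(self, lam, y):
        # lam: (batch, n_sensors) samples of the coefficient function
        # y:   (batch, n_query, 1) points where the gain function is evaluated
        b = self.branch(lam)                      # (batch, p)
        t = self.trunk(y)                         # (batch, n_query, p)
        return torch.einsum("bp,bqp->bq", b, t)   # (batch, n_query)

def fake_dataset(n_samples=1024):
    """Placeholder for the offline numerical solutions of the kernel integral equation."""
    lam = torch.randn(n_samples, N)               # sampled coefficient functions
    y = torch.linspace(0.0, 1.0, N).view(1, N, 1).repeat(n_samples, 1, 1)
    k = torch.cumsum(lam, dim=1) / N              # stand-in for the true kernel solver
    return lam, y, k

model = DeepONet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, y, k = fake_dataset()

for epoch in range(200):                          # offline training of the operator
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(lam, y), k)
    loss.backward()
    opt.step()

# Online use: one forward pass replaces solving the kernel equation for a new plant.
with torch.no_grad():
    k_hat = model(lam[:1], y[:1])                 # approximate gain kernel k(1, .)
```

Once trained, producing the gain for a new plant coefficient costs a single forward pass, which is what removes the online burden of solving the kernel equation.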

Hyperbolic PDE [1]: the top row shows open-loop instability; the bottom row shows the stable closed-loop state response under the learned controller.

Reaction-diffusion PDE [2]: the top row shows the mapping from system parameters to the controller gain kernel, which is learned via a neural operator; the bottom row shows closed-loop solutions with the learned kernel.
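As an illustration of how such a learned kernel is then used, below is a hedged sketch, under the standard setup of boundary-controlled 1-D reaction-diffusion backstepping, of evaluating the feedback law U(t) = ∫_0^1 k(1,y) u(y,t) dy by quadrature; the kernel values and state snapshot are hypothetical placeholders rather than outputs of the trained operators from the papers.

```python
# Illustrative sketch: applying a (learned) gain kernel k(1, y) in the
# backstepping boundary feedback U(t) = integral_0^1 k(1, y) u(y, t) dy.
# The kernel and state values below are hypothetical placeholders.
import numpy as np

N = 64
ygrid = np.linspace(0.0, 1.0, N)

def boundary_feedback(k1, u_snapshot):
    """Trapezoidal quadrature of the backstepping feedback integral."""
    return np.trapz(k1 * u_snapshot, ygrid)

k1 = np.ones(N)                     # stand-in for the neural-operator output k(1, y)
u_snapshot = np.sin(np.pi * ygrid)  # stand-in for the measured/observed PDE state
U = boundary_feedback(k1, u_snapshot)
```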

References:

[1] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural operators for bypassing gain and control computations in PDE backstepping," IEEE Transactions on Automatic Control, early access, 2024.

[2] Miroslav Krstic, Luke Bhan, and Yuanyuan Shi, "Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs," Automatica, early access, 2024.

[3] Luke Bhan, Yuanyuan Shi, Iasson Karafyllis, Miroslav Krstic, and James B. Rawlings, "Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D," American Control Conference (ACC), 2024.

[4] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Operator Learning for Nonlinear Adaptive Control," Annual Learning for Dynamics & Control Conference (L4DC), 2023.

[5] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural Operators for Hyperbolic PDE Backstepping Feedback Laws," 62nd IEEE Conference on Decision and Control (CDC), 2023.

[6] Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Neural operators for hyperbolic PDE backstepping kernels," 62nd IEEE Conference on Decision and Control (CDC), 2023.

[7] Yuanyuan Shi, Zongyi Li, Huan Yu, Drew Steeves, Anima Anandkumar, and Miroslav Krstic, "Machine learning accelerated PDE backstepping observers," 61st IEEE Conference on Decision and Control (CDC), 2022.

[8] Maxence Lamarque, Luke Bhan, Yuanyuan Shi, and Miroslav Krstic, "Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE," under review in Automatica, 2024.