ML Accelerated PDE Control
To deploy modern control systems in domains such as energy, transportation, and robotics, we need algorithms that operate in real time with stability and robustness guarantees. However, many algorithms in control theory require solving challenging differential and integral equations, which demands significant computational effort and makes applying these algorithms in the real world difficult.
In a series of recent works, we introduce a framework for eliminating the computation of controller gain functions in PDE control. We learn the nonlinear operator mapping the plant parameters to the control gains with neural operators: neural networks that learn mappings between function spaces. We provide global exponential closed-loop stability guarantees under an NN approximation of the feedback gains. In existing PDE backstepping, finding the gain kernel requires a (one-time, offline) solution of an integral equation; the neural operator (NO) approach we propose instead learns the mapping from the functional coefficients of the plant PDE to the kernel function, using a sufficiently large number of offline numerical solutions of the kernel integral equation, computed for a sufficiently rich set of the PDE model's functional coefficients. We prove that the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions admits a neural operator approximation of arbitrarily high accuracy.
Neural operators for bypassing gain and control computations in PDE backstepping
Hyperbolic PDE: the top row showcases open-loop instability; the bottom row shows the stable closed-loop state response under the learned controller.
Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs
Reaction-diffusion PDE: the top row shows the mapping from system parameters to the controller gain kernel, learned via neural operator; the bottom row showcases closed-loop solutions with the learned kernel.
Operator Learning for Nonlinear Adaptive Control
L. Bhan, Y. Shi, and M. Krstic, "Neural operators for bypassing gain and control computations in PDE backstepping," IEEE Transactions on Automatic Control, under review.
M. Krstic, L. Bhan, and Y. Shi, "Neural operators of backstepping controller and observer gain functions for reaction-diffusion PDEs," Automatica, under review.
L. Bhan, Y. Shi, and M. Krstic, "Operator Learning for Nonlinear Adaptive Control," accepted at the Annual Learning for Dynamics & Control Conference (L4DC), 2023.