Abstract

Control systems operating in real-world environments often face disturbances arising from measurement noise and model mismatch. These factors can significantly degrade the performance and safety of the system. In this thesis, we aim to leverage data to derive near-optimal solutions to robust control and optimization problems despite these uncertainties.

The first part focuses on data-driven robust optimal control of linear systems for trajectory tracking under measurement noise. Existing data-driven methods based on behavioral system theory use historical system trajectories to formulate robust optimal control problems and employ regularization terms to avoid overfitting. However, the corresponding suboptimality bounds are conservative because the regularization terms complicate the prediction error analysis. To overcome this problem, we derive two prediction error bounds that can be embedded in regularization-free robust control methods: one is obtained via bootstrap methods when resampling is affordable, and the other relies on a perturbation analysis of the behavioral model. These bounds enable the design of open-loop control inputs and closed-loop controllers that minimize an upper bound on the worst-case cost while ensuring robust constraint satisfaction, with suboptimality bounds that decrease to zero as the noise diminishes.

The second part of this thesis addresses constrained optimization problems with model uncertainties. We assume that the objective and constraint functions are unknown but can be queried, while every sample has to be feasible. This setting, called safe zeroth-order optimization, applies to various control problems, including optimal power flow and controller tuning, where system models are only partially known and safety (sample feasibility) is essential. To compute a stationary point, we propose a novel method, SZO-QQ, that iteratively constructs convex subproblems through local approximations of the unknown functions. These subproblems are easier to solve than those arising in Bayesian optimization. We show that the iterates of SZO-QQ converge to a neighborhood of a stationary point. We also analyze the sample complexity needed to achieve a given level of accuracy and demonstrate that SZO-QQ is more sample-efficient than log-barrier-based zeroth-order methods. To further improve sample and computational efficiency, we propose SZO-LP, a variant of SZO-QQ that solves a linear program in each iteration. Experiments on an optimal power flow problem for a 30-bus grid highlight the scalability of our algorithms.
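
To illustrate the bootstrap idea from the first part, the sketch below estimates a prediction error bound for a simple data-driven (Hankel-matrix) least-squares predictor by resampling the measurement noise and taking an empirical quantile of the resulting prediction deviations. The system, noise level, horizons, and the crude parametric bootstrap with an assumed noise standard deviation are illustrative assumptions, not the formulation developed in the thesis.

```python
# Hypothetical sketch: bootstrap estimate of a prediction error bound for a
# data-driven (Hankel-matrix) predictor. System matrices, dimensions and noise
# levels are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system x+ = A x + B u, y = C x + measurement noise.
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
sigma = 0.02                 # assumed measurement-noise standard deviation
T, T_ini, T_f = 200, 4, 10   # data length, initialization and prediction horizons

def simulate(u, noise):
    x, ys = np.zeros(2), []
    for k in range(len(u)):
        ys.append(C @ x + noise[k])
        x = A @ x + B.flatten() * u[k]
    return np.array(ys).flatten()

def hankel(w, L):
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

def predict(u_d, y_d, u_ini, y_ini, u_f):
    """Least-squares behavioral predictor: find g consistent with the data."""
    L = T_ini + T_f
    Hu, Hy = hankel(u_d, L), hankel(y_d, L)
    Up, Uf = Hu[:T_ini], Hu[T_ini:]
    Yp, Yf = Hy[:T_ini], Hy[T_ini:]
    lhs = np.vstack([Up, Yp, Uf])
    rhs = np.concatenate([u_ini, y_ini, u_f])
    g, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return Yf @ g

# Offline data and an online query (all synthetic).
u_d = rng.uniform(-1, 1, T)
y_d = simulate(u_d, sigma * rng.standard_normal(T))
u_ini, y_ini = u_d[-T_ini:], y_d[-T_ini:]
u_f = rng.uniform(-1, 1, T_f)
y_nom = predict(u_d, y_d, u_ini, y_ini, u_f)

# Bootstrap: perturb the recorded outputs with resampled noise, re-predict,
# and take an empirical quantile of the deviation as an error bound.
deviations = []
for _ in range(200):
    y_b = y_d + sigma * rng.standard_normal(T)
    deviations.append(np.linalg.norm(predict(u_d, y_b, u_ini, y_b[-T_ini:], u_f) - y_nom))
error_bound = np.quantile(deviations, 0.95)
print(f"bootstrapped 95% prediction-error bound: {error_bound:.4f}")
```

Such a bound could then be fed to a robust control design in place of a regularization term; how this is done rigorously is the subject of the first part of the thesis.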
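
For the second part, the following sketch mimics the general flavour of a safe zeroth-order iteration: gradients of the unknown objective and constraint are estimated from feasible finite-difference samples, the unknown constraint is replaced by a local convex quadratic upper bound built from an assumed smoothness constant, and the resulting convex subproblem is solved with an off-the-shelf solver. The test problem, the constant M, the trust-region radius, and the use of SciPy's SLSQP are all assumptions for illustration; this is not the SZO-QQ algorithm itself.

```python
# Hypothetical sketch of a safe zeroth-order step: feasible finite-difference
# gradient estimates plus a convex quadratic upper-bound surrogate for the
# unknown constraint. All constants and the test problem are assumptions.
import numpy as np
from scipy.optimize import minimize

# Objective f and constraint h (h(x) <= 0 must hold at every query); the
# algorithm only accesses them through point evaluations.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: x[0] + x[1] - 2.0

M = 2.0        # assumed bound on constraint curvature (smoothness constant)
delta = 1e-4   # finite-difference step, kept small so perturbed samples stay feasible
x = np.array([0.0, 0.0])   # strictly feasible starting point

def fd_grad(fun, x, delta):
    """Forward finite-difference gradient from function evaluations only."""
    g = np.zeros_like(x)
    f0 = fun(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = delta
        g[i] = (fun(x + e) - f0) / delta
    return g

for it in range(20):
    gf, gh, h0 = fd_grad(f, x, delta), fd_grad(h, x, delta), h(x)
    # Convex subproblem: linearized objective subject to a quadratic upper
    # bound on the unknown constraint; if M is a valid smoothness constant,
    # any point feasible for the surrogate is feasible for the true constraint.
    obj = lambda z: gf @ (z - x)
    surrogate = lambda z: -(h0 + gh @ (z - x) + 0.5 * M * np.dot(z - x, z - x))
    res = minimize(obj, x, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": surrogate}],
                   bounds=[(xi - 0.5, xi + 0.5) for xi in x])  # local trust region
    x = res.x
    print(f"iter {it:2d}: x = {np.round(x, 3)}, f(x) = {f(x):.4f}, h(x) = {h(x):.3f}")
```

On this toy problem the iterates remain feasible and approach the constrained minimizer at (1, 1), consistent with the convergence-to-a-neighborhood behaviour described above; the thesis makes this precise and quantifies the sample complexity.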
