Regret Analysis of Online Gradient Descent-based Iterative Learning
Control with Model Mismatch
- URL: http://arxiv.org/abs/2204.04722v1
- Date: Sun, 10 Apr 2022 16:35:27 GMT
- Title: Regret Analysis of Online Gradient Descent-based Iterative Learning
Control with Model Mismatch
- Authors: Efe C. Balta, Andrea Iannelli, Roy S. Smith, John Lygeros
- Abstract summary: The performance of an online gradient-descent based scheme using inexact gradient information is analyzed.
Fundamental limitations of the scheme and its integration with adaptation mechanisms are investigated.
- Score: 4.922572106422331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Iterative Learning Control (ILC), a sequence of feedforward control
actions is generated at each iteration on the basis of partial model knowledge
and past measurements with the goal of steering the system toward a desired
reference trajectory. This is framed here as an online learning task, where the
decision-maker takes sequential decisions by solving a sequence of optimization
problems having only partial knowledge of the cost functions. Having
established this connection, the performance of an online gradient-descent
based scheme using inexact gradient information is analyzed in the setting of
dynamic and static regret, standard measures in online learning. Fundamental
limitations of the scheme and its integration with adaptation mechanisms are
further investigated, followed by numerical simulations on a benchmark ILC
problem.
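The update scheme described in the abstract can be illustrated with a minimal numerical sketch: a lifted linear plant, a mismatched model, and a gradient-descent update on the feedforward input where the gradient is computed from the inexact model rather than the true plant. This is not the paper's algorithm; the matrices `G`, `G_hat`, the step size `eta`, and the quadratic tracking cost are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20  # trial length

# Hypothetical lifted plant: lower triangular (causal) with unit diagonal.
G = np.eye(N) + np.tril(0.1 * rng.normal(size=(N, N)), k=-1)
# Mismatched model used by the learner (inexact gradient information).
G_hat = G + 0.05 * np.tril(rng.normal(size=(N, N)), k=-1)

r = np.sin(np.linspace(0.0, np.pi, N))  # desired reference trajectory

u = np.zeros(N)   # feedforward input, refined across iterations
eta = 0.1         # step size (assumed small enough for contraction)
costs = []
for k in range(50):
    y = G @ u                    # measured output of the true plant (noise-free)
    e = y - r
    costs.append(0.5 * e @ e)    # quadratic tracking cost at iteration k
    grad_inexact = G_hat.T @ e   # gradient computed with the mismatched model
    u = u - eta * grad_inexact   # online gradient descent update
```

With a small model mismatch and step size, the iteration-wise cost still decreases, which is the kind of behavior the regret analysis quantifies under inexact gradients.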
Related papers
- End-to-End Learning Framework for Solving Non-Markovian Optimal Control [9.156265463755807]
We propose an innovative system identification method and control strategy for FOLTI systems.
We also develop the first end-to-end data-driven learning framework, Fractional-Order Learning for Optimal Control (FOLOC).
arXiv Detail & Related papers (2025-02-07T04:18:56Z)
- What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-Camera calibration.
We identify the critical limitations of regression-based methods with the widely used data generation pipeline.
We also investigate how the input data format and preprocessing operations impact network performance.
arXiv Detail & Related papers (2025-01-28T14:12:32Z)
- Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification [76.14641982122696]
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control.
We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.
arXiv Detail & Related papers (2024-10-07T23:38:58Z)
- Integrating Reinforcement Learning and Model Predictive Control with Applications to Microgrids [14.389086937116582]
This work proposes an approach that integrates reinforcement learning and model predictive control (MPC) to solve optimal control problems in mixed-logical dynamical systems.
The proposed method significantly reduces the online computation time of the MPC approach and generates policies with small optimality gaps and high feasibility rates.
arXiv Detail & Related papers (2024-09-17T15:17:16Z)
- MPC of Uncertain Nonlinear Systems with Meta-Learning for Fast Adaptation of Neural Predictive Models [6.031205224945912]
A neural State-Space Model (NSSM) is used to approximate the nonlinear system, where a deep encoder network learns the nonlinearity from data.
This transforms the nonlinear system into a linear system in a latent space, enabling the application of model predictive control (MPC) to determine effective control actions.
arXiv Detail & Related papers (2024-04-18T11:29:43Z)
- Smoothed Online Learning for Prediction in Piecewise Affine Systems [43.64498536409903]
This paper builds on the recently developed smoothed online learning framework.
It provides the first algorithms for prediction and simulation in piecewise affine systems.
arXiv Detail & Related papers (2023-01-26T15:54:14Z)
- Non-stationary Online Learning with Memory and Non-stochastic Control [71.14503310914799]
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions.
In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments.
We propose a novel algorithm for OCO with memory that provably enjoys an optimal dynamic policy regret in terms of time horizon, non-stationarity measure, and memory length.
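Dynamic (policy) regret, the performance measure mentioned above, compares the cumulative loss of the learner against a time-varying comparator sequence rather than a single fixed decision. A minimal sketch, with a drifting quadratic loss and the per-round minimizer as comparator (all quantities here are illustrative, not from the cited paper):

```python
import numpy as np

# Time-varying losses f_t(x) = 0.5 * (x - theta_t)^2 with a drifting
# minimizer theta_t; online gradient descent plays x_t, then steps.
T = 200
theta = np.cos(np.linspace(0.0, 2.0 * np.pi, T))  # slowly drifting comparator path
eta = 0.5

x = 0.0
dynamic_regret = 0.0
for t in range(T):
    loss_played = 0.5 * (x - theta[t]) ** 2  # f_t(x_t)
    loss_best = 0.0                          # f_t(theta_t): per-round optimum
    dynamic_regret += loss_played - loss_best
    x -= eta * (x - theta[t])                # gradient step on f_t
```

Because the comparator path drifts slowly, the tracking error stays small and the accumulated dynamic regret remains bounded, which is what non-stationarity-aware bounds capture.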
arXiv Detail & Related papers (2021-02-07T09:45:15Z)
- Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z)
- Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems [91.43582419264763]
We study the problem of system identification and adaptive control in partially observable linear dynamical systems.
We present the first model estimation method with finite-time guarantees in both open and closed-loop system identification.
We show that AdaptOn is the first algorithm that achieves $\text{polylog}(T)$ regret in adaptive control of unknown partially observable linear dynamical systems.
arXiv Detail & Related papers (2020-03-25T06:00:33Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.