A successive approximation method in functional spaces for hierarchical optimal control problems and its application to learning
- URL: http://arxiv.org/abs/2410.20617v1
- Date: Sun, 27 Oct 2024 22:28:07 GMT
- Title: A successive approximation method in functional spaces for hierarchical optimal control problems and its application to learning
- Authors: Getachew K. Befekadu
- Abstract summary: We consider a class of learning problems of point estimation for modeling high-dimensional nonlinear functions.
The estimated parameter in due course provides acceptable prediction accuracy on a separate model validation dataset.
We provide a framework for appropriately accounting for both generalization and regularization at the optimization stage.
- Abstract: We consider a class of learning problems of point estimation for modeling high-dimensional nonlinear functions, in which the learning dynamics is guided by a model training dataset, while the estimated parameter in due course provides acceptable prediction accuracy on a separate model validation dataset. Here, we establish an evidential connection between such a learning problem and a hierarchical optimal control problem, which provides a framework for appropriately accounting for both generalization and regularization at the optimization stage. In particular, we consider the following two objectives: (i) the first is a controllability-type problem, i.e., generalization, which consists of guaranteeing that the estimated parameter reaches a certain target set at some fixed final time, where the target set is associated with the model validation dataset; (ii) the second is a regularization-type problem, which ensures that the estimated parameter trajectory satisfies some regularization property over a certain finite time interval. First, we partition the control into two control strategies that are compatible with two abstract agents: a leader, which is responsible for the controllability-type problem, and a follower, which is associated with the regularization-type problem. Using the notion of Stackelberg optimization, we provide conditions for the existence of admissible optimal controls for such a hierarchical optimal control problem, under which the follower is required to respond optimally to the strategy of the leader so as to achieve the overall objectives, ultimately leading to an optimal parameter estimate. Moreover, we provide a nested algorithm, arranged in a hierarchical structure based on successive approximation methods, for solving the corresponding optimal control problem. Finally, we present some numerical results for a typical nonlinear regression problem.
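The leader-follower decomposition above can be illustrated with a minimal sketch. All names, dynamics, and update rules below are illustrative assumptions, not the paper's actual algorithm: the parameter estimate evolves under a leader control driving it toward low validation error (the controllability/generalization objective) and a follower control enforcing a regularization property along the trajectory.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): the parameter estimate
# evolves under two controls, a leader step u_leader steering toward the
# target set judged by validation data, and a follower step u_follower
# enforcing an l2-regularization property along the trajectory.

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 40)
y_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(40)
x_val = rng.uniform(-1, 1, 40)
y_val = np.sin(3 * x_val)

def features(x):
    # degree-5 polynomial features for a simple nonlinear regression model
    return np.vander(x, 6, increasing=True)

def loss(theta, x, y):
    return np.mean((features(x) @ theta - y) ** 2)

theta = np.zeros(6)
lr, lam = 0.1, 1e-2
for _ in range(500):
    # leader: controllability-type step on the training dynamics, whose
    # quality is measured against the validation target set
    residual = features(x_train) @ theta - y_train
    u_leader = -features(x_train).T @ residual / len(x_train)
    # follower: optimal response given the leader's strategy, here a
    # shrinkage step realizing the regularization property
    u_follower = -lam * theta
    theta = theta + lr * (u_leader + u_follower)
```

Alternating the two controls in this way mimics the nested structure of the successive approximation scheme: the follower responds to each leader update rather than being optimized jointly.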
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
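The eigen step can be sketched generically as a negative-curvature direction scaled to the trust-region boundary. The names and the toy saddle below are illustrative; the paper's reduced-Hessian construction is more involved.

```python
import numpy as np

# Generic negative-curvature "eigen step" sketch (illustrative names):
# if the smallest eigenvalue of the (reduced) Hessian is negative, step
# along its eigenvector, scaled to the trust-region radius, to escape
# a saddle point.

def eigen_step(hessian, radius=0.5):
    # eigh returns eigenvalues in ascending order for a symmetric matrix
    w, v = np.linalg.eigh(hessian)
    if w[0] >= 0:
        return np.zeros(hessian.shape[0])  # no negative curvature to exploit
    return v[:, 0] * radius  # direction of most negative curvature

# saddle of f(x, y) = x**2 - y**2 at the origin
H = np.array([[2.0, 0.0], [0.0, -2.0]])
d = eigen_step(H)
# curvature along d is negative: d @ H @ d < 0
```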
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- Embedding generalization within the learning dynamics: An approach based-on sample path large deviation theory [0.0]
We consider an empirical risk perturbation-based learning problem that exploits methods from a continuous-time perspective.
We provide an estimate in the small noise limit based on the Freidlin-Wentzell theory of large deviations.
We also present a computational algorithm that solves the corresponding variational problem, leading to optimal point estimates.
arXiv Detail & Related papers (2024-08-04T23:31:35Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
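A variance-optimal baseline can be illustrated with the textbook control-variate construction, where the correction coefficient has a closed form. This is a generic sketch under assumed data (synthetic importance weights and rewards), not necessarily the cited paper's exact estimator.

```python
import numpy as np

# Control-variate sketch of a variance-optimal baseline (illustrative,
# synthetic data): for an estimator X - c*(W - E[W]) with known E[W]
# (e.g. importance weights with E[W] = 1), the variance-minimizing
# coefficient is c* = Cov(X, W) / Var(W).

rng = np.random.default_rng(1)
w = rng.lognormal(0.0, 0.5, 100_000)
w /= w.mean()                      # normalize so E[W] = 1 for this demo
r = rng.binomial(1, 0.3, 100_000)  # binary rewards
x = w * r                          # inverse-propensity-scored terms

# closed-form variance-optimal coefficient
c_star = np.cov(x, w, ddof=0)[0, 1] / np.var(w)
corrected = x - c_star * (w - 1.0)
# corrected.mean() estimates the same quantity with lower variance
```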
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z)
- High-probability sample complexities for policy evaluation with linear function approximation [88.87036653258977]
We investigate the sample complexities required to guarantee a predefined estimation error of the best linear coefficients for two widely-used policy evaluation algorithms.
We establish the first sample complexity bound with high-probability convergence guarantee that attains the optimal dependence on the tolerance level.
arXiv Detail & Related papers (2023-05-30T12:58:39Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Probabilistic Control and Majorization of Optimal Control [3.2634122554914002]
Probabilistic control design is founded on the principle that a rational agent attempts to match the modelled closed-loop system trajectory density with an arbitrary desired one.
In this work we introduce an alternative parametrization of desired closed-loop behaviour and explore alternative proximity measures between densities.
arXiv Detail & Related papers (2022-05-06T15:04:12Z)
- Improving Hyperparameter Optimization by Planning Ahead [3.8673630752805432]
We propose a novel transfer learning approach, defined within the context of model-based reinforcement learning.
We propose a new variant of model predictive control which employs a simple look-ahead strategy as a policy.
Our experiments on three meta-datasets, comparing against state-of-the-art HPO algorithms, show that the proposed method can outperform all baselines.
arXiv Detail & Related papers (2021-10-15T11:46:14Z)
- Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control [0.0]
We show that the principal AlphaZero/TD-Gammon ideas of approximation in value space and rollout apply very broadly to deterministic and stochastic optimal control problems.
These ideas can be effectively integrated with other important methodologies such as model predictive control, adaptive control, decentralized control, and neural network-based value and policy approximations.
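The rollout idea can be sketched on a toy deterministic shortest-path problem (an illustrative setup, not from the paper): one-step lookahead that uses the base policy's cost-to-go improves on the base policy itself.

```python
# Rollout / one-step lookahead in value space on a toy problem
# (hypothetical setup): walk from state 0 to N, where action 1 moves +1
# at cost 1.0 and action 2 moves +2 at cost 1.5. The base policy always
# takes +1; rollout evaluates actions via the base policy's cost-to-go.

N = 10

def step(s, a):
    # deterministic transition and stage cost
    return min(s + a, N), 1.0 if a == 1 else 1.5

def base_cost_to_go(s):
    # exact cost-to-go of the base policy (always +1): one unit per step
    return float(N - s)

def rollout_action(s):
    # one-step lookahead: stage cost plus base-policy rollout cost
    return min((1, 2), key=lambda a: step(s, a)[1] + base_cost_to_go(step(s, a)[0]))

def total_cost(policy):
    s, cost = 0, 0.0
    while s < N:
        s, c = step(s, policy(s))
        cost += c
    return cost
```

Here `total_cost(rollout_action)` beats `total_cost` of the base policy, an instance of the rollout policy-improvement property.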
arXiv Detail & Related papers (2021-08-20T19:17:35Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs [14.309243378538012]
We consider the problem of robust and adaptive model predictive control (MPC) of a linear system.
We provide the first end-to-end suboptimality analysis for this setting.
arXiv Detail & Related papers (2020-02-25T12:24:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.