On the Convergence of Overlapping Schwarz Decomposition for Nonlinear
Optimal Control
- URL: http://arxiv.org/abs/2005.06674v5
- Date: Tue, 15 Mar 2022 01:43:57 GMT
- Title: On the Convergence of Overlapping Schwarz Decomposition for Nonlinear
Optimal Control
- Authors: Sen Na, Sungho Shin, Mihai Anitescu, Victor M. Zavala
- Abstract summary: We study the convergence properties of an overlapping Schwarz decomposition algorithm for solving nonlinear optimal control problems.
We show that the algorithm exhibits local linear convergence, and that the convergence rate improves exponentially with the overlap size.
- Score: 7.856998585396421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the convergence properties of an overlapping Schwarz decomposition
algorithm for solving nonlinear optimal control problems (OCPs). The algorithm
decomposes the time domain into a set of overlapping subdomains, and solves all
subproblems defined over subdomains in parallel. The convergence is attained by
updating primal-dual information at the boundaries of overlapping subdomains.
We show that the algorithm exhibits local linear convergence, and that the
convergence rate improves exponentially with the overlap size. We also
establish global convergence results for general quadratic programs, which
enables the application of the Schwarz scheme inside second-order optimization
algorithms (e.g., sequential quadratic programming). The theoretical foundation
of our convergence analysis is a sensitivity result of nonlinear OCPs, which we
call "exponential decay of sensitivity" (EDS). Intuitively, EDS states that the
impact of perturbations at the domain boundaries (i.e., initial and terminal times)
on the solution decays exponentially as one moves into the domain. Here, we
expand a previous analysis available in the literature by showing that EDS
holds for both primal and dual solutions of nonlinear OCPs, under uniform
second-order sufficient, controllability, and boundedness conditions. We
conduct experiments with a quadrotor motion planning problem and a PDE control
problem to validate our theory, and show that the approach is
significantly more efficient than ADMM and as efficient as the centralized
solver Ipopt.
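The sweep structure described in the abstract can be illustrated on a toy stand-in: a minimal overlapping Schwarz iteration on a banded linear system, used here as a proxy for the structured KKT system of a discretized linear-quadratic OCP. This is a sketch, not the paper's algorithm: the grid size, subdomain count, overlap widths, and iteration count are illustrative assumptions, and the primal-dual boundary exchange is collapsed into a residual-based subdomain correction.

```python
import numpy as np

# Toy stand-in for a time-discretized OCP: a banded (tridiagonal) SPD system,
# playing the role of the structured KKT system along the time horizon.
N = 60                                                   # number of time stages
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # banded "KKT-like" matrix
b = np.ones(N)
x_true = np.linalg.solve(A, b)

def schwarz_solve(overlap, n_sub=4, iters=30):
    """Sweep over overlapping time subdomains, solving each subproblem exactly
    with boundary information taken from the current iterate; return the final
    error against the centralized solution."""
    x = np.zeros(N)
    size = N // n_sub
    for _ in range(iters):
        for s in range(n_sub):
            lo = max(0, s * size - overlap)        # extend subdomain left
            hi = min(N, (s + 1) * size + overlap)  # extend subdomain right
            idx = np.arange(lo, hi)
            r = b - A @ x                          # residual with current boundary info
            # Exact subdomain solve: zero the residual on this overlapping window.
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return np.linalg.norm(x - x_true)

# Larger overlap should yield a markedly faster (linear) convergence rate,
# mirroring the exponential-in-overlap improvement shown in the paper.
errs = {w: schwarz_solve(w) for w in (1, 4, 8)}
```

Running the sweep for several overlap widths and comparing the final errors gives a quick numerical feel for why the convergence rate improves with overlap: each subdomain solve sees boundary errors that have already decayed across the overlap region.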
Related papers
- Non-overlapping, Schwarz-type Domain Decomposition Method for Physics and Equality Constrained Artificial Neural Networks [0.24578723416255746]
We present a non-overlapping, Schwarz-type domain decomposition method with a generalized interface condition.
Our approach employs physics and equality-constrained artificial neural networks (PECANN) within each subdomain.
A distinct advantage of our domain decomposition method is its ability to learn solutions to both Poisson's and Helmholtz equations.
arXiv Detail & Related papers (2024-09-20T16:48:55Z) - Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms [18.425648833592312]
We show that tBMM converges to an $\epsilon$-stationary point within $O(\epsilon^{-2})$ iterations.
Under mild assumptions, the results still hold when the subproblem is solved inexactly in each iteration, provided the total optimality gap is bounded.
arXiv Detail & Related papers (2024-05-05T22:53:14Z) - Decentralized Sum-of-Nonconvex Optimization [42.04181488477227]
We consider the problem of minimizing a sum-of-nonconvex function, i.e., a convex function that is the average of nonconvex components.
We propose an accelerated decentralized first-order algorithm based on gradient tracking and multi-consensus techniques.
arXiv Detail & Related papers (2024-02-04T05:48:45Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation (lookahead) as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its application in a wide range of machine learning tasks.
This paper develops two novel decentralized minimax optimization algorithms for finite-sum nonconvex-nonconcave problems.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Rank-1 Matrix Completion with Gradient Descent and Small Random
Initialization [15.127728811011245]
We show that the implicit regularization of GD plays a critical role in the analysis.
arXiv Detail & Related papers (2022-12-19T12:05:37Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy, a dual regularizer, and Nesterov's gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex-constrained optimization.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a globally optimal solution under mild assumptions on the delays.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Linear Convergent Decentralized Optimization with Compression [50.44269451541387]
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD.
arXiv Detail & Related papers (2020-07-01T04:35:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.