Tracking solutions of time-varying variational inequalities
- URL: http://arxiv.org/abs/2406.14059v1
- Date: Thu, 20 Jun 2024 07:32:07 GMT
- Title: Tracking solutions of time-varying variational inequalities
- Authors: Hédi Hadiji, Sarah Sachs, Cristóbal Guzmán
- Abstract summary: Tracking the solution of time-varying variational inequalities is an important problem with applications in game theory, optimization, and machine learning.
We extend existing results in two ways: In our first result, we provide tracking bounds for variational inequalities with a sublinear solution path but not necessarily monotone functions.
Our second main contribution is an extensive study of the convergence behavior and trajectories of discrete dynamical systems arising from periodic time-varying VIs.
- Score: 13.538780148454208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking the solution of time-varying variational inequalities is an important problem with applications in game theory, optimization, and machine learning. Existing work considers time-varying games or time-varying optimization problems. For strongly convex optimization problems or strongly monotone games, these results provide tracking guarantees under the assumption that the variation of the time-varying problem is restrained, that is, problems with a sublinear solution path. In this work we extend existing results in two ways: In our first result, we provide tracking bounds for (1) variational inequalities with a sublinear solution path but not necessarily monotone functions, and (2) periodic time-varying variational inequalities that do not necessarily have a sublinear solution path length. Our second main contribution is an extensive study of the convergence behavior and trajectories of discrete dynamical systems arising from periodic time-varying VIs. We show that these systems can exhibit provably chaotic behavior or can converge to the solution. Finally, we illustrate our theoretical results with experiments.
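To make the tracking setup concrete, below is a minimal sketch (not the paper's algorithm) of an online projected forward step applied to a time-varying strongly monotone VI. The operator F_t, the drifting target z_path, and all step-size choices are illustrative assumptions; the instance is chosen so the exact VI solution is known and the tracking error can be measured directly.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's algorithm): track the solution
# of a time-varying VI(F_t, C) with an online projected forward step.
# Instance: F_t(x) = mu * (x - z_t) on the Euclidean unit ball C, whose exact
# VI solution is proj_C(z_t).

d, T, mu, eta = 5, 201, 1.0, 0.5

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else r * x / n

def z_path(t):
    # Drifting target whose increments shrink like 1/sqrt(t): the induced
    # solution path has sublinear cumulative length, as in the paper's setting.
    return np.array([2.0 * np.cos(0.1 * np.sqrt(t + 1.0))] + [0.0] * (d - 1))

x = np.zeros(d)
for t in range(T):
    F = mu * (x - z_path(t))        # time-varying strongly monotone operator
    x = proj_ball(x - eta * F)      # projected forward (gradient) step
    if t % 50 == 0:
        x_star = proj_ball(z_path(t))   # exact solution of VI(F_t, C)
        print(f"t={t:3d}  tracking error = {np.linalg.norm(x - x_star):.4f}")
```

Because the target drifts ever more slowly, the iterate stays within a small, shrinking distance of the instantaneous solution, which is the qualitative behavior the tracking bounds quantify.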
Related papers
- Consistent Submodular Maximization [27.266085572522847]
Maximizing monotone submodular functions under cardinality constraints is a classic optimization task with several applications in data mining and machine learning.
In this paper we study this problem in a dynamic environment with consistency constraints: elements arrive in a streaming fashion, and the goal is to maintain a constant-factor approximation to the optimal solution while keeping the solution stable.
We provide algorithms in this setting with different trade-offs between consistency and approximation quality.
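As a concrete illustration of this setting, the sketch below maintains a size-k solution over a stream with a simple swap rule. The coverage function, the data, and the swap threshold are hypothetical choices for illustration, not the paper's algorithm.

```python
# Minimal sketch: streaming monotone submodular maximization under a
# cardinality constraint, with a swap rule that trades approximation
# quality for stability (illustrative, not the paper's algorithm).

universe_sets = {              # hypothetical data: element -> covered items
    "a": {1, 2, 3}, "b": {3, 4}, "c": {5, 6, 7}, "d": {1, 5}, "e": {8},
}

def f(S):
    # Coverage function: number of items covered by the chosen elements.
    return len(set().union(*(universe_sets[e] for e in S))) if S else 0

def marginal(e, S):
    return f(S | {e}) - f(S)

k = 2
S = set()
for e in ["a", "b", "c", "d", "e"]:          # stream of arriving elements
    if len(S) < k:
        S.add(e)                              # fill up to the budget
    else:
        worst = min(S, key=lambda x: marginal(x, S - {x}))
        # Swap only if the gain clearly beats the weakest element, which
        # keeps the solution stable (few changes per arrival).
        if marginal(e, S - {worst}) > 2 * marginal(worst, S - {worst}):
            S = (S - {worst}) | {e}
    print(e, "->", sorted(S), "f(S) =", f(S))
```

The factor 2 in the swap test is an assumed stability knob: a larger threshold means fewer solution changes but a weaker approximation guarantee, which is exactly the trade-off the paper studies.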
arXiv Detail & Related papers (2024-05-30T11:59:58Z) - Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
The need to curtail the behavior of machine learning systems has become increasingly apparent.
This is evidenced by recent advancements towards developing models that satisfy requirements such as robustness.
Our results show that rich parametrizations effectively mitigate the effects of non-convexity in finite-dimensional constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks [0.0]
We introduce a method to enforce continuity between successive time segments via a solution ansatz.
The method is tested for a number of benchmark problems involving both linear and non-linear PDEs.
Numerical experiments with the proposed method demonstrate superior convergence and accuracy over both traditional PINNs and their soft-constrained counterparts.
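A common way to realize such an ansatz is to write the trial solution on each time segment so that it matches the previous segment's end value by construction, with no penalty term. The sketch below assumes the segment network is a plain callable; the paper's exact construction may differ.

```python
import numpy as np

# Minimal sketch: exact temporal continuity between successive PINN time
# segments via a solution ansatz (illustrative form). On segment
# [t_n, t_{n+1}] the trial solution is
#     u_n(t) = u_end_prev + (t - t_n) * N_theta(t),
# so u_n(t_n) == u_end_prev holds exactly, regardless of the network.

def make_segment_solution(u_end_prev, t_n, net):
    def u(t):
        return u_end_prev + (t - t_n) * net(t)
    return u

# Stand-in for a trained network on this segment (hypothetical).
net = lambda t: np.sin(t)

u_seg = make_segment_solution(u_end_prev=1.5, t_n=2.0, net=net)
print(u_seg(2.0))   # exactly 1.5: continuity holds by construction
print(u_seg(2.5))   # 1.5 + 0.5 * sin(2.5)
```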
arXiv Detail & Related papers (2024-02-15T17:41:02Z) - Learning solutions to some toy constrained optimization problems in infinite dimensional Hilbert spaces [0.0]
We present implementations of two popular theoretical constrained optimization algorithms in infinite-dimensional Hilbert spaces.
We demonstrate that both methods produce decent approximations for the test problems and are comparable in terms of the different error measures considered.
arXiv Detail & Related papers (2024-01-02T17:32:53Z) - Differentiating Nonsmooth Solutions to Parametric Monotone Inclusion Problems [0.0]
A direct consequence of our result is that solutions to parametric monotone inclusion problems are differentiable almost everywhere.
Our approach is fully compatible with automatic differentiation and comes with assumptions which are easy to check.
arXiv Detail & Related papers (2022-12-15T14:05:32Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems, the proposed SARAH-based variance-reduced method achieves linear convergence to a solution.
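For intuition, here is a minimal sketch of a SARAH-style recursive variance-reduced estimator applied to a finite-sum strongly monotone operator. The quadratic operators, step size, and epoch length are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

# Minimal sketch: SARAH-style recursive estimator for a finite-sum operator
# F(x) = (1/n) * sum_i F_i(x), with F_i(x) = A_i @ x - b_i and A_i symmetric
# positive definite, so F is strongly monotone and a forward step
# x <- x - eta * v converges linearly (illustrative parameter choices).

rng = np.random.default_rng(0)
n, d, eta, epochs, m = 10, 4, 0.05, 5, 10

A = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    A.append(np.eye(d) + 0.1 * M @ M.T)      # SPD: strongly monotone F_i
b = [rng.standard_normal(d) for _ in range(n)]

F_i = lambda i, x: A[i] @ x - b[i]
F = lambda x: sum(F_i(i, x) for i in range(n)) / n

x = np.zeros(d)
for s in range(epochs):
    v = F(x)                         # full operator at the epoch anchor
    x_prev = x.copy()
    x = x - eta * v
    for _ in range(m):               # SARAH recursion, one sample per step
        i = rng.integers(n)
        v = F_i(i, x) - F_i(i, x_prev) + v
        x_prev = x.copy()
        x = x - eta * v
    print(f"epoch {s}: ||F(x)|| = {np.linalg.norm(F(x)):.2e}")
```

Unlike SVRG, the SARAH recursion updates the estimate v from the previous iterate rather than a fixed snapshot, which is what enables the linear rate for strongly monotone problems.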
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the time points of supervision and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Continuous-time Analysis for Variational Inequalities: An Overview and Desiderata [87.77379512999818]
We provide an overview of recent progress in the use of continuous-time perspectives in the analysis and design of methods targeting the broad VI problem class.
Our presentation draws parallels between single-objective problems and multi-objective problems, highlighting the challenges of the latter.
We also formulate various desiderata for algorithms that apply to general VIs and we argue that achieving these desiderata may profit from an understanding of the associated continuous-time dynamics.
arXiv Detail & Related papers (2022-07-14T17:58:02Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving nonconvex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
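For reference, below is a minimal SGLD sketch on a toy nonconvex objective. The double-well potential, step size, and temperature are illustrative; the paper's variants replace the plain stochastic gradient with a variance-reduced estimate.

```python
import numpy as np

# Minimal sketch: stochastic gradient Langevin dynamics (SGLD) on a
# nonconvex objective. The update adds Gaussian noise scaled by the
# step size and inverse temperature, letting the iterate escape
# local minima (illustrative parameters).

rng = np.random.default_rng(1)
eta, beta, T = 0.01, 20.0, 3000          # step size, inverse temperature, steps

f = lambda x: (x**2 - 1.0)**2            # double-well potential (nonconvex)
grad_f = lambda x: 4.0 * x * (x**2 - 1.0)

x = 2.0
for t in range(T):
    noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal()
    x = x - eta * grad_f(x) + noise      # Langevin step
print(f"final x = {x:.3f}  (modes of the potential are at +1 and -1)")
```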
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Two-Stage Stochastic Optimization via Primal-Dual Decomposition and Deep
Unrolling [86.85697555068168]
Two-stage stochastic optimization plays a critical role in various engineering and scientific applications.
Efficient algorithms are still lacking, especially when the long-term and short-term variables are coupled in the constraints.
We show that the proposed PDD-SSCA algorithm can achieve superior performance over existing solutions.
arXiv Detail & Related papers (2021-05-05T03:36:00Z) - Transient growth of accelerated first-order methods for strongly convex
optimization problems [1.6114012813668934]
In this paper, we examine the transient behavior of accelerated first-order optimization algorithms.
For quadratic optimization problems, we employ tools from linear systems theory to show that transient growth arises from the presence of non-normal dynamics.
For strongly convex smooth optimization problems, we utilize the theory of integral quadratic constraints to establish an upper bound on the magnitude of the transient response of Nesterov's accelerated method.
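The phenomenon is easy to reproduce on a toy quadratic. The sketch below, with an assumed nonzero initial velocity, runs Nesterov's method tuned for condition number kappa and records the transient rise of the error before its eventual linear decay.

```python
import numpy as np

# Minimal sketch: transient growth of Nesterov's accelerated method on the
# strongly convex quadratic f(x) = 0.5 * mu * x**2, run with the step size
# and momentum dictated by a larger problem's conditioning kappa = L / mu
# (illustrative initial condition with nonzero initial velocity).

L, mu = 100.0, 1.0
kappa = L / mu
h = 1.0 / L                                  # gradient step size
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

grad = lambda x: mu * x                      # gradient of the quadratic

x_prev, x = 0.0, 1.0                         # x_0 - x_{-1} = 1: initial velocity
errors = [abs(x)]
for t in range(200):
    y = x + beta * (x - x_prev)              # momentum (lookahead) point
    x_prev, x = x, y - h * grad(y)           # Nesterov update
    errors.append(abs(x))

print(f"initial error 1.0, peak error {max(errors):.2f} "
      f"at t = {int(np.argmax(errors))}, final error {errors[-1]:.2e}")
```

On this instance the error climbs to roughly 3.9 before decaying linearly, illustrating the non-normal transient amplification that the paper bounds.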
arXiv Detail & Related papers (2021-03-14T20:01:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated summaries and is not responsible for any consequences of their use.