A first-order method for constrained nonconvex--nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition
- URL: http://arxiv.org/abs/2510.01168v2
- Date: Sat, 04 Oct 2025 13:35:01 GMT
- Title: A first-order method for constrained nonconvex--nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition
- Authors: Zhaosong Lu, Xiangyuan Wang
- Abstract summary: We study a class of constrained nonconvex--nonconcave minimax problems in which the inner maximization involves potentially complex constraints. We show that the maximal function of the original problem enjoys a local Hölder smoothness property. We propose a method for solving constrained optimization problems and establish its convergence under a local KL condition.
- Score: 5.834528275910258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study a class of constrained nonconvex--nonconcave minimax problems in which the inner maximization involves potentially complex constraints. Under the assumption that the inner problem of a novel lifted minimax problem satisfies a local Kurdyka-Łojasiewicz (KL) condition, we show that the maximal function of the original problem enjoys a local Hölder smoothness property. We also propose a sequential convex programming (SCP) method for solving constrained optimization problems and establish its convergence rate under a local KL condition. Leveraging these results, we develop an inexact proximal gradient method for the original minimax problem, where the inexact gradient of the maximal function is computed via the SCP method applied to a locally KL-structured subproblem. Finally, we establish complexity guarantees for the proposed method in computing an approximate stationary point of the original minimax problem.
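The method described in the abstract has a layered structure: an outer inexact proximal gradient loop on the minimization variable, whose gradients come from approximately solving the inner maximization. The following minimal Python sketch illustrates that structure only; the toy coupling function, the ball constraint, and the projected gradient ascent inner solver (standing in for the paper's SCP subroutine) are all hypothetical choices for illustration, not the authors' algorithm.

```python
# Minimal sketch of the two-level scheme: an outer inexact proximal
# gradient loop on x, with gradients of the maximal function
# p(x) = max_{y in Y} f(x, y) obtained from an approximate inner solve.
# All concrete choices below are hypothetical stand-ins.
import numpy as np

def f(x, y):
    # Toy nonconvex-nonconcave coupling (illustrative only).
    return np.sin(x @ y) + 0.5 * x @ x - 0.5 * y @ y

def grad_x(x, y):
    return np.cos(x @ y) * y + x

def grad_y(x, y):
    return np.cos(x @ y) * x - y

def project_ball(y, radius=1.0):
    # Inner feasible set Y: a Euclidean ball, standing in for the
    # potentially complex constraints handled by the paper's SCP method.
    n = np.linalg.norm(y)
    return y if n <= radius else y * (radius / n)

def inner_max(x, y, steps=200, lr=0.05):
    # Stand-in for the SCP subroutine: projected gradient ascent on f(x, .).
    for _ in range(steps):
        y = project_ball(y + lr * grad_y(x, y))
    return y

def inexact_prox_grad(x0, outer_steps=50, eta=0.1):
    # Outer loop: grad p(x) is approximated by grad_x f(x, y_hat) at the
    # approximate inner maximizer y_hat (informal Danskin-type reasoning;
    # the paper only guarantees local Hoelder smoothness of p in general).
    x, y = x0.copy(), np.zeros_like(x0)
    for _ in range(outer_steps):
        y = inner_max(x, y)         # warm-started inexact inner solve
        x = x - eta * grad_x(x, y)  # prox step (identity prox: no regularizer)
    return x, y

if __name__ == "__main__":
    x, y = inexact_prox_grad(np.array([1.0, -0.5]))
    print("approximate stationary point:", x)
```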
Related papers
- A first-order method for nonconvex-nonconcave minimax problems under a local Kurdyka-Łojasiewicz condition [1.534667887016089]
We study a class of nonconvex--nonconcave minimax problems in which the inner problem satisfies a local Kurdyka-Łojasiewicz (KL) condition that varies with the outer minimization variable.
In particular, as an algorithm progresses toward a stationary point, the region over which the KL condition holds may shrink, resulting in a potentially more ill-conditioned landscape.
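One standard way to write such an x-dependent local KL condition for the inner problem is shown below; this is an illustrative form, with hypothetical constants c(x), ε(x) and exponent θ, not necessarily the paper's exact statement.

```latex
% Illustrative x-dependent local KL inequality for the inner maximization,
% with p(x) = \max_{y \in Y} f(x, y); the constants c(x) and \epsilon(x)
% (and hence the region where the inequality holds) may vary with x.
\[
\mathrm{dist}\bigl(0,\, \partial_y \bigl(-f(x,\cdot)\bigr)(y)\bigr)
\;\ge\; c(x)\,\bigl(p(x) - f(x,y)\bigr)^{\theta}
\qquad \text{whenever } p(x) - f(x,y) \le \epsilon(x).
\]
```

Since ε(x) can shrink as x approaches a stationary point, the region where the inequality is usable shrinks as well, which is exactly the ill-conditioning effect the summary describes.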
arXiv Detail & Related papers (2025-07-02T17:45:10Z)
- Inexact Moreau Envelope Lagrangian Method for Non-Convex Constrained Optimization under Local Error Bound Conditions on Constraint Functions [20.767753336718606]
We present the inexact Moreau envelope Lagrangian (iMELa) method for solving constrained optimization problems.
We establish that the iMELa method can find an ε-Karush-Kuhn-Tucker point with Õ(ε⁻²) gradient complexity.
arXiv Detail & Related papers (2025-02-27T05:04:27Z)
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a linear approximation of the constraints within a trust region.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
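To make the two step types concrete, here is a minimal sketch on an unconstrained toy problem. The stochastic models, constraints, and reduced-Hessian machinery of the actual method are omitted, and all names are hypothetical.

```python
# Minimal sketch of the two step types: a "gradient step" from a
# trust-region quadratic model and an "eigen step" along a direction of
# negative curvature. Illustrative only, not the paper's TR-SQP method.
import numpy as np

def trust_region_steps(grad, hess, x, radius=0.5):
    g, H = grad(x), hess(x)
    # Gradient (Cauchy) step: minimize the quadratic model along -g
    # inside the trust region.
    t = radius / np.linalg.norm(g)
    gHg = g @ H @ g
    if gHg > 0:
        t = min(t, (g @ g) / gHg)
    grad_step = -t * g
    # Eigen step: if the Hessian has negative curvature, move along the
    # eigenvector for its smallest eigenvalue, oriented for descent.
    eigvals, eigvecs = np.linalg.eigh(H)
    eigen_step = None
    if eigvals[0] < 0:
        v = eigvecs[:, 0]
        eigen_step = radius * (v if g @ v <= 0 else -v)
    return grad_step, eigen_step

if __name__ == "__main__":
    # f(x) = x0^2 - x1^2 has a saddle at the origin: near it the gradient
    # step stalls, while the eigen step escapes along negative curvature.
    grad = lambda x: np.array([2 * x[0], -2 * x[1]])
    hess = lambda x: np.diag([2.0, -2.0])
    print(trust_region_steps(grad, hess, np.array([1.0, 0.1])))
```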
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization of the minimax simple regret rate by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis [2.1756081703276]
We analyze the behavior of a natural variance-reduced proximal gradient (VRPG) algorithm for stochastic convex optimization under convex constraints.
Our main result is a non-asymptotic guarantee for the VRPG algorithm.
We show that our guarantee captures the complexity of the loss function, the variability of the noise, and the geometry of the constraint set.
arXiv Detail & Related papers (2024-03-24T14:45:11Z)
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Coordinate Descent Methods for Fractional Minimization [7.716156977428555]
We consider a class of structured fractional minimization problems in which the numerator part of the objective is the sum of a differentiable convex function and a convex nonsmooth function.
This problem is difficult to solve since it is nonconvex and nonsmooth.
We propose two coordinate descent methods for solving this problem.
arXiv Detail & Related papers (2022-01-30T00:47:04Z)
- Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z)
- Distributed Saddle-Point Problems: Lower Bounds, Near-Optimal and Robust Algorithms [91.96505642426833]
This paper focuses on the distributed optimization of saddle point problems.
In particular, we show the effectiveness of our method in training GANs in a distributed manner.
arXiv Detail & Related papers (2020-10-25T13:13:44Z)
- A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization [25.73397307080647]
We present a new model of general convex or nonconvex machine learning objectives with nonconvex sparse constraints.
We propose a proximal point algorithm that solves a sequence of subproblems with gradually relaxed constraint levels.
We demonstrate the effectiveness of our approach in numerical experiments on large-scale problems.
arXiv Detail & Related papers (2020-10-23T05:24:05Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
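A minimal sketch of this idea follows: conditional gradient (Frank-Wolfe) steps over a simple base set, with a smoothed penalty built from a randomly sampled subset of the constraints supplying the gradient at each iteration. The smoothing, sampling, and step sizes are illustrative stand-ins rather than the paper's exact scheme.

```python
# Minimal sketch: Frank-Wolfe steps over the probability simplex, where
# only a sampled batch of the linear constraints A @ x <= b is processed
# per iteration, via a smoothed (softplus) penalty. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sampled_penalty_grad(x, A, b, idx, beta=10.0):
    # Gradient of a softplus smoothing of sum_i max(0, a_i @ x - b_i),
    # estimated from the sampled rows idx and rescaled to be unbiased.
    z = A[idx] @ x - b[idx]
    s = 1.0 / (1.0 + np.exp(-beta * z))    # sigmoid = softplus derivative
    return (len(b) / len(idx)) * (A[idx].T @ s)

def fw_subsampled(c, A, b, iters=500, batch=5, rho=1.0):
    # min c @ x over the simplex; constraints enter through the penalty,
    # and only `batch` of them are touched per iteration.
    n = len(c)
    x = np.ones(n) / n
    for k in range(1, iters + 1):
        idx = rng.choice(len(b), size=batch, replace=False)
        g = c + rho * sampled_penalty_grad(x, A, b, idx)
        v = np.zeros(n)
        v[np.argmin(g)] = 1.0              # linear minimization oracle
        x += (2.0 / (k + 2)) * (v - x)     # standard FW step size
    return x

if __name__ == "__main__":
    n, m = 10, 50
    A, b = rng.normal(size=(m, n)), np.full(m, 0.5)
    c = rng.normal(size=n)
    print(fw_subsampled(c, A, b))
```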
arXiv Detail & Related papers (2020-07-07T21:26:35Z)