Data-Driven Minimax Optimization with Expectation Constraints
- URL: http://arxiv.org/abs/2202.07868v1
- Date: Wed, 16 Feb 2022 05:23:27 GMT
- Title: Data-Driven Minimax Optimization with Expectation Constraints
- Authors: Shuoguang Yang, Xudong Li, Guanghui Lan
- Abstract summary: We propose a class of efficient primal-dual algorithms to tackle the minimax expectation-constrained problem.
We show that our algorithms converge at the optimal rate of $\mathcal{O}(\frac{1}{\sqrt{N}})$.
- Score: 9.373649142701803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attention to data-driven optimization approaches, including the well-known
stochastic gradient descent method, has grown significantly over recent
decades, but data-driven constraints have rarely been studied, because of the
computational challenges of projections onto the feasible set defined by these
hard constraints. In this paper, we focus on the non-smooth convex-concave
stochastic minimax regime and formulate the data-driven constraints as
expectation constraints. The minimax expectation-constrained problem subsumes a
broad class of real-world applications, including two-player zero-sum games and
data-driven robust optimization. We propose a class of efficient primal-dual
algorithms to tackle the minimax expectation-constrained problem, and show that
our algorithms converge at the optimal rate of
$\mathcal{O}(\frac{1}{\sqrt{N}})$. We demonstrate the practical efficiency of
our algorithms by conducting numerical experiments on large-scale real-world
applications.
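To make the setup concrete, below is a minimal Python sketch of a generic stochastic primal-dual scheme for a problem of the form $\min_x \max_y \mathbb{E}[f(x,y,\xi)]$ subject to $\mathbb{E}[g(x,\zeta)] \le 0$: the expectation constraint is dualized with a multiplier $\lambda \ge 0$, all three variables are updated from samples, and iterates are averaged. This is a textbook-style illustration under assumed oracle access (all callback names are hypothetical), not the authors' exact algorithm.

```python
import numpy as np

def stochastic_primal_dual(grad_f_x, grad_f_y, g, grad_g_x,
                           sample_xi, sample_zeta,
                           x0, y0, n_iters, step_scale=1.0):
    """Sketch of stochastic gradient descent-ascent on the Lagrangian
    f(x, y; xi) + lam * g(x; zeta); x0, y0 are float numpy arrays and
    every callback is a hypothetical stand-in for an oracle."""
    x, y = x0.copy(), y0.copy()
    lam = 0.0                                   # multiplier for E[g(x)] <= 0
    x_avg, y_avg = np.zeros_like(x0), np.zeros_like(y0)
    eta = step_scale / np.sqrt(n_iters)         # step size of order 1/sqrt(N)
    for _ in range(n_iters):
        xi, zeta = sample_xi(), sample_zeta()
        # Stochastic gradients of the Lagrangian at the current point
        gx = grad_f_x(x, y, xi) + lam * grad_g_x(x, zeta)
        gy = grad_f_y(x, y, xi)
        x = x - eta * gx                        # primal descent step
        y = y + eta * gy                        # adversary ascent step
        lam = max(0.0, lam + eta * g(x, zeta))  # dual ascent, projected to lam >= 0
        x_avg += x / n_iters
        y_avg += y / n_iters
    return x_avg, y_avg                         # averaged (ergodic) iterates
```

With this $1/\sqrt{N}$ step-size choice and ergodic averaging, schemes of this form are the standard route to $\mathcal{O}(\frac{1}{\sqrt{N}})$ rates in the convex-concave setting.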
Related papers
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional settings in which the problem plausibly admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
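For reference, a minimal sketch of the classical (deterministic, unconstrained) extragradient step for $\min_x \max_y f(x,y)$ follows; it shows only the textbook update, not the low-rank semidefinite machinery studied in the paper, and the gradient callbacks are assumed inputs.

```python
def extragradient(grad_x, grad_y, x, y, eta=0.1, n_iters=1000):
    # Classical extragradient for min_x max_y f(x, y): gradients are
    # evaluated twice per iteration, once at the current point and once
    # at an extrapolated "look-ahead" point.
    for _ in range(n_iters):
        x_half = x - eta * grad_x(x, y)        # extrapolation step
        y_half = y + eta * grad_y(x, y)
        x = x - eta * grad_x(x_half, y_half)   # update with look-ahead gradients
        y = y + eta * grad_y(x_half, y_half)
    return x, y
```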
arXiv Detail & Related papers (2024-02-14T10:48:00Z)
- Can Decentralized Stochastic Minimax Optimization Algorithms Converge Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning problems.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Zeroth-Order Methods for Convex-Concave Minmax Problems: Applications to Decision-Dependent Risk Minimization [12.742028139557384]
We propose a random-reshuffling-based, gradient-free Optimistic Gradient Descent-Ascent algorithm for solving convex-concave min-max problems with finite-sum structure.
We prove that the algorithm enjoys the same convergence rate as that of zeroth-order algorithms for convex minimization problems.
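For intuition, zeroth-order (gradient-free) methods of this kind replace exact gradients with estimates built from function values only; a standard two-point estimator is sketched below. The smoothing radius mu and all names are illustrative, not taken from the paper.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    # Standard two-point gradient estimator: sample a random direction u
    # and difference the function values at x +/- mu * u along u.
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return ((f(x + mu * u) - f(x - mu * u)) / (2.0 * mu)) * u
```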
arXiv Detail & Related papers (2021-06-16T18:49:59Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient method which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional real vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization [25.73397307080647]
We present a new model of general convex or nonconvex machine learning objectives with sparsity constraints.
We propose an algorithm that solves a sequence of subproblems with gradually relaxed constraint levels; an outline of this pattern is sketched below.
We demonstrate the effectiveness of our algorithm on large-scale numerical problems.
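The overall structure such a method suggests is an outer loop over subproblems with an evolving constraint level. The outline below only illustrates that pattern; the solver, the schedule, and all names are assumptions, not the paper's method.

```python
def level_relaxation_loop(solve_subproblem, x0, level0, n_rounds, schedule):
    # Outer loop: approximately solve min f(x) s.t. g(x) <= level, then
    # update the level and warm-start the next subproblem at the solution.
    # `solve_subproblem(x, level)` and `schedule(level, k)` are hypothetical
    # callbacks standing in for the paper's components.
    x, level = x0, level0
    for k in range(n_rounds):
        x = solve_subproblem(x, level)   # inexact proximal-point style solve
        level = schedule(level, k)       # gradually relax the constraint level
    return x
```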
arXiv Detail & Related papers (2020-10-23T05:24:05Z)
- Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z)
- Nonconvex sparse regularization for deep neural networks and its optimality [1.9798034349981162]
Deep neural network (DNN) estimators can attain optimal convergence rates for regression and classification problems.
We propose a novel penalized estimation method for sparse DNNs.
We prove that the sparse-penalized estimator can adaptively attain minimax convergence rates for various nonparametric regression problems.
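As a generic illustration of nonconvex sparse regularization (the specific penalty, constants, and framework here are assumptions, not necessarily the paper's choice), one can add a clipped-L1 term to the training loss:

```python
import torch

def clipped_l1_penalty(model, lam=1e-3, tau=0.1):
    # Clipped L1: behaves like L1 near zero (promoting exact sparsity)
    # but saturates at tau, so large weights are not over-shrunk.
    total = 0.0
    for p in model.parameters():
        total = total + torch.clamp(p.abs(), max=tau).sum()
    return lam * total

# Usage sketch: loss = task_loss(model(x), y) + clipped_l1_penalty(model)
```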
arXiv Detail & Related papers (2020-03-26T07:15:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.