Stochastic Approximation Approaches to Group Distributionally Robust Optimization
- URL: http://arxiv.org/abs/2302.09267v4
- Date: Thu, 4 Jan 2024 03:42:11 GMT
- Title: Stochastic Approximation Approaches to Group Distributionally Robust Optimization
- Authors: Lijun Zhang, Peng Zhao, Zhen-Hua Zhuang, Tianbao Yang, Zhi-Hua Zhou
- Abstract summary: Group distributionally robust optimization (GDRO) learns a model that performs well over $m$ different distributions. Online learning techniques reduce the number of samples required in each round from $m$ to $1$ while keeping the same sample complexity. A novel formulation of weighted GDRO allows distribution-dependent convergence rates to be derived.
- Score: 96.26317627118912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates group distributionally robust optimization (GDRO),
with the aim of learning a model that performs well over $m$ different
distributions. First, we formulate GDRO as a stochastic convex-concave
saddle-point problem, and demonstrate that stochastic mirror descent (SMD),
using $m$ samples in each iteration, achieves an $O(m (\log m)/\epsilon^2)$
sample complexity for finding an $\epsilon$-optimal solution, which matches the
$\Omega(m/\epsilon^2)$ lower bound up to a logarithmic factor. Then, we make
use of techniques from online learning to reduce the number of samples required
in each round from $m$ to $1$, keeping the same sample complexity.
Specifically, we cast GDRO as a two-player game where one player simply
performs SMD and the other executes an online algorithm for non-oblivious
multi-armed bandits. Next, we consider a more practical scenario where the
number of samples that can be drawn from each distribution is different, and
propose a novel formulation of weighted GDRO, which allows us to derive
distribution-dependent convergence rates. Denote by $n_i$ the sample budget for
the $i$-th distribution, and assume $n_1 \geq n_2 \geq \cdots \geq n_m$. In the
first approach, we incorporate non-uniform sampling into SMD such that the
sample budget is satisfied in expectation, and prove that the excess risk of
the $i$-th distribution decreases at an $O(\sqrt{n_1 \log m}/n_i)$ rate. In the
second approach, we use mini-batches to meet the budget exactly and also reduce
the variance in the stochastic gradients, and then leverage the stochastic mirror-prox
algorithm, which can exploit small variances, to optimize a carefully designed
weighted GDRO problem. Under appropriate conditions, it attains an $O((\log
m)/\sqrt{n_i})$ convergence rate, which almost matches the optimal
$O(\sqrt{1/n_i})$ rate of only learning from the $i$-th distribution with $n_i$
samples.
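For intuition, the saddle-point method described above can be sketched in a few lines of numpy: stochastic mirror descent on the model $w$ (Euclidean geometry) paired with exponentiated-gradient ascent on the distribution weights $q$ over the simplex. This is a minimal illustrative sketch, not the paper's exact algorithm; `sample_loss_grad`, the constant step sizes, and the plain ergodic averaging are assumptions, and the one-sample bandit variant and the weighted GDRO methods are not shown.

```python
import numpy as np

def gdro_smd(sample_loss_grad, m, d, T, eta_w=0.1, eta_q=0.1):
    """Sketch of stochastic mirror descent for the GDRO saddle point
    min_w max_{q in simplex} sum_i q_i R_i(w).

    `sample_loss_grad(i, w)` is a hypothetical oracle returning a
    stochastic (loss, gradient) pair for distribution i at point w.
    """
    w = np.zeros(d)              # model iterate
    q = np.full(m, 1.0 / m)      # weights over the m distributions
    w_avg = np.zeros(d)
    for t in range(T):
        # draw one fresh sample from each of the m distributions
        losses = np.empty(m)
        grads = np.empty((m, d))
        for i in range(m):
            losses[i], grads[i] = sample_loss_grad(i, w)
        # descent step on w: gradient of the q-weighted loss
        w -= eta_w * grads.T @ q
        # exponentiated-gradient ascent on q, then renormalize
        q *= np.exp(eta_q * losses)
        q /= q.sum()
        w_avg += (w - w_avg) / (t + 1)   # running ergodic average
    return w_avg, q
```

The multiplicative update pushes weight toward the distributions with the largest current losses, which is the "robust to the worst group" behavior GDRO targets; the bandit-based variant in the paper instead draws a single sample per round according to $q$.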
Related papers
- Sum-of-squares lower bounds for Non-Gaussian Component Analysis [33.80749804695003]
Non-Gaussian Component Analysis (NGCA) is the statistical task of finding a non-Gaussian direction in a high-dimensional dataset.
Here we study the complexity of NGCA in the Sum-of-Squares framework.
arXiv Detail & Related papers (2024-10-28T18:19:13Z)
- Robust Distribution Learning with Local and Global Adversarial Corruptions [17.22168727622332]
We develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\,n^{-1/(k \lor 2)})$ when $P$ has bounded covariance.
Our efficient procedure relies on a novel trace norm approximation of an ideal yet intractable 2-Wasserstein projection estimator.
arXiv Detail & Related papers (2024-06-10T17:48:36Z)
- Efficient Algorithms for Empirical Group Distributional Robust Optimization and Beyond [15.664414751701718]
We formulate empirical GDRO as a $\textit{two-level}$ finite-sum convex-concave minimax optimization problem.
We compute the snapshot and mirror snapshot point by a one-index-shifted weighted average, which distinguishes us from the naive ergodic average.
Remarkably, our approach outperforms the state-of-the-art method by a factor of $\sqrt{m}$.
arXiv Detail & Related papers (2024-03-06T09:14:24Z)
- Replicable Clustering [57.19013971737493]
We propose algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their counterparts in a black-box manner.
We also provide experiments on synthetic distributions in 2D using the $k$-means++ implementation from sklearn as a black-box that validate our theoretical results.
arXiv Detail & Related papers (2023-02-20T23:29:43Z)
- Perfect Sampling from Pairwise Comparisons [26.396901523831534]
We study how to efficiently obtain perfect samples from a discrete distribution $\mathcal{D}$ given access only to pairwise comparisons of elements of its support.
We design a Markov chain whose stationary distribution coincides with $\mathcal{D}$ and give an algorithm to obtain exact samples using the technique of Coupling from the Past.
arXiv Detail & Related papers (2022-11-23T11:20:30Z)
- Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
- FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
- Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)