Sample-Efficient Linear Regression with Self-Selection Bias
- URL: http://arxiv.org/abs/2402.14229v1
- Date: Thu, 22 Feb 2024 02:20:24 GMT
- Title: Sample-Efficient Linear Regression with Self-Selection Bias
- Authors: Jason Gaitonde and Elchanan Mossel
- Abstract summary: We consider the problem of linear regression with self-selection bias in the unknown-index setting.
We provide a novel and near optimally sample-efficient (in terms of $k$) algorithm to recover $\mathbf{w}_1,\ldots,\mathbf{w}_k\in\mathbb{R}^n$.
Our algorithm succeeds under significantly relaxed noise assumptions, and therefore also succeeds in the related setting of max-linear regression.
- Score: 7.605563562103568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of linear regression with self-selection bias in the
unknown-index setting, as introduced in recent work by Cherapanamjeri,
Daskalakis, Ilyas, and Zampetakis [STOC 2023]. In this model, one observes $m$
i.i.d. samples $(\mathbf{x}_{\ell},z_{\ell})_{\ell=1}^m$ where
$z_{\ell}=\max_{i\in [k]}\{\mathbf{x}_{\ell}^T\mathbf{w}_i+\eta_{i,\ell}\}$,
but the maximizing index $i_{\ell}$ is unobserved. Here, the
$\mathbf{x}_{\ell}$ are assumed to be $\mathcal{N}(0,I_n)$ and the noise
distribution $\mathbf{\eta}_{\ell}\sim \mathcal{D}$ is centered and independent
of $\mathbf{x}_{\ell}$. We provide a novel and near optimally sample-efficient
(in terms of $k$) algorithm to recover $\mathbf{w}_1,\ldots,\mathbf{w}_k\in
\mathbb{R}^n$ up to additive $\ell_2$-error $\varepsilon$ with polynomial
sample complexity $\tilde{O}(n)\cdot \mathsf{poly}(k,1/\varepsilon)$ and
significantly improved time complexity
$\mathsf{poly}(n,k,1/\varepsilon)+O(\log(k)/\varepsilon)^{O(k)}$. When
$k=O(1)$, our algorithm runs in $\mathsf{poly}(n,1/\varepsilon)$ time,
generalizing the polynomial guarantee of an explicit moment matching algorithm
of Cherapanamjeri, et al. for $k=2$ and when it is known that
$\mathcal{D}=\mathcal{N}(0,I_k)$. Our algorithm succeeds under significantly
relaxed noise assumptions, and therefore also succeeds in the related setting
of max-linear regression where the added noise is taken outside the maximum.
For this problem, our algorithm is efficient in a much larger range of $k$ than
the state-of-the-art due to Ghosh, Pananjady, Guntuboyina, and Ramchandran
[IEEE Trans. Inf. Theory 2022] for not too small $\varepsilon$, and leads to
improved algorithms for any $\varepsilon$ by providing a warm start for
existing local convergence methods.
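To make the observation model concrete, here is a minimal simulation sketch (not the paper's algorithm) of the unknown-index setting: each sample reveals only the maximum response $z_{\ell}=\max_{i}\{\mathbf{x}_{\ell}^T\mathbf{w}_i+\eta_{i,\ell}\}$, while the self-selected index is hidden from the learner. The dimensions `n`, `k`, `m` are arbitrary illustrative choices.

```python
import numpy as np

# Generate m samples from the self-selection model with k hidden regressors.
rng = np.random.default_rng(0)
n, k, m = 5, 3, 1000

W = rng.normal(size=(k, n))    # hidden regressors w_1, ..., w_k
X = rng.normal(size=(m, n))    # covariates x_l ~ N(0, I_n)
eta = rng.normal(size=(m, k))  # centered noise, independent of x_l

responses = X @ W.T + eta      # shape (m, k): x_l^T w_i + eta_{i,l}
z = responses.max(axis=1)      # observed: only the maximum survives

# The maximizing index is what makes the problem hard: the learner never sees it.
hidden_index = responses.argmax(axis=1)
```

In the related max-linear regression setting mentioned above, the noise would instead be added outside the maximum, i.e. `z = (X @ W.T).max(axis=1) + noise`.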
Related papers
- Iterative thresholding for non-linear learning in the strong $\varepsilon$-contamination model [3.309767076331365]
We derive approximation bounds for learning single neuron models using thresholded descent.
We also study the linear regression problem, where $\sigma(\mathbf{x}) = \mathbf{x}$.
arXiv Detail & Related papers (2024-09-05T16:59:56Z)
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p}n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist into the finite-sum minimization, dubbed as continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ FOs that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z)
- Optimal Estimator for Linear Regression with Shuffled Labels [17.99906229036223]
This paper considers the task of linear regression with shuffled labels.
The model involves $\mathbf{Y} \in \mathbb{R}^{n\times m}$, $\mathbf{\Pi} \in \mathbb{R}^{n\times p}$, $\mathbf{B} \in \mathbb{R}^{p\times m}$, and $\mathbf{W} \in \mathbb{R}^{n\times m}$.
arXiv Detail & Related papers (2023-10-02T16:44:47Z)
- Matrix Completion in Almost-Verification Time [37.61139884826181]
We provide an algorithm which completes $\mathbf{M}$ on $99\%$ of rows and columns.
We show how to boost this partial completion guarantee to a full matrix completion algorithm.
arXiv Detail & Related papers (2023-08-07T15:24:49Z)
- Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ with $F(\mathbf{w}) \le C\cdot\mathrm{OPT} + \epsilon$ with high probability.
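As a minimal illustration of the single-neuron setup (not the paper's robust algorithm), the sketch below runs plain gradient descent on the square loss with $\sigma = \mathrm{ReLU}$, one monotone activation, on clean labels; the moment-based warm start `w0` and all hyperparameters are arbitrary assumptions for the demo.

```python
import numpy as np

# Learn x -> sigma(w . x), sigma = ReLU, by gradient descent on the square loss.
rng = np.random.default_rng(2)
n, m, lr, steps = 10, 5000, 0.1, 200

w_true = rng.normal(size=n)
X = rng.normal(size=(m, n))          # Gaussian covariates
y = np.maximum(X @ w_true, 0.0)      # clean labels sigma(w_true . x)

# Moment-based warm start: E[y x] = w_true / 2 for ReLU under Gaussian inputs,
# so this initializer is well-correlated with the target direction.
w = (y @ X) / m

for _ in range(steps):
    pred = np.maximum(X @ w, 0.0)
    # (sub)gradient of the empirical square loss mean((sigma(w.x) - y)^2) / 2
    grad = ((pred - y) * (X @ w > 0)) @ X / m
    w -= lr * grad

final_loss = np.mean((np.maximum(X @ w, 0.0) - y) ** 2)
```

With adversarial label noise, as in the paper above, plain gradient descent needs additional care; this sketch only shows the noiseless mechanics.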
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
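The basic sketch-and-solve idea behind such results can be shown in a few lines (a simplified one-shot sketch, not the paper's iterative algorithm): compress the tall design matrix with a Gaussian sketch and solve the small regularized problem. The sizes `n`, `d`, `s` and regularizer `lam` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, s, lam = 2000, 20, 500, 0.1

# Synthetic ridge instance with a planted solution plus small noise.
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

# Exact ridge solution: (A^T A + lam I)^{-1} A^T b
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Gaussian sketch S in R^{s x n} with i.i.d. N(0, 1/s) entries, s << n.
S = rng.normal(scale=1.0 / np.sqrt(s), size=(s, n))
SA, Sb = S @ A, S @ b
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
```

The sketched solve costs $O(snd)$ for the sketch plus a $d$-dimensional linear system, rather than working with all $n$ rows in every iteration of a direct method.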
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound $O(d^{\max\{1,p/2\}}\log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z)
- Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication Time [14.990725929840892]
We show an algorithm that runs in time $\widetilde{O}(T(N, d)\log\kappa/\mathrm{poly}(\varepsilon))$, where $T(N, d)$ is the time it takes to multiply a $d \times N$ matrix by its transpose.
Our runtime matches that of the fastest algorithm for covariance estimation without outliers, up to poly-logarithmic factors.
arXiv Detail & Related papers (2020-06-23T20:21:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.