Universal Regression with Adversarial Responses
- URL: http://arxiv.org/abs/2203.05067v3
- Date: Sat, 10 Jun 2023 01:22:34 GMT
- Title: Universal Regression with Adversarial Responses
- Authors: Moïse Blanchard, Patrick Jaillet
- Abstract summary: We provide algorithms for regression with adversarial responses under large classes of non-i.i.d. instance sequences.
We consider universal consistency which asks for strong consistency of a learner without restrictions on the value responses.
- Score: 26.308541799686505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide algorithms for regression with adversarial responses under large
classes of non-i.i.d. instance sequences, on general separable metric spaces,
with provably minimal assumptions. We also give characterizations of
learnability in this regression context. We consider universal consistency
which asks for strong consistency of a learner without restrictions on the
value responses. Our analysis shows that such an objective is achievable for a
significantly larger class of instance sequences than stationary processes, and
unveils a fundamental dichotomy between value spaces: whether finite-horizon
mean estimation is achievable or not. We further provide optimistically
universal learning rules, i.e., such that if they fail to achieve universal
consistency, any other algorithms will fail as well. For unbounded losses, we
propose a mild integrability condition under which there exist algorithms for
adversarial regression under large classes of non-i.i.d. instance sequences. In
addition, our analysis also provides a learning rule for mean estimation in
general metric spaces that is consistent under adversarial responses without
any moment conditions on the sequence, a result of independent interest.
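For context, the consistency objective here is usually formalized as vanishing average excess loss against every fixed measurable comparator; the notation below is a standard rendering of that notion rather than a quotation from the paper. A learning rule producing predictions $\hat y_t$ from past instances and responses is strongly consistent against the response sequence $(y_t)$ if, for every measurable $f:\mathcal{X}\to\mathcal{Y}$,
\[
\limsup_{T\to\infty}\ \frac{1}{T}\sum_{t=1}^{T}\Bigl(\ell(\hat y_t, y_t) - \ell(f(x_t), y_t)\Bigr) \le 0 \quad \text{almost surely},
\]
and universally consistent over a class of instance processes if this holds for every process in the class. "Optimistically universal" then means that on any instance process where the rule fails, no other rule can be universally consistent either.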
Related papers
- Fine-grained analysis of non-parametric estimation for pairwise learning [9.676007573960383]
We are concerned with the generalization performance of non-parametric estimation for pairwise learning.
Our results can be used to handle a wide range of pairwise learning problems including ranking, AUC, pairwise regression and metric and similarity learning.
arXiv Detail & Related papers (2023-05-31T08:13:14Z) - Agnostic Multi-Robust Learning Using ERM [19.313739782029185]
A fundamental problem in robust learning is asymmetry: a learner needs to correctly classify every one of exponentially-many perturbations that an adversary might make to a test-time natural example.
In contrast, the attacker only needs to find one successful perturbation.
We introduce a novel multi-group setting and a corresponding multi-robust learning problem.
arXiv Detail & Related papers (2023-03-15T21:30:14Z) - Generalization Analysis for Contrastive Representation Learning [80.89690821916653]
Existing generalization error bounds depend linearly on the number $k$ of negative examples.
We establish novel generalization bounds for contrastive learning which do not depend on $k$, up to logarithmic terms.
arXiv Detail & Related papers (2023-02-24T01:03:56Z) - Contextual Bandits and Optimistically Universal Learning [32.14208422566497]
We focus on consistency -- vanishing regret compared to the optimal policy.
We show that for large classes of non-i.i.d. contexts, consistency can be achieved regardless of the time-invariant reward mechanism.
arXiv Detail & Related papers (2022-12-31T16:15:28Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Root-finding Approaches for Computing Conformal Prediction Set [18.405645120971496]
Conformal prediction constructs a confidence region for an unobserved response of a feature vector based on previous identically distributed and exchangeable observations.
We exploit the fact that, often, conformal prediction sets are intervals whose boundaries can be efficiently approximated by classical root-finding software (see the sketch after this list).
arXiv Detail & Related papers (2021-04-14T06:41:12Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that of ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Dimension Free Generalization Bounds for Non Linear Metric Learning [61.193693608166114]
We provide uniform generalization bounds for two regimes -- the sparse regime, and a non-sparse regime.
We show that by relying on a different, new property of the solutions, it is still possible to provide dimension free generalization guarantees.
arXiv Detail & Related papers (2021-02-07T14:47:00Z) - Implicit Regularization in ReLU Networks with the Square Loss [56.70360094597169]
We show that it is impossible to characterize the implicit regularization with the square loss by any explicit function of the model parameters.
Our results suggest that a more general framework may be needed to understand implicit regularization for nonlinear predictors.
arXiv Detail & Related papers (2020-12-09T16:48:03Z) - Failures of model-dependent generalization bounds for least-norm interpolation [39.97534972432276]
We consider bounds on the generalization performance of the least-norm linear regressor.
For a variety of natural joint distributions on training examples, any valid generalization bound must sometimes be very loose.
arXiv Detail & Related papers (2020-10-16T16:30:05Z) - Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles [112.89548995091182]
We provide the first universal and optimal reduction from contextual bandits to online regression.
Our algorithm requires no distributional assumptions beyond realizability, and works even when contexts are chosen adversarially.
arXiv Detail & Related papers (2020-02-12T11:33:46Z)
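As a side illustration of the root-finding idea in the conformal-prediction entry above, here is a minimal sketch. It assumes a full-conformal p-value with absolute-residual scores, an ordinary least-squares regressor, and a hand-picked bracketing interval; none of these choices are taken from that paper. The conformal set {z : p-value(z) > alpha} is treated as an interval whose endpoints are located with scipy's brentq.

```python
import numpy as np
from scipy.optimize import brentq


def fit_predict(X, y, X_query):
    """Ordinary least squares with an intercept; any regressor could be swapped in."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    A_q = np.column_stack([np.ones(len(X_query)), X_query])
    return A_q @ coef


def conformal_pvalue(z, X, y, x_new):
    """Full-conformal p-value of candidate response z at x_new, using absolute residuals."""
    X_aug = np.vstack([X, x_new.reshape(1, -1)])
    y_aug = np.append(y, z)
    residuals = np.abs(y_aug - fit_predict(X_aug, y_aug, X_aug))
    # Fraction of augmented points whose score is at least the test point's score.
    return np.mean(residuals >= residuals[-1])


def conformal_interval(X, y, x_new, alpha=0.1, span=50.0):
    """Locate the endpoints of {z : p-value(z) > alpha}, assumed to be an interval,
    by applying a classical root finder to p-value(z) - alpha."""
    g = lambda z: conformal_pvalue(z, X, y, x_new) - alpha
    center = float(fit_predict(X, y, x_new.reshape(1, -1))[0])  # assumed to lie inside the set
    lo, hi = center - span, center + span
    left = brentq(g, lo, center) if g(lo) < 0 else lo    # clamp if the set reaches the bracket edge
    right = brentq(g, center, hi) if g(hi) < 0 else hi
    return left, right


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(100, 1))
    y = 1.5 * X[:, 0] + rng.normal(scale=0.5, size=100)
    print(conformal_interval(X, y, np.array([0.5]), alpha=0.1))
```

The p-value function is piecewise constant, so the root finder returns an approximation of the jump location, consistent with the "efficiently approximated" phrasing in that entry.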