On damage of interpolation to adversarial robustness in regression
- URL: http://arxiv.org/abs/2601.16070v1
- Date: Thu, 22 Jan 2026 16:09:00 GMT
- Title: On damage of interpolation to adversarial robustness in regression
- Authors: Jingfu Peng, Yuhong Yang
- Abstract summary: We investigate the adversarial robustness of interpolating estimators in a framework of nonparametric regression. A key finding is that interpolating estimators must be suboptimal even under a subtle future $X$-attack. An interesting phenomenon in the high interpolation regime, which we term the curse of simple size, is also revealed and discussed.
- Score: 3.934085474465338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) typically involve a large number of parameters and are trained to achieve zero or near-zero training error. Despite such interpolation, they often exhibit strong generalization performance on unseen data, a phenomenon that has motivated extensive theoretical investigations. Comforting results show that interpolation indeed may not affect the minimax rate of convergence under the squared error loss. In the meantime, DNNs are well known to be highly vulnerable to adversarial perturbations in future inputs. A natural question then arises: Can interpolation also escape from suboptimal performance under a future $X$-attack? In this paper, we investigate the adversarial robustness of interpolating estimators in a framework of nonparametric regression. A finding is that interpolating estimators must be suboptimal even under a subtle future $X$-attack, and achieving perfect fitting can substantially damage their robustness. An interesting phenomenon in the high interpolation regime, which we term the curse of simple size, is also revealed and discussed. Numerical experiments support our theoretical findings.
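The phenomenon the abstract describes can be illustrated numerically. The sketch below is not the paper's construction; it is a hypothetical 1-D experiment comparing an interpolating estimator (1-NN, which fits the noisy training labels exactly) against a smoothing estimator (k-NN averaging) when test inputs are adversarially perturbed within a small radius, i.e., a crude approximation of a future $X$-attack. The true function, noise level, and perturbation budget are all assumptions chosen for illustration.

```python
# Hypothetical illustration (not the paper's construction): an interpolating
# 1-NN estimator vs. a smoothing 15-NN estimator under a small worst-case
# x-perturbation in 1-D nonparametric regression.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x)  # assumed true regression function

n = 200
x_train = np.sort(rng.uniform(0, 1, n))
y_train = f(x_train) + 0.3 * rng.normal(size=n)  # noisy labels

def knn_predict(x_query, k):
    # Average the labels of the k nearest training points for each query.
    idx = np.argsort(np.abs(x_train[None, :] - x_query[:, None]), axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def worst_case_error(k, eps, n_grid=400, n_pert=21):
    # Crude future X-attack: for each test point, take the worst squared
    # error over input perturbations in [-eps, eps], then average.
    x_test = np.linspace(0.05, 0.95, n_grid)
    worst = np.zeros(n_grid)
    for d in np.linspace(-eps, eps, n_pert):
        err = (knn_predict(x_test + d, k) - f(x_test)) ** 2
        worst = np.maximum(worst, err)
    return worst.mean()

eps = 0.01
err_interp = worst_case_error(k=1, eps=eps)   # interpolating estimator
err_smooth = worst_case_error(k=15, eps=eps)  # smoothing estimator
print(f"1-NN (interpolating) worst-case risk: {err_interp:.3f}")
print(f"15-NN (smoothing)   worst-case risk: {err_smooth:.3f}")
```

Under these assumptions, the interpolating estimator inherits the label noise pointwise, so its worst-case risk under perturbation stays far above that of the smoothing estimator, consistent with the abstract's claim that perfect fitting damages robustness.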
Related papers
- Unregularized Linear Convergence in Zero-Sum Game from Preference Feedback [50.89125374999765]
We provide the first convergence guarantee for Optimistic Multiplicative Weights Update ($\mathtt{OMWU}$) in NLHF. Our analysis identifies a novel marginal convergence behavior, where the probability of rarely played actions grows exponentially from exponentially small values.
arXiv Detail & Related papers (2025-12-31T12:08:29Z) - Towards Interpretable Adversarial Examples via Sparse Adversarial Attack [22.588476144401977]
Sparse attacks optimize the magnitude of adversarial perturbations for fooling deep neural networks (DNNs). Existing solutions fail to yield interpretable adversarial examples due to their poor sparsity. In this paper, we aim to develop a sparse attack for understanding the vulnerability of CNNs by minimizing the magnitude of initial perturbations.
arXiv Detail & Related papers (2025-06-08T09:13:30Z) - Robust deep learning from weakly dependent data [0.0]
This paper considers robust deep learning from weakly dependent observations, with unbounded loss function and unbounded input/output.
We derive a relationship between these bounds and $r$, and when the data have moments of any order (that is, $r=\infty$), the convergence rate is close to some well-known results.
arXiv Detail & Related papers (2024-05-08T14:25:40Z) - Can overfitted deep neural networks in adversarial training generalize? -- An approximation viewpoint [25.32729343174394]
Adversarial training is a widely used method to improve the robustness of deep neural networks (DNNs) against adversarial perturbations.
In this paper, we provide a theoretical understanding of whether overfitted DNNs in adversarial training can generalize from an approximation viewpoint.
arXiv Detail & Related papers (2024-01-24T17:54:55Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Benefit of Interpolation in Nearest Neighbor Algorithms [21.79888306754263]
In some studies, it is observed that over-parametrized deep neural networks achieve a small testing error even when the training error is almost zero.
We turn to another way to enforce zero training error (without over-parametrization) through a data interpolation mechanism.
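The idea of enforcing zero training error without over-parametrization can be sketched in the spirit of interpolated nearest-neighbor schemes. The construction below is an assumed illustration, not necessarily the cited paper's exact mechanism: neighbors are weighted by an inverse power of their distance, so the prediction still averages nearby labels yet reproduces every training label exactly.

```python
# Assumed sketch of an interpolated nearest-neighbor regressor: inverse-
# distance-power weights make the fit interpolate the training data exactly
# while still averaging over the k nearest labels away from training points.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 50)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=50)

def interpolated_nn(x0, k=5, gamma=2.0):
    d = np.abs(x_train - x0)
    idx = np.argsort(d)[:k]   # k nearest neighbors of the query
    d_k = d[idx]
    if d_k[0] == 0.0:         # exactly at a training point: return its label
        return y_train[idx[0]]
    w = d_k ** (-gamma)       # weights diverge as the query nears a sample
    return np.sum(w * y_train[idx]) / np.sum(w)

# Zero training error: the estimator reproduces every noisy training label.
train_preds = np.array([interpolated_nn(x) for x in x_train])
print(np.allclose(train_preds, y_train))
```

Because the weight on the nearest neighbor diverges as the query approaches a training point, the fit passes through every (noisy) training label continuously, which is exactly the interpolation property the snippet refers to.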
arXiv Detail & Related papers (2022-02-23T22:47:18Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z) - Network Moments: Extensions and Sparse-Smooth Attacks [59.24080620535988]
We derive exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input.
We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates.
arXiv Detail & Related papers (2020-06-21T11:36:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.