When Flatness Does (Not) Guarantee Adversarial Robustness
- URL: http://arxiv.org/abs/2510.14231v1
- Date: Thu, 16 Oct 2025 02:15:14 GMT
- Title: When Flatness Does (Not) Guarantee Adversarial Robustness
- Authors: Nils Philipp Walter, Linara Adilova, Jilles Vreeken, Michael Kamp
- Abstract summary: A longstanding hypothesis suggests that flat minima, regions of low curvature in the loss landscape, offer increased robustness. We show this is only partially correct: flatness implies local but not global adversarial robustness. To maintain robustness beyond a local neighborhood, the loss needs to curve sharply away from the data manifold.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite their empirical success, neural networks remain vulnerable to small, adversarial perturbations. A longstanding hypothesis suggests that flat minima, regions of low curvature in the loss landscape, offer increased robustness. While intuitive, this connection has remained largely informal and incomplete. By rigorously formalizing the relationship, we show this intuition is only partially correct: flatness implies local but not global adversarial robustness. To arrive at this result, we first derive a closed-form expression for relative flatness in the penultimate layer, and then show we can use this to constrain the variation of the loss in input space. This allows us to formally analyze the adversarial robustness of the entire network. We then show that to maintain robustness beyond a local neighborhood, the loss needs to curve sharply away from the data manifold. We validate our theoretical predictions empirically across architectures and datasets, uncovering the geometric structure that governs adversarial vulnerability, and linking flatness to model confidence: adversarial examples often lie in large, flat regions where the model is confidently wrong. Our results challenge simplified views of flatness and provide a nuanced understanding of its role in robustness.
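To make the central quantity concrete, below is a minimal sketch of estimating a relative-flatness value for the penultimate layer. It assumes the common ||W||_F^2 * Tr(H) form, with H the Hessian of the loss with respect to the penultimate-layer weights; the paper derives its own closed-form expression, so treat the function and the Hutchinson trace estimator here as illustrative rather than the authors' method.

```python
# A minimal sketch, assuming the ||W||_F^2 * Tr(H) form of relative flatness;
# function names and the Hutchinson estimator are illustrative.
import torch

def penultimate_relative_flatness(model, loss_fn, x, y, layer, n_probes=10):
    """Estimate ||W||_F^2 * Tr(H), where H is the Hessian of the loss with
    respect to the penultimate-layer weight matrix `layer.weight`."""
    w = layer.weight
    loss = loss_fn(model(x), y)
    (grad,) = torch.autograd.grad(loss, w, create_graph=True)
    trace_est = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(w)                         # Gaussian probe vector
        (hv,) = torch.autograd.grad(grad, w, grad_outputs=v,
                                    retain_graph=True)  # Hessian-vector product
        trace_est += (v * hv).sum().item()              # E[v^T H v] = Tr(H)
    trace_est /= n_probes
    return w.detach().norm().item() ** 2 * trace_est
```

On this reading, a small value certifies only that the loss varies slowly near the training data, which is exactly the local-but-not-global distinction the abstract draws.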
Related papers
- The Inductive Bias of Convolutional Neural Networks: Locality and Weight Sharing Reshape Implicit Regularization [57.37943479039033]
We study how architectural inductive bias reshapes the implicit regularization induced by the edge-of-stability phenomenon in gradient descent. We show that locality and weight sharing fundamentally change this picture.
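For context, "edge of stability" refers to the empirical regime in which gradient descent with step size eta drives the sharpness, the largest Hessian eigenvalue of the training loss, to roughly 2/eta. A minimal power-iteration sketch for measuring that sharpness follows; the function name and Hessian-vector-product setup are illustrative, not this paper's code.

```python
# A minimal power-iteration sketch (illustrative names) for the sharpness,
# i.e., the top Hessian eigenvalue of the training loss.
import torch

def hessian_sharpness(model, loss_fn, x, y, n_iters=20):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(n_iters):
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        norm = torch.sqrt(sum((h * h).sum() for h in hv))
        v = [h / norm for h in hv]                     # normalized power iterate
    hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
    return sum((vi * hi).sum() for vi, hi in zip(v, hv)).item()  # Rayleigh quotient
```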
arXiv Detail & Related papers (2026-03-05T04:50:51Z)
- Generalization Below the Edge of Stability: The Role of Data Geometry [60.147710896851045]
We show how data geometry controls generalization in ReLU networks trained below the edge of stability. For data distributions supported on a mixture of low-dimensional balls, we derive generalization bounds that provably adapt to the intrinsic dimension. Our results consolidate disparate empirical findings that have appeared in the literature.
arXiv Detail & Related papers (2025-10-20T21:40:36Z)
- Stable Minima of ReLU Neural Networks Suffer from the Curse of Dimensionality: The Neural Shattering Phenomenon [25.998397575754865]
We study the implicit bias of flatness / low (loss) curvature and its effects on generalization in ReLU networks. We show that while flatness does imply generalization, the resulting rates of convergence necessarily deteriorate exponentially as the input dimension grows.
arXiv Detail & Related papers (2025-06-25T19:10:03Z)
- Flatness After All? [6.977444416330261]
We argue that generalization could be assessed by measuring flatness using a soft rank measure of the Hessian. For non-calibrated models, we connect a soft-rank-based flatness measure to the well-known Takeuchi Information Criterion.
arXiv Detail & Related papers (2025-06-21T20:33:36Z)
- The Uncanny Valley: Exploring Adversarial Robustness from a Flatness Perspective [34.55229189445268]
Flatness of the loss surface not only correlates positively with generalization, but is also related to adversarial robustness. In this paper, we empirically analyze the relation between adversarial examples and relative flatness with respect to the parameters of one layer.
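For concreteness, adversarial examples of the kind analyzed above are commonly generated with projected gradient descent under an L_inf budget. The sketch below is that standard attack with illustrative hyperparameters, not the paper's specific evaluation setup.

```python
# The standard L_inf PGD attack (hyperparameters are illustrative); this is
# the usual way adversarial examples like those analyzed above are produced.
import torch

def pgd_linf(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                         # keep valid pixel range
    return x_adv.detach()
```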
arXiv Detail & Related papers (2024-05-27T08:10:46Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss also allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- On the Robustness of Neural Collapse and the Neural Collapse of Robustness [6.227447957721122]
Neural Collapse refers to the curious phenomenon at the end of training of a neural network, where feature vectors and classification weights converge to a very simple geometric arrangement (a simplex).
We study the stability properties of these simplices, and find that the simplex structure disappears under small adversarial attacks.
We identify novel properties of both robust and non-robust machine learning models, and show that earlier layers, unlike later ones, maintain reliable simplices on perturbed data.
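A minimal sketch of how the simplex structure can be probed: under neural collapse, the centered class-mean features form a simplex equiangular tight frame (ETF), so every pairwise cosine similarity should approach -1/(C-1). The helper below, with illustrative names, measures the worst-case deviation from that value.

```python
# A hypothetical helper for checking the simplex (ETF) structure: under
# neural collapse, pairwise cosines of centered class means -> -1/(C-1).
import torch

def simplex_etf_deviation(features, labels, num_classes):
    means = torch.stack([features[labels == c].mean(dim=0)
                         for c in range(num_classes)])
    means = means - means.mean(dim=0)                  # center by the global mean
    means = means / means.norm(dim=1, keepdim=True)    # unit-normalize class means
    cos = means @ means.T                              # pairwise cosine similarities
    target = -1.0 / (num_classes - 1)                  # off-diagonal ETF value
    off_diag = cos[~torch.eye(num_classes, dtype=torch.bool)]
    return (off_diag - target).abs().max().item()      # worst deviation from ETF
```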
arXiv Detail & Related papers (2023-11-13T16:18:58Z)
- An Unconstrained Layer-Peeled Perspective on Neural Collapse [20.75423143311858]
We introduce a surrogate model called the unconstrained layer-peeled model (ULPM).
We prove that gradient flow on this model converges to critical points of a minimum-norm separation problem exhibiting neural collapse in its global minimizer.
We show that our results also hold during the training of neural networks in real-world tasks when explicit regularization or weight decay is not used.
arXiv Detail & Related papers (2021-10-06T14:18:47Z)
- On the (Un-)Avoidability of Adversarial Examples [4.822598110892847]
Adversarial examples have caused substantial concern over the reliability of deep learning models.
We provide a framework for determining whether a model's label change under small perturbation is justified.
We prove that our adaptive data-augmentation maintains consistency of 1-nearest neighbor classification under deterministic labels.
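As a toy illustration only, not the paper's formal framework: the intuition is that a label change under a small perturbation is "justified" when the underlying ground truth, here approximated by a 1-nearest-neighbor rule, itself changes. A minimal sketch with hypothetical helper names:

```python
# A toy sketch (hypothetical helpers): a label change under perturbation is
# treated as "justified" when the 1-nearest-neighbor label itself changes.
import torch

def nn1_label(x, train_x, train_y):
    dists = torch.cdist(x.unsqueeze(0), train_x).squeeze(0)  # distances to train set
    return train_y[dists.argmin()]

def label_change_justified(x, x_perturbed, train_x, train_y):
    return nn1_label(x, train_x, train_y) != nn1_label(x_perturbed, train_x, train_y)
```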
arXiv Detail & Related papers (2021-06-24T21:35:25Z)
- Relating Adversarially Robust Generalization to Flat Minima [138.59125287276194]
Adversarial training (AT) has become the de-facto standard to obtain models robust against adversarial examples.
We study the relationship between robust generalization and flatness of the robust loss landscape in weight space.
arXiv Detail & Related papers (2021-04-09T15:55:01Z)
- Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation reveals a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
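As a simplified illustration of the idea, the cheapest mode-connectivity probe evaluates the loss along the straight segment between two trained checkpoints; the full analysis uses learned low-loss connecting curves, so this linear sketch with illustrative names is only the degenerate case.

```python
# A simplified linear probe between two checkpoints (illustrative names).
# Assumes both state_dicts contain float tensors with matching keys.
import torch

def loss_along_linear_path(model, state_a, state_b, loss_fn, x, y, n_points=11):
    losses = []
    for t in torch.linspace(0.0, 1.0, n_points):
        interp = {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}
        model.load_state_dict(interp)
        with torch.no_grad():
            losses.append(loss_fn(model(x), y).item())
    return losses  # a low, flat profile indicates a connected (linear) path
```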
arXiv Detail & Related papers (2020-04-30T19:12:50Z)