Robust Machine Learning via Privacy/Rate-Distortion Theory
- URL: http://arxiv.org/abs/2007.11693v2
- Date: Tue, 18 May 2021 21:13:24 GMT
- Title: Robust Machine Learning via Privacy/Rate-Distortion Theory
- Authors: Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, Toshiaki Koike-Akino,
Pierre Moulin
- Abstract summary: Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples.
Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem.
This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean data performance.
- Score: 34.28921458311185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust machine learning formulations have emerged to address the prevalent
vulnerability of deep neural networks to adversarial examples. Our work draws
the connection between optimal robust learning and the privacy-utility tradeoff
problem, which is a generalization of the rate-distortion problem. The saddle
point of the game between a robust classifier and an adversarial perturbation
can be found via the solution of a maximum conditional entropy problem. This
information-theoretic perspective sheds light on the fundamental tradeoff
between robustness and clean data performance, which ultimately arises from the
geometric structure of the underlying data distribution and perturbation
constraints.
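To make the abstract's claim concrete, here is one standard way to write the game it describes (a sketch in our notation; the feasible set and the log loss are illustrative choices, not quoted from the paper):

```latex
% Adversarial learning as a minimax game between a classifier Q_{\hat{Y}|Z}
% and a perturbation channel P_{Z|X} restricted to a feasible set
% \mathcal{P} (e.g., d(X,Z) \le \epsilon). Notation is illustrative.
\[
  \min_{Q_{\hat{Y}|Z}} \; \max_{P_{Z|X} \in \mathcal{P}} \;
  \mathbb{E}\bigl[-\log Q_{\hat{Y}|Z}(Y \mid Z)\bigr]
\]
% At the saddle point the classifier matches the induced posterior, so the
% adversary's problem reduces to a maximum conditional entropy problem:
\[
  \max_{P_{Z|X} \in \mathcal{P}} \; H(Y \mid Z),
\]
% mirroring the privacy-utility tradeoff, where a privacy mechanism
% maximizes the equivocation H(Y|Z) of a sensitive attribute Y.
```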
Related papers
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning runs the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- To be or not to be stable, that is the question: understanding neural networks for inverse problems [0.0]
In this paper, we theoretically analyze the trade-off between stability and accuracy of neural networks.
We propose different supervised and unsupervised solutions to increase the network stability and maintain a good accuracy.
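As a rough formalization of the trade-off (ours, drawn from standard inverse-problems usage rather than quoted from this paper), stability of a learned reconstruction map can be measured by a Lipschitz-type constant:

```latex
% Stability of a reconstruction network \Psi : Y \to X measured by a
% Lipschitz constant (illustrative definition, not the paper's exact one):
\[
  L(\Psi) \;=\; \sup_{y_1 \neq y_2}
  \frac{\lVert \Psi(y_1) - \Psi(y_2) \rVert}{\lVert y_1 - y_2 \rVert}.
\]
% For an ill-posed forward operator A, driving the reconstruction error
% \|\Psi(Ax) - x\| down on clean data tends to drive L(\Psi) up, i.e.
% accuracy and stability pull in opposite directions.
```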
arXiv Detail & Related papers (2022-11-24T16:16:40Z)
- Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
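To make "sound" concrete: one way to build a perturbation whose effect on the true optimum is known by construction is to insert a TSP city on an edge of a known optimal tour. This is our illustrative example, not necessarily the paper's construction:

```python
import numpy as np

def insert_city_on_tour_edge(cities, tour, edge_idx, t=0.5):
    """Insert a new city on the segment between two consecutive tour cities.

    If `tour` is optimal for `cities`, the perturbed instance has a known
    optimal tour of the same length: visit the new city between its two
    neighbors. Collinear insertion adds zero extra length, and by the
    triangle inequality no shorter tour of the new instance can exist.
    """
    a, b = tour[edge_idx], tour[(edge_idx + 1) % len(tour)]
    new_city = (1 - t) * cities[a] + t * cities[b]   # point on segment a->b
    new_cities = np.vstack([cities, new_city])
    new_idx = len(cities)
    pos = edge_idx + 1
    new_tour = tour[:pos] + [new_idx] + tour[pos:]   # splice in after city a
    return new_cities, new_tour

# Tiny usage example: a unit square with the obvious optimal tour.
cities = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
new_cities, new_tour = insert_city_on_tour_edge(cities, [0, 1, 2, 3], edge_idx=0)
```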
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
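The joint-perturbation objective can be sketched as follows (our notation; the norms and budgets are illustrative):

```latex
% Non-singular robustness: worst-case loss under simultaneous, bounded
% perturbations of the input x and the weights w (illustrative notation):
\[
  \max_{\lVert \delta_x \rVert \le \epsilon_x,\;
        \lVert \delta_w \rVert \le \epsilon_w}
  \; \mathcal{L}\bigl(f_{w + \delta_w}(x + \delta_x),\, y\bigr),
\]
% recovering the usual (singular) input-only robustness at
% \epsilon_w = 0 and weight-only robustness at \epsilon_x = 0.
```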
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
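A minimal sketch of this style of training, assuming a differentiable attribute-conditioned generator `decode(z)` and a classifier `clf` (both hypothetical interfaces; the inner loop, step size, and budget are our choices):

```python
import torch
import torch.nn.functional as F

def attribute_adversarial_step(clf, decode, z, y, eps=0.1, steps=5, lr=0.05):
    """Search the attribute space for a perturbation that maximizes the
    classifier's loss, then return the regenerated adversarial sample.

    `decode` maps attribute vectors z to inputs; `clf` maps inputs to
    logits. Both are assumed differentiable (hypothetical interfaces).
    """
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(clf(decode(z + delta)), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()   # ascend the classifier's loss
            delta.clamp_(-eps, eps)     # stay near the original attributes
    return decode(z + delta).detach()

# Training then mixes these regenerated samples into the usual objective,
# exposing the classifier to shifts in the attribute space.
```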
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
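For orientation, the decomposition in question is the classical one, evaluated at adversarially perturbed inputs (our rendering; the paper's exact definitions may differ):

```latex
% Bias-variance decomposition of squared error at a perturbed input
% x + \delta, where f_D is the network trained on a random dataset D
% and \bar{f}(x) = \mathbb{E}_D[f_D(x)] is the average predictor:
\[
  \mathbb{E}_{D}\bigl[(y - f_D(x+\delta))^2\bigr]
  = \underbrace{\bigl(y - \bar{f}(x+\delta)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_{D}\bigl[(f_D(x+\delta) - \bar{f}(x+\delta))^2\bigr]}_{\text{variance}}
\]
```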
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning [11.206758778146288]
We consider the theoretical problem of designing an optimal adversarial attack on a decision system.
We present derivations of the optimal adversarial attacks for discrete and continuous signals of interest.
We show that it is much harder to achieve adversarial attacks for minimizing mutual information when multiple redundant copies of the input signal are available.
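The optimization alluded to can be sketched as follows (our notation, hedged): the attacker picks a perturbation channel that minimizes the information the corrupted signal Z carries about the quantity of interest Y, subject to a distortion budget:

```latex
% Information-theoretically optimal attack as mutual-information
% minimization under a distortion constraint (illustrative notation):
\[
  \min_{P_{Z|X}} \; I(Y; Z)
  \quad \text{s.t.} \quad \mathbb{E}\bigl[d(X, Z)\bigr] \le \epsilon.
\]
% With n redundant observations X_1, ..., X_n of the same Y, the attack
% must suppress I(Y; Z_1, ..., Z_n), which grows harder with n --
% consistent with the redundancy result summarized above.
```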
arXiv Detail & Related papers (2020-07-28T07:45:25Z)
- Unique properties of adversarially trained linear classifiers on Gaussian data [13.37805637358556]
The adversarial learning research community has made remarkable progress in understanding the root causes of adversarial perturbations.
It is common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to real-world datasets.
In particular, we show that with a linear classifier it is always possible to solve a binary classification problem on Gaussian data under arbitrary levels of adversarial corruption.
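A small numerical illustration of the Gaussian setting (our construction, not necessarily the paper's corruption model): for class-conditional Gaussians with means ±μ and a linear classifier w, the worst-case ℓ2 perturbation of size ε shifts each point by ε against the margin, so robust accuracy stays above chance whenever ‖μ‖ exceeds ε:

```python
import numpy as np

rng = np.random.default_rng(0)

# Class-conditional Gaussians: x ~ N(y * mu, I), y in {-1, +1}.
d, n, eps = 10, 5000, 1.0
mu = np.full(d, 0.5)                      # ||mu|| ~ 1.58 > eps
y = rng.choice([-1.0, 1.0], size=n)
x = y[:, None] * mu + rng.standard_normal((n, d))

# Linear classifier w = mu. Against an L2-bounded adversary, the
# worst-case perturbation moves each point eps against the margin,
# so the robust margin is y * <w, x> - eps * ||w||.
w = mu
margin = y * (x @ w)
clean_acc = np.mean(margin > 0)
robust_acc = np.mean(margin - eps * np.linalg.norm(w) > 0)
print(f"clean: {clean_acc:.3f}, robust (eps={eps}): {robust_acc:.3f}")
```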
arXiv Detail & Related papers (2020-06-06T14:06:38Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
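As a generic sketch of how such a path is evaluated (not this paper's exact procedure; in the mode-connectivity literature the control point is typically learned, while here it is fixed to the average for brevity):

```python
import numpy as np

def bezier_path(w1, w2, wm, t):
    """Quadratic Bezier curve in weight space between endpoints w1, w2
    with control point wm (weights as flat numpy vectors)."""
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * wm + t ** 2 * w2

def scan_path(loss_fn, w1, w2, wm=None, num=11):
    """Evaluate a loss at evenly spaced points along the path. A flat,
    low-loss scan suggests the two solutions lie in a connected
    low-loss region of the landscape."""
    if wm is None:
        wm = 0.5 * (w1 + w2)              # simple fixed control point
    return [loss_fn(bezier_path(w1, w2, wm, t))
            for t in np.linspace(0.0, 1.0, num)]

# Usage sketch: w1, w2 would be the flattened weights of two trained
# models, and loss_fn would rebuild the model and compute the (robust)
# loss on a held-out batch.
```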
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.