Adversarial Robustness of Supervised Sparse Coding
- URL: http://arxiv.org/abs/2010.12088v2
- Date: Mon, 4 Jan 2021 15:10:47 GMT
- Title: Adversarial Robustness of Supervised Sparse Coding
- Authors: Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
- Abstract summary: We consider a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate.
We focus on the hypothesis class obtained by coupling a sparsity-promoting encoder with a linear classifier.
We provide a robustness certificate for end-to-end classification.
- Score: 34.94566482399662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several recent results provide theoretical insights into the phenomena of
adversarial examples. Existing results, however, are often limited due to a gap
between the simplicity of the models studied and the complexity of those
deployed in practice. In this work, we strike a better balance by considering a
model that involves learning a representation while at the same time giving a
precise generalization bound and a robustness certificate. We focus on the
hypothesis class obtained by coupling a sparsity-promoting encoder with a
linear classifier, and show an interesting interplay between the
expressivity and stability of the (supervised) representation map and a notion
of margin in the feature space. We bound the robust risk (to $\ell_2$-bounded
perturbations) of hypotheses parameterized by dictionaries that achieve a mild
encoder gap on training data. Furthermore, we provide a robustness certificate
for end-to-end classification. We demonstrate the applicability of our analysis
by computing certified accuracy on real data, and compare with other
alternatives for certified robustness.
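For intuition, here is a minimal sketch of the hypothesis class studied in the paper: a sparsity-promoting (Lasso-style) encoder followed by a linear classifier. The random dictionary `D`, regularization `alpha`, and classifier `w` below are illustrative placeholders, not the paper's learned parameters.

```python
# Minimal sketch (not the authors' code): a sparsity-promoting encoder
# followed by a linear classifier, the hypothesis class analyzed in the paper.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)

n_features, n_atoms = 20, 50
D = rng.standard_normal((n_atoms, n_features))   # dictionary (illustrative, random)
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm atoms
w = rng.standard_normal(n_atoms)                 # linear classifier (illustrative)

def encode(x, alpha=0.1):
    """Lasso encoder: roughly argmin_z 0.5*||x - z @ D||^2 + alpha*||z||_1
    (sklearn's exact scaling of alpha may differ)."""
    return sparse_encode(x.reshape(1, -1), D, algorithm="lasso_lars", alpha=alpha)[0]

def predict(x):
    z = encode(x)            # sparse representation in the feature space
    return np.sign(w @ z)    # end-to-end classification

x = rng.standard_normal(n_features)
print(predict(x))
```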
Related papers
- On the KL-Divergence-based Robust Satisficing Model [2.425685918104288]
The robust satisficing framework has attracted increasing attention in academia.
We present analytical interpretations, diverse performance guarantees, efficient and stable numerical methods, convergence analysis, and an extension tailored for hierarchical data structures.
We demonstrate the superior performance of our model compared to state-of-the-art benchmarks.
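For context, a hedged sketch of the robust satisficing formulation this line of work builds on, with a KL-divergence penalty; the notation is assumed rather than taken from the paper:

```latex
% Illustrative robust satisficing model (notation assumed, not quoted from the
% paper): find the smallest fragility k such that the expected loss stays within
% the target tau under every distribution P, up to a KL-divergence penalty.
\min_{k \ge 0} \; k
\quad \text{s.t.} \quad
\mathbb{E}_{P}\big[\ell(x, \xi)\big] \;\le\; \tau + k \, D_{\mathrm{KL}}\big(P \,\|\, \hat{P}\big)
\quad \text{for all } P
```

Here $\hat{P}$ denotes the nominal (empirical) distribution and $\tau$ the performance target.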
arXiv Detail & Related papers (2024-08-17T10:05:05Z) - Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
The framework then promotes learning by paying closer attention to training samples whose explanations differ the most, as in the sketch below.
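As a hedged illustration of the reweighting idea: score each sample by the agreement between explanation heatmaps of two views and upweight disagreeing samples. The cosine-similarity metric is a stand-in, not necessarily the paper's exact definition.

```python
# Hedged sketch of explanation-consistency reweighting; cosine similarity
# between heatmaps of two views is a stand-in for the paper's exact metric.
import numpy as np

def explanation_consistency(heatmap_a: np.ndarray, heatmap_b: np.ndarray) -> float:
    """Cosine similarity between two flattened explanation heatmaps."""
    a, b = heatmap_a.ravel(), heatmap_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def sample_weights(heatmaps_orig, heatmaps_aug):
    """Upweight training samples whose explanations disagree across views."""
    consistency = np.array([explanation_consistency(a, b)
                            for a, b in zip(heatmaps_orig, heatmaps_aug)])
    weights = 1.0 - consistency           # low consistency -> high weight
    return weights / (weights.sum() + 1e-12)
```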
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - Certified $\ell_2$ Attribution Robustness via Uniformly Smoothed Attributions [20.487079380753876]
We propose a uniform smoothing technique that augments the vanilla attributions with noise sampled uniformly from a fixed region.
It is proved that, for all perturbations within the attack region, the cosine similarity between the uniformly smoothed attributions of the perturbed and unperturbed samples is lower bounded.
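A minimal Monte Carlo sketch of uniform smoothing, assuming an arbitrary `attribution_fn` (e.g., gradient saliency); the sampling region here is an $\ell_2$ ball, which may differ from the space used in the paper.

```python
# Hedged Monte Carlo sketch of uniformly smoothed attributions: average the
# attribution map over perturbations drawn uniformly from an l2 ball of radius r.
import numpy as np

def uniform_ball_sample(shape, radius, rng):
    """Draw one point uniformly from an l2 ball of the given radius."""
    u = rng.standard_normal(int(np.prod(shape)))
    u /= np.linalg.norm(u)
    scale = radius * rng.uniform() ** (1.0 / u.size)  # uniform density in the ball
    return (scale * u).reshape(shape)

def smoothed_attribution(x, attribution_fn, radius=0.5, n_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    maps = [attribution_fn(x + uniform_ball_sample(x.shape, radius, rng))
            for _ in range(n_samples)]
    return np.mean(maps, axis=0)
```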
arXiv Detail & Related papers (2024-05-10T09:56:02Z) - Mapping the Multiverse of Latent Representations [17.2089620240192]
PRESTO is a principled framework for mapping the multiverse of machine-learning models that rely on latent representations.
Our framework uses persistent homology to characterize the latent spaces arising from different combinations of diverse machine-learning methods.
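A hedged sketch of the general recipe, using the `ripser` and `persim` packages (assumed installed) rather than the authors' implementation: embed the same data with different methods, compute a persistence diagram per latent space, and compare the diagrams with a bottleneck distance.

```python
# Hedged sketch of comparing latent spaces with persistent homology, in the
# spirit of PRESTO; this is not the authors' implementation.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from ripser import ripser
from persim import bottleneck

X = load_digits().data[:300]

# Two latent representations of the same data from different methods.
latents = {
    "pca": PCA(n_components=2).fit_transform(X),
    "tsne": TSNE(n_components=2, random_state=0).fit_transform(X),
}

# One persistence diagram (H1 features) per latent space.
diagrams = {k: ripser(z, maxdim=1)["dgms"][1] for k, z in latents.items()}

# Bottleneck distance measures how topologically different the spaces are.
print(bottleneck(diagrams["pca"], diagrams["tsne"]))
```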
arXiv Detail & Related papers (2024-02-02T15:54:53Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal
Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupted images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Explicit Tradeoffs between Adversarial and Natural Distributional
Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that, in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z) - Certified Distributional Robustness on Smoothed Classifiers [27.006844966157317]
We propose the worst-case adversarial loss over input distributions as a robustness certificate.
By exploiting duality and the smoothness property, we provide an easy-to-compute upper bound as a surrogate for the certificate.
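As a hedged illustration of the kind of duality involved (the paper's exact surrogate may differ), a Wasserstein-type worst-case loss admits, for any multiplier $\lambda \ge 0$, an easy-to-compute Lagrangian upper bound:

```latex
% Illustrative Lagrangian upper bound on a distributionally robust loss, for a
% transport cost c and any lambda >= 0 (notation assumed, not from the paper):
\sup_{P:\, W_c(P, P_0) \le \epsilon} \mathbb{E}_{P}\big[\ell(\theta; x)\big]
\;\le\;
\lambda \epsilon \;+\; \mathbb{E}_{x \sim P_0}\Big[\sup_{x'} \big\{\ell(\theta; x') - \lambda\, c(x', x)\big\}\Big].
```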
arXiv Detail & Related papers (2020-10-21T13:22:25Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
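A toy sketch of the idea (an illustration, not the paper's methodology): in an overparameterized linear problem, sample many interpolating models and inspect the resulting distribution of test errors.

```python
# Hedged toy sketch: sample many interpolating linear models for an
# overparameterized problem and look at the distribution of their test errors.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 20, 500, 100          # d >> n_train: interpolating regime

w_star = rng.standard_normal(d)
X_tr, X_te = rng.standard_normal((n_train, d)), rng.standard_normal((n_test, d))
y_tr, y_te = np.sign(X_tr @ w_star), np.sign(X_te @ w_star)

errors = []
for _ in range(1000):
    # Min-norm solution for random positive target margins: since
    # X_tr @ w == margins with sign(margins) == y_tr, w interpolates the labels.
    margins = y_tr * rng.uniform(0.5, 1.5, n_train)
    w = np.linalg.pinv(X_tr) @ margins
    errors.append(np.mean(np.sign(X_te @ w) != y_te))

print(f"typical test error: {np.median(errors):.3f}, worst: {np.max(errors):.3f}")
```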
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Learning Diverse Representations for Fast Adaptation to Distribution
Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
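A hedged sketch of one way to implement such an objective: penalize pairwise similarity between the features produced by different models. The cosine-based penalty below is an illustrative choice, not the paper's.

```python
# Hedged sketch of a diversity objective for multiple models: each model is
# trained on the task while being penalized for producing features similar
# to the other models'.
import torch

def diversity_penalty(features: list[torch.Tensor]) -> torch.Tensor:
    """Mean pairwise squared cosine similarity between models' features."""
    feats = [f / (f.norm(dim=1, keepdim=True) + 1e-12) for f in features]
    pen, n = torch.tensor(0.0), len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            pen = pen + ((feats[i] * feats[j]).sum(dim=1) ** 2).mean()
    return pen / max(n * (n - 1) / 2, 1)

def total_loss(task_losses, features, beta=0.1):
    # Task performance plus a pressure toward mutually distinct representations.
    return sum(task_losses) + beta * diversity_penalty(features)
```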
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.