On Continuity of Robust and Accurate Classifiers
- URL: http://arxiv.org/abs/2309.17048v1
- Date: Fri, 29 Sep 2023 08:14:25 GMT
- Title: On Continuity of Robust and Accurate Classifiers
- Authors: Ramin Barati, Reza Safabakhsh, Mohammad Rahmati
- Abstract summary: It has been shown that adversarial training can improve the robustness of the hypothesis.
It has been suggested that robustness and accuracy of a hypothesis are at odds with each other.
In this paper, we put forth the alternative proposal that it is the continuity of a hypothesis that is incompatible with its robustness and accuracy.
- Score: 3.8673630752805437
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The reliability of a learning model is key to the successful deployment of
machine learning in various applications. Creating a robust model, particularly
one unaffected by adversarial attacks, requires a comprehensive understanding
of the adversarial examples phenomenon. However, it is difficult to describe
the phenomenon due to the complicated nature of the problems in machine
learning. It has been shown that adversarial training can improve the
robustness of the hypothesis. However, this improvement comes at the cost of
decreased performance on natural samples. Hence, it has been suggested that
robustness and accuracy of a hypothesis are at odds with each other. In this
paper, we put forth the alternative proposal that it is the continuity of a
hypothesis that is incompatible with its robustness and accuracy. In other
words, a continuous function cannot effectively learn the optimal robust
hypothesis. To this end, we will introduce a framework for a rigorous study of
harmonic and holomorphic hypotheses in learning theory terms and provide
empirical evidence that continuous hypotheses do not perform as well as
discontinuous hypotheses in some common machine learning tasks. From a
practical point of view, our results suggest that a robust and accurate
learning rule would train different continuous hypotheses for different regions
of the domain. From a theoretical perspective, our analysis explains the
adversarial examples phenomenon as a conflict between the continuity of a
sequence of functions and its uniform convergence to a discontinuous function.
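To make the adversarial-training trade-off mentioned above concrete, here is a minimal sketch of one FGSM-style adversarial training step in PyTorch. This is standard background, not code from the paper; the function name, the eps value, and the assumption that inputs lie in [0, 1] are all illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=0.03):
    # FGSM attack: one signed-gradient ascent step on the loss,
    # confined to an L-infinity ball of radius eps around x.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Assumes inputs are normalized to [0, 1].
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    # Train on the perturbed batch: robust accuracy tends to rise while
    # accuracy on natural samples tends to fall, the trade-off noted above.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```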
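As a rough illustration of the practical suggestion, the sketch below trains a separate continuous hypothesis on each region of the input domain, so the combined predictor may be discontinuous across region boundaries. The k-means partition, the class name, and the scikit-learn components are illustrative assumptions, not the authors' construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class RegionwiseClassifier:
    """Routes each sample to a continuous model trained on its region."""

    def __init__(self, n_regions=4):
        # Hypothetical choice of partition: k-means over the inputs.
        self.partition = KMeans(n_clusters=n_regions, n_init=10)
        self.models = {}

    def fit(self, X, y):
        regions = self.partition.fit_predict(X)
        for r in np.unique(regions):
            mask = regions == r
            # One continuous hypothesis per region. (A real implementation
            # would handle regions containing only a single class.)
            self.models[r] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Predictions can jump discontinuously at region boundaries.
        regions = self.partition.predict(X)
        preds = np.empty(len(X), dtype=int)  # assumes integer class labels
        for r, model in self.models.items():
            mask = regions == r
            if mask.any():
                preds[mask] = model.predict(X[mask])
        return preds
```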
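The theoretical claim leans on the classical uniform limit theorem from real analysis, stated below in LaTeX. This is standard material rather than a result of the paper, included only to spell out the conflict.

```latex
% Uniform limit theorem: a uniform limit of continuous functions is continuous.
% Contrapositive, as the abstract argues: no sequence of continuous hypotheses
% can converge uniformly to a discontinuous optimal robust hypothesis.
f_n \in C(X, Y) \ \text{for all } n
\quad\text{and}\quad
\sup_{x \in X} d\bigl(f_n(x), f(x)\bigr) \xrightarrow{\; n \to \infty \;} 0
\quad\Longrightarrow\quad
f \in C(X, Y)
```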
Related papers
- Towards Characterizing Domain Counterfactuals For Invertible Latent Causal Models [15.817239008727789]
In this work, we analyze a specific type of causal query called domain counterfactuals, which hypothesizes what a sample would have looked like if it had been generated in a different domain.
We show that recovering the latent Structural Causal Model (SCM) is unnecessary for estimating domain counterfactuals.
We also develop a theoretically grounded practical algorithm that simplifies the modeling process to generative model estimation.
arXiv Detail & Related papers (2023-06-20T04:19:06Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- The role of prior information and computational power in Machine Learning [0.0]
We discuss how prior information and computational power can be employed to solve a learning problem.
We argue that employing high computational power offers the advantage of higher performance.
arXiv Detail & Related papers (2022-10-31T20:39:53Z)
- Uncertain Evidence in Probabilistic Models and Stochastic Simulators [80.40110074847527]
We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as 'uncertain evidence'.
We explore how to interpret uncertain evidence, and by extension the importance of proper interpretation as it pertains to inference about latent variables.
We devise concrete guidelines on how to account for uncertain evidence and we provide new insights, particularly regarding consistency.
arXiv Detail & Related papers (2022-10-21T20:32:59Z)
- Robust Transferable Feature Extractors: Learning to Defend Pre-Trained Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Feedback in Imitation Learning: Confusion on Causality and Covariate Shift [12.93527098342393]
We argue that conditioning policies on previous actions leads to a dramatic divergence between "held out" error and performance of the learner in situ.
We analyze existing benchmarks used to test imitation learning approaches.
We find, in a surprising contrast with previous literature, that naive behavioral cloning provides excellent results.
arXiv Detail & Related papers (2021-02-04T20:18:56Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Modal Uncertainty Estimation via Discrete Latent Representation [4.246061945756033]
We introduce a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures.
Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-25T05:29:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.