Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance
- URL: http://arxiv.org/abs/2309.07403v1
- Date: Thu, 14 Sep 2023 03:16:05 GMT
- Title: Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance
- Authors: Lei Fan, Bo Liu, Haoxiang Li, Ying Wu, Gang Hua
- Abstract summary: In real-world scenarios, typical visual recognition systems can fail for two major reasons: misclassification between known classes and excusable misbehavior on unknown-class images.
To tackle these deficiencies, flexible visual recognition should dynamically predict multiple classes when it is unconfident among choices, and reject making predictions when the input is entirely outside the training distribution.
In this paper, we propose to model these two sources of uncertainty explicitly with the theory of Subjective Logic.
- Score: 25.675733490127964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world scenarios, typical visual recognition systems can fail for
two major reasons: misclassification between known classes and excusable
misbehavior on unknown-class images. To tackle these deficiencies, flexible
visual recognition should dynamically predict multiple classes when it is
unconfident among choices, and reject making predictions when the input is
entirely outside the training distribution. Two challenges emerge along
with this novel task. First, prediction uncertainty should be separately
quantified as confusion depicting inter-class uncertainties and ignorance
identifying out-of-distribution samples. Second, both confusion and ignorance
should be comparable between samples to enable effective decision-making. In
this paper, we propose to model these two sources of uncertainty explicitly
with the theory of Subjective Logic. Regarding recognition as an
evidence-collecting process, confusion is then defined as conflicting evidence,
while ignorance is the absence of evidence. By predicting Dirichlet
concentration parameters for singletons, comprehensive subjective opinions,
including confusion and ignorance, can then be derived via further evidence
combination. Through a series of experiments on synthetic data analysis,
visual recognition, and open-set detection, we demonstrate the effectiveness of
our method in quantifying the two sources of uncertainty and handling flexible
recognition.
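
To make the evidential formulation concrete, the sketch below shows how per-class evidence can be mapped to belief, confusion, and ignorance masses in a subjective-logic style, and how a flexible recognizer might act on them. This is a minimal illustration under stated assumptions: the function names, thresholds, and the simple top-two-belief ratio used as a confusion proxy are chosen here for exposition and are not the paper's actual opinion-combination procedure.

```python
import numpy as np

def subjective_opinion(evidence):
    """Map non-negative per-class evidence to a subjective-logic-style opinion.

    Standard evidential formulation: alpha_k = e_k + 1 (Dirichlet concentration),
    belief b_k = e_k / S, and vacuity u = K / S, where S = sum(alpha).
    """
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size
    alpha = evidence + 1.0          # Dirichlet concentration parameters
    s = alpha.sum()                 # Dirichlet strength
    belief = evidence / s           # per-class belief masses
    ignorance = k / s               # vacuity: mass assigned to no class (absence of evidence)
    # Crude proxy for confusion (an assumption for this sketch): how evenly the two
    # largest belief masses are matched, scaled by the non-vacuous mass.
    top = np.sort(belief)[::-1]
    confusion = top[1] / (top[0] + 1e-12) * (1.0 - ignorance)
    return belief, confusion, ignorance

def flexible_predict(evidence, confusion_thr=0.5, ignorance_thr=0.5):
    """Reject when ignorance dominates; emit a class set when confusion is high."""
    belief, confusion, ignorance = subjective_opinion(evidence)
    if ignorance > ignorance_thr:
        return "reject (likely out-of-distribution)"
    if confusion > confusion_thr:
        # Return all classes whose belief is close to the maximum.
        keep = np.where(belief >= 0.5 * belief.max())[0]
        return f"candidate classes {keep.tolist()}"
    return f"single class {int(belief.argmax())}"

if __name__ == "__main__":
    print(flexible_predict([9.0, 8.5, 0.2]))   # strong but conflicting evidence -> class set
    print(flexible_predict([0.1, 0.2, 0.1]))   # almost no evidence -> reject
    print(flexible_predict([12.0, 0.3, 0.1]))  # clear evidence -> single class
```

In this toy setting, conflicting evidence between the first two classes yields a multi-class prediction, near-zero evidence yields a rejection, and dominant evidence for one class yields a single prediction, mirroring the flexible-recognition behavior described in the abstract.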
Related papers
- Causal Discovery in Linear Models with Unobserved Variables and Measurement Error [26.72594853233639]
The presence of unobserved common causes and the presence of measurement error are two of the most limiting challenges in the task of causal structure learning.
We study the problem of causal discovery in systems where these two challenges can be present simultaneously.
arXiv Detail & Related papers (2024-07-28T08:26:56Z)
- Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness [106.52630978891054]
We present a taxonomy of uncertainty specific to vision-language AI systems.
We also introduce a new metric, confidence-weighted accuracy, that is well correlated with both accuracy and calibration error.
arXiv Detail & Related papers (2024-07-02T04:23:54Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground-truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Uncertain Facial Expression Recognition via Multi-task Assisted Correction [43.02119884581332]
We propose MTAC, a novel multi-task assisted correction method for addressing uncertain facial expression recognition.
Specifically, a confidence estimation block and a weighted regularization module are applied to highlight solid samples and suppress uncertain samples in every batch.
Experiments on the RAF-DB, AffectNet, and AffWild2 datasets demonstrate that MTAC obtains substantial improvements over baselines when facing synthetic and real uncertainties.
arXiv Detail & Related papers (2022-12-14T10:28:08Z)
- Uncertain Evidence in Probabilistic Models and Stochastic Simulators [80.40110074847527]
We consider the problem of performing Bayesian inference in probabilistic models where observations are accompanied by uncertainty, referred to as 'uncertain evidence'.
We explore how to interpret uncertain evidence, and by extension the importance of proper interpretation as it pertains to inference about latent variables.
We devise concrete guidelines on how to account for uncertain evidence and provide new insights, particularly regarding consistency.
arXiv Detail & Related papers (2022-10-21T20:32:59Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- A Tale Of Two Long Tails [4.970364068620608]
We identify examples the model is uncertain about and characterize the source of said uncertainty.
We investigate whether the rate of learning in the presence of additional information differs between atypical and noisy examples.
Our results show that well-designed interventions over the course of training can be an effective way to characterize and distinguish between different sources of uncertainty.
arXiv Detail & Related papers (2021-07-27T22:49:59Z)
- Uncertainty-Aware Reliable Text Classification [21.517852608625127]
Deep neural networks have contributed significantly to predictive accuracy in classification tasks.
However, they tend to make over-confident predictions in real-world settings, where domain shift and out-of-distribution examples exist.
We propose an inexpensive framework that adopts both auxiliary outliers and pseudo off-manifold samples to train the model with prior knowledge of a certain class.
arXiv Detail & Related papers (2021-07-15T04:39:55Z)
- DISSECT: Disentangled Simultaneous Explanations via Concept Traversals [33.65478845353047]
DISSECT is a novel approach to explaining deep learning model inferences.
By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts.
We show that DISSECT produces concept traversals (CTs) that disentangle several concepts and are coupled to its reasoning due to joint training.
arXiv Detail & Related papers (2021-05-31T17:11:56Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- End-to-End Models for the Analysis of System 1 and System 2 Interactions based on Eye-Tracking Data [99.00520068425759]
We propose a computational method, within a modified visual version of the well-known Stroop test, for the identification of different tasks and potential conflict events.
A statistical analysis shows that the selected variables can characterize the variation of attentive load within different scenarios.
We show that machine learning techniques can distinguish between different tasks with good classification accuracy.
arXiv Detail & Related papers (2020-02-03T17:46:13Z)