The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set
Methods
- URL: http://arxiv.org/abs/2203.02486v1
- Date: Fri, 4 Mar 2022 18:32:58 GMT
- Title: The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set
Methods
- Authors: Thomas G. Dietterich, Alexander Guyer
- Abstract summary: Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.
This paper proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features rather than the presence of novelty.
The paper concludes with a discussion of whether familiarity detection is an inevitable consequence of representation learning.
- Score: 86.39044549664189
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In many object recognition applications, the set of possible categories is an
open set, and the deployed recognition system will encounter novel objects
belonging to categories unseen during training. Detecting such "novel category"
objects is usually formulated as an anomaly detection problem. Anomaly
detection algorithms for feature-vector data identify anomalies as outliers,
but outlier detection has not worked well in deep learning. Instead, methods
based on the computed logits of visual object classifiers give state-of-the-art
performance. This paper proposes the Familiarity Hypothesis that these methods
succeed because they are detecting the absence of familiar learned features
rather than the presence of novelty. The paper reviews evidence from the
literature and presents additional evidence from our own experiments that
provide strong support for this hypothesis. The paper concludes with a
discussion of whether familiarity detection is an inevitable consequence of
representation learning.
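To make the logit-based scoring concrete, here is a minimal sketch of a max-logit familiarity score for a trained classifier. It illustrates one common member of the family of methods the abstract refers to, not the paper's exact experimental setup; the model, data, and threshold below are placeholders.

```python
# Minimal sketch (assumed setup, not the paper's exact method): score inputs by
# the maximum logit of a trained closed-set classifier and flag low scores as novel.
import torch

def max_logit_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Familiarity score per input: the maximum logit over the known classes.

    A low score means that no learned class feature responded strongly, which the
    Familiarity Hypothesis reads as absence of the familiar rather than positive
    evidence of novelty.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)                  # shape: (batch, num_known_classes)
        return logits.max(dim=1).values

# Hypothetical usage:
# scores = max_logit_score(classifier, images)
# is_novel = scores < threshold           # threshold chosen on held-out known-class data
```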
Related papers
- Managing the unknown: a survey on Open Set Recognition and tangential
areas [7.345136916791223]
Open Set Recognition models can detect unknown classes in samples arriving during the testing phase while maintaining good classification performance on samples belonging to known classes.
This review comprehensively overviews the recent literature related to Open Set Recognition, identifying common practices, limitations, and connections of this field with other machine learning research areas.
Our work also uncovers open problems and suggests several research directions that may motivate and articulate future efforts towards safer Artificial Intelligence methods.
arXiv Detail & Related papers (2023-12-14T10:08:12Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework that induces different responses to UAPs from normal and adversarial samples.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- Oracle Analysis of Representations for Deep Open Set Detection [32.450481640129645]
The problem of detecting a novel class at run time is known as Open Set Detection and is important in real-world settings such as medical applications and autonomous driving.
Open Set Detection in the context of deep learning involves solving two problems: (i) mapping the input images into a latent representation that contains enough information to detect outliers, and (ii) learning an anomaly scoring function that can extract this information from the latent representation to identify anomalies.
arXiv Detail & Related papers (2022-09-22T23:54:42Z)
- Novel Class Discovery without Forgetting [72.52222295216062]
We identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting.
We propose a machine learning model to incrementally discover novel categories of instances from unlabeled data.
We introduce experimental protocols based on CIFAR-10, CIFAR-100 and ImageNet-1000 to measure the trade-off between knowledge retention and novel class discovery.
arXiv Detail & Related papers (2022-07-21T17:54:36Z)
- Semantic Novelty Detection via Relational Reasoning [17.660958043781154]
We propose a novel representation learning paradigm based on relational reasoning.
Our experiments show that this knowledge is directly transferable to a wide range of scenarios.
It can be exploited as a plug-and-play module to convert closed-set recognition models into reliable open-set ones.
arXiv Detail & Related papers (2022-07-18T15:49:27Z)
- Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
- Uncertainty Aware Proposal Segmentation for Unknown Object Detection [13.249453757295083]
This paper proposes to exploit additional predictions of semantic segmentation models and to quantify their confidence.
We use object proposals generated by a Region Proposal Network (RPN) and adapt distance-aware uncertainty estimation from semantic segmentation.
The augmented object proposals are then used to train a classifier for known vs. unknown object categories.
arXiv Detail & Related papers (2021-11-25T01:53:05Z)
- Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods [0.0]
Detecting latent structure is a crucial step in analyzing a dataset.
By leveraging instance explanation methods, an existing classifier can be extended to detect latent classes.
This paper also contains a pipeline for analyzing classifiers automatically, and a web application for interactively exploring the results from this technique.
arXiv Detail & Related papers (2021-07-04T14:58:29Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore adversarial detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
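For illustration, the sketch below shows one generic way to score latent features with class-conditional Gaussians, in the spirit of the CPGM entry above: classify by the closest class Gaussian and reject as unknown when no known-class Gaussian fits well. This is a Mahalanobis-style approximation written for clarity, not the CPGM training procedure; the `encoder` and `threshold` names are hypothetical placeholders.

```python
# Generic class-conditional Gaussian scoring in a latent space (an illustrative
# approximation, not the CPGM method): per-class means with a tied covariance,
# classification by nearest Gaussian, and rejection of poorly-fitting samples.
import numpy as np

def fit_class_gaussians(z: np.ndarray, y: np.ndarray):
    """Fit a per-class mean and a shared (tied) inverse covariance on latent features z."""
    classes = np.unique(y)
    means = {c: z[y == c].mean(axis=0) for c in classes}
    centered = np.vstack([z[y == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(z.shape[1])  # regularized
    return means, np.linalg.inv(cov)

def score(z: np.ndarray, means, cov_inv):
    """Return (predicted known class, familiarity = negative squared Mahalanobis distance)."""
    dists = np.stack(
        [np.einsum('ij,jk,ik->i', z - mu, cov_inv, z - mu) for mu in means.values()],
        axis=1,
    )  # shape: (n_samples, n_known_classes)
    classes = np.array(list(means.keys()))
    return classes[dists.argmin(axis=1)], -dists.min(axis=1)

# Hypothetical usage:
# means, cov_inv = fit_class_gaussians(encoder(train_x), train_y)
# pred, familiarity = score(encoder(test_x), means, cov_inv)
# is_unknown = familiarity < threshold   # reject when no known-class Gaussian fits well
```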