A Bayesian Approach to OOD Robustness in Image Classification
- URL: http://arxiv.org/abs/2403.07277v1
- Date: Tue, 12 Mar 2024 03:15:08 GMT
- Title: A Bayesian Approach to OOD Robustness in Image Classification
- Authors: Prakhar Kaushik and Adam Kortylewski and Alan Yuille
- Abstract summary: We introduce a novel Bayesian approach to OOD robustness for object classification.
We exploit the fact that CompNets contain a generative head defined over feature vectors represented by von Mises-Fisher (vMF) kernels.
This enables us to learn a transitional dictionary of vMF kernels that are intermediate between the source and target domains.
- Score: 20.104489420303306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important and unsolved problem in computer vision is to ensure that the
algorithms are robust to changes in image domains. We address this problem in
the scenario where we have access to images from the target domains but no
annotations. Motivated by the challenges of the OOD-CV benchmark where we
encounter real-world Out-of-Domain (OOD) nuisances and occlusion, we introduce
a novel Bayesian approach to OOD robustness for object classification. Our work
extends Compositional Neural Networks (CompNets), which have been shown to be
robust to occlusion but degrade badly when tested on OOD data. We exploit the
fact that CompNets contain a generative head defined over feature vectors
represented by von Mises-Fisher (vMF) kernels, which correspond roughly to
object parts, and can be learned without supervision. We observe that some vMF
kernels are similar between different domains, while others are not. This
enables us to learn a transitional dictionary of vMF kernels that are
intermediate between the source and target domains and train the generative
model on this dictionary using the annotations on the source domain, followed
by iterative refinement. This approach, termed Unsupervised Generative
Transition (UGT), performs very well in OOD scenarios even when occlusion is
present. UGT is evaluated on different OOD benchmarks including the OOD-CV
dataset, several popular datasets (e.g., ImageNet-C [9]), artificial image
corruptions (including adding occluders), and synthetic-to-real domain
transfer, and performs well in all scenarios, outperforming SOTA alternatives (e.g.,
by up to 10% top-1 accuracy on the Occluded OOD-CV dataset).
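As a rough illustration of the transitional-dictionary idea described in the abstract, the sketch below matches source and target vMF kernel directions by cosine similarity and blends the matched pairs on the unit sphere. The function name, similarity threshold, and blending rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def transitional_vmf_dictionary(mu_src, mu_tgt, sim_thresh=0.7, alpha=0.5):
    """Hypothetical sketch: blend source and target vMF mean directions.

    mu_src, mu_tgt : (K, D) arrays of unit-norm vMF kernel directions learned
                     on the source and target domains, respectively.
    sim_thresh     : cosine-similarity cutoff deciding whether a source kernel
                     has a close counterpart in the target domain (assumed value).
    alpha          : blending weight pulling matched source kernels toward the target.
    """
    sim = mu_src @ mu_tgt.T                  # cosine similarities, shape (K_src, K_tgt)
    nearest = sim.argmax(axis=1)             # closest target kernel for each source kernel
    matched = sim.max(axis=1) >= sim_thresh  # kernels that look similar across domains

    transitional = mu_src.copy()
    # Move matched source directions toward their target counterparts and
    # re-normalize so they remain valid vMF mean directions on the unit sphere.
    blend = (1.0 - alpha) * mu_src[matched] + alpha * mu_tgt[nearest[matched]]
    transitional[matched] = blend / np.linalg.norm(blend, axis=1, keepdims=True)
    # Unmatched kernels are left unchanged here; the paper instead refines the
    # dictionary iteratively after training the generative head with source annotations.
    return transitional
```

In the paper, the generative model is then trained on this transitional dictionary using source-domain annotations and refined iteratively; the snippet only illustrates the dictionary-blending step.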
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- ATTA: Anomaly-aware Test-Time Adaptation for Out-of-Distribution Detection in Segmentation [22.084967085509387]
We propose a dual-level OOD detection framework to handle domain shift and semantic shift jointly.
The first level distinguishes whether domain shift exists in the image by leveraging global low-level features.
The second level identifies pixels with semantic shift by utilizing dense high-level feature maps.
arXiv Detail & Related papers (2023-09-12T06:49:56Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), a first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models [23.302018871162186]
We propose a novel one-class open-set OOD detector that leverages text-image pre-trained models in a zero-shot fashion.
Our approach is designed to detect anything that is not in-domain and offers the flexibility to detect a wide variety of OOD samples.
Our method shows superior performance over previous methods on all benchmarks.
arXiv Detail & Related papers (2023-05-26T18:58:56Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is treated with tailored learning goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Semantically Coherent Out-of-Distribution Detection [26.224146828317277]
Current out-of-distribution (OOD) detection benchmarks are commonly built by defining one dataset as in-distribution (ID) and all others as OOD.
We re-design the benchmarks and propose the semantically coherent out-of-distribution detection (SC-OOD) benchmark.
Our approach achieves state-of-the-art performance on SC-OOD benchmarks.
arXiv Detail & Related papers (2021-08-26T17:53:32Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to be aware if a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- Probing Predictions on OOD Images via Nearest Categories [97.055916832257]
We study out-of-distribution (OOD) prediction behavior of neural networks when they classify images from unseen classes or corrupted images.
We introduce a new measure, nearest category generalization (NCG), which computes the fraction of OOD inputs that are classified with the same label as their nearest neighbor in the training set (a minimal sketch of this computation follows this entry).
We find that robust networks have consistently higher NCG accuracy than natural training, even when the OOD data is much farther away than the robustness radius.
arXiv Detail & Related papers (2020-11-17T07:42:27Z)
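Since the NCG measure summarized in the entry above is just a nearest-neighbour label comparison, it can be sketched in a few lines. This is a hedged illustration assuming Euclidean nearest neighbours in some feature space; the paper may use a different metric or representation.

```python
import numpy as np

def nearest_category_generalization(train_feats, train_labels, ood_feats, ood_preds):
    """Fraction of OOD inputs whose predicted label matches the label of their
    nearest training example (Euclidean distance in feature space is an assumption)."""
    # Pairwise squared distances between OOD and training features: shape (M, N)
    d2 = ((ood_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nn_labels = train_labels[d2.argmin(axis=1)]    # label of each OOD input's nearest neighbour
    return float((nn_labels == ood_preds).mean())  # NCG accuracy
```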