Fool Me Once: Robust Selective Segmentation via Out-of-Distribution
Detection with Contrastive Learning
- URL: http://arxiv.org/abs/2103.00869v1
- Date: Mon, 1 Mar 2021 09:38:40 GMT
- Title: Fool Me Once: Robust Selective Segmentation via Out-of-Distribution
Detection with Contrastive Learning
- Authors: David Williams, Matthew Gadd, Daniele De Martini and Paul Newman
- Abstract summary: We train a network to simultaneously perform segmentation and pixel-wise Out-of-Distribution (OoD) detection.
This is made possible by leveraging an OoD dataset with a novel contrastive objective and data augmentation scheme.
We show that by selectively segmenting scenes based on what is predicted as OoD, we can increase the segmentation accuracy by an IoU of 0.2 with respect to alternative techniques.
- Score: 27.705683228657175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we train a network to simultaneously perform segmentation and
pixel-wise Out-of-Distribution (OoD) detection, such that the segmentation of
unknown regions of scenes can be rejected. This is made possible by leveraging
an OoD dataset with a novel contrastive objective and data augmentation scheme.
By including data containing unknown classes in the training set, a more robust
feature representation can be learned, with known classes represented distinctly
from unknown ones. When presented with unknown classes or conditions, many
current approaches for segmentation frequently exhibit high confidence in their
inaccurate segmentations and cannot be trusted in many operational
environments. We validate our system on a real-world dataset of unusual driving
scenes, and show that by selectively segmenting scenes based on what is
predicted as OoD, we can increase the segmentation accuracy by an IoU of 0.2
with respect to alternative techniques.
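As a rough illustration of the selective-segmentation idea described above (not the authors' released implementation), the sketch below assumes a network that outputs per-pixel class logits together with a per-pixel OoD score in [0, 1]; the names seg_logits, ood_score, and the threshold tau are illustrative assumptions. Pixels predicted as OoD are rejected, and IoU is then computed only over the pixels the model agrees to segment.

```python
# Minimal sketch (not the paper's code): reject pixels whose predicted OoD
# score exceeds a threshold, then score mean IoU only on accepted pixels.
import torch

def selective_segmentation(seg_logits, ood_score, tau=0.5):
    """seg_logits: [B, C, H, W] class logits; ood_score: [B, H, W] in [0, 1]."""
    pred = seg_logits.argmax(dim=1)   # [B, H, W] hard class labels
    accept = ood_score < tau          # keep only pixels judged in-distribution
    return pred, accept

def masked_iou(pred, target, accept, num_classes, ignore_index=255):
    """Mean IoU computed only over pixels the model chose to segment."""
    valid = accept & (target != ignore_index)
    ious = []
    for c in range(num_classes):
        p, t = (pred == c) & valid, (target == c) & valid
        union = (p | t).sum().item()
        if union > 0:
            ious.append((p & t).sum().item() / union)
    return sum(ious) / max(len(ious), 1)
```

Raising tau makes the rejection more conservative, trading coverage for accuracy on the pixels that are kept; selective evaluation of this kind is the setting in which the abstract's 0.2 IoU improvement is reported.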
Related papers
- VL4AD: Vision-Language Models Improve Pixel-wise Anomaly Detection [5.66050466694651]
We propose incorporating Vision-Language (VL) encoders into existing anomaly detectors to leverage the semantically broad VL pre-training for improved outlier awareness.
We also propose a new scoring function that enables data- and training-free outlier supervision via textual prompts.
The resulting VL4AD model achieves competitive performance on widely used benchmark datasets.
arXiv Detail & Related papers (2024-09-25T20:12:10Z) - Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - ElC-OIS: Ellipsoidal Clustering for Open-World Instance Segmentation on
LiDAR Data [13.978966783993146]
Open-World Instance Segmentation (OIS) is a challenging task that aims to accurately segment every object instance appearing in the current observation.
This is important for safety-critical applications such as robust autonomous navigation.
We present a flexible and effective OIS framework for LiDAR point clouds that can accurately segment both known and unknown instances.
arXiv Detail & Related papers (2023-03-08T03:22:11Z) - Mitigating Representation Bias in Action Recognition: Algorithms and
Benchmarks [76.35271072704384]
Deep learning models perform poorly when applied to videos with rare scenes or objects.
We tackle this problem from two different angles: algorithm and dataset.
We show that the debiased representation can generalize better when transferred to other datasets and tasks.
arXiv Detail & Related papers (2022-09-20T00:30:35Z) - Segmenting Known Objects and Unseen Unknowns without Prior Knowledge [86.46204148650328]
Holistic segmentation aims to identify and separate objects of unseen, unknown categories into instances without any prior knowledge about them.
We tackle this new problem with U3HS, which finds unknowns as highly uncertain regions and clusters their corresponding instance-aware embeddings into individual objects.
Experiments on public data from MS, Cityscapes, and Lost&Found demonstrate the effectiveness of U3HS.
arXiv Detail & Related papers (2022-09-12T16:59:36Z) - Towards Unsupervised Open World Semantic Segmentation [6.445605125467575]
We introduce a method where unknown objects are clustered based on visual similarity.
Connected components of a predicted semantic segmentation are assessed by a segmentation quality estimate.
We demonstrate that, without access to ground truth and even with little data, a DNN's class space can be extended by a novel class.
arXiv Detail & Related papers (2022-01-04T10:29:34Z) - Triggering Failures: Out-Of-Distribution detection by learning from
local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, paired with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it achieves top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z) - Revisiting Contrastive Methods for Unsupervised Learning of Visual
Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z) - Video Class Agnostic Segmentation with Contrastive Learning for
Autonomous Driving [13.312978643938202]
We propose a novel auxiliary contrastive loss to learn the segmentation of known classes and unknown objects.
Unlike previous work in contrastive learning that samples anchor, positive, and negative examples at the image level, our contrastive learning method leverages pixel-wise semantic and temporal guidance (a minimal sketch of such pixel-level sampling appears after this list).
We release a large-scale synthetic dataset for different autonomous driving scenarios that includes distinct and rare unknown objects.
arXiv Detail & Related papers (2021-05-07T23:07:06Z) - Uncertainty-based method for improving poorly labeled segmentation
datasets [0.0]
It is known that deep convolutional neural networks (DCNNs) can memorize even completely random labels.
We propose a framework to train binary segmentation DCNNs using sets of unreliable pixel-level annotations.
arXiv Detail & Related papers (2021-02-16T08:37:19Z) - Adversarial Knowledge Transfer from Unlabeled Data [62.97253639100014]
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
arXiv Detail & Related papers (2020-08-13T08:04:27Z)
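As a rough illustration of the pixel-level contrastive sampling mentioned in the Video Class Agnostic Segmentation entry above (and related in spirit to the contrastive objective of the main paper), the following sketch implements a generic InfoNCE-style loss over sampled pixel embeddings, where pixels sharing a semantic label act as positives and all other sampled pixels as negatives. It is a minimal sketch under those assumptions, not any paper's released code; the function name and arguments are illustrative.

```python
# Generic sketch of a pixel-wise contrastive (InfoNCE-style) loss: for each
# sampled anchor pixel, same-label pixels are positives and different-label
# pixels are negatives.
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(embeddings, labels, num_samples=256, temperature=0.1):
    """embeddings: [B, D, H, W] pixel features; labels: [B, H, W] class ids."""
    D = embeddings.shape[1]
    feats = F.normalize(embeddings.permute(0, 2, 3, 1).reshape(-1, D), dim=1)
    labs = labels.reshape(-1)
    idx = torch.randperm(feats.size(0))[:num_samples]   # sample anchor pixels
    feats, labs = feats[idx], labs[idx]
    sim = feats @ feats.t() / temperature                # pairwise similarities
    eye = torch.eye(len(idx), dtype=torch.bool, device=sim.device)
    pos_mask = (labs.unsqueeze(0) == labs.unsqueeze(1)) & ~eye
    denom = (torch.exp(sim) * (~eye).float()).sum(dim=1, keepdim=True)
    log_prob = sim - torch.log(denom + 1e-8)             # log-softmax over non-self pairs
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0                             # anchors with at least one positive
    loss = -(log_prob * pos_mask.float()).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return loss.mean()
```

In practice such a loss is typically added as an auxiliary term alongside the usual segmentation loss, pulling same-class pixel embeddings together and pushing different-class (including unknown) pixels apart.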