OoDIS: Anomaly Instance Segmentation and Detection Benchmark
- URL: http://arxiv.org/abs/2406.11835v2
- Date: Thu, 10 Apr 2025 13:11:08 GMT
- Title: OoDIS: Anomaly Instance Segmentation and Detection Benchmark
- Authors: Alexey Nekrasov, Rui Zhou, Miriam Ackermann, Alexander Hermans, Bastian Leibe, Matthias Rottmann
- Abstract summary: This work extends some commonly used anomaly segmentation benchmarks to include the instance segmentation and object detection tasks. Our evaluation of anomaly instance segmentation and object detection methods shows that both of these challenges remain unsolved problems.
- Score: 57.89836988990543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe navigation of self-driving cars and robots requires a precise understanding of their environment. Training data for perception systems cannot cover the wide variety of objects that may appear during deployment. Thus, reliable identification of unknown objects, such as wild animals and untypical obstacles, is critical due to their potential to cause serious accidents. Significant progress in semantic segmentation of anomalies has been facilitated by the availability of out-of-distribution (OOD) benchmarks. However, a comprehensive understanding of scene dynamics requires the segmentation of individual objects, and thus the segmentation of instances is essential. Development in this area has been lagging, largely due to the lack of dedicated benchmarks. The situation is similar in object detection. While there is interest in detecting and potentially tracking every anomalous object, the availability of dedicated benchmarks is clearly limited. To address this gap, this work extends some commonly used anomaly segmentation benchmarks to include the instance segmentation and object detection tasks. Our evaluation of anomaly instance segmentation and object detection methods shows that both of these challenges remain unsolved problems. We provide a competition and benchmark website under https://vision.rwth-aachen.de/oodis
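As an illustration of what instance-level anomaly evaluation involves, the sketch below greedily matches predicted anomaly instance masks to ground-truth instances by mask IoU and tallies true and false positives, from which precision-recall curves and average precision can be derived. This is a simplified stand-in, not the official OoDIS evaluation protocol; the function names and the 0.5 IoU threshold are illustrative assumptions.

```python
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two boolean instance masks of the same spatial size."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def match_instances(pred_masks, pred_scores, gt_masks, iou_thresh=0.5):
    """Greedily match predicted anomaly instances to ground truth.

    Returns (true_positives, false_positives, num_gt), from which
    precision/recall and average precision can be computed.
    """
    order = np.argsort(pred_scores)[::-1]  # highest-confidence predictions first
    matched_gt = set()
    tp, fp = 0, 0
    for i in order:
        best_iou, best_j = 0.0, None
        for j, gt in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = mask_iou(pred_masks[i], gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_j)  # each ground-truth instance matches at most once
        else:
            fp += 1
    return tp, fp, len(gt_masks)
```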
Related papers
- Segmenting Objectiveness and Task-awareness Unknown Region for Autonomous Driving [46.70405993442064]
We propose a novel framework termed Segmenting Objectiveness and Task-Awareness (SOTA) for autonomous driving scenes.
SOTA enhances the segmentation of objectiveness through a Semantic Fusion Block (SFB) and filters anomalies irrelevant to road navigation tasks.
arXiv Detail & Related papers (2025-04-27T10:08:54Z)
- Robotic Scene Segmentation with Memory Network for Runtime Surgical Context Inference [8.600278838838163]
Space Time Correspondence Network (STCN) is a memory network that performs binary segmentation and minimizes the effects of class imbalance.
We show that STCN achieves superior segmentation performance for objects that are difficult to segment, such as needle and thread.
We also demonstrate that segmentation and context inference can be performed at runtime without compromising performance.
arXiv Detail & Related papers (2023-08-24T13:44:55Z)
- UGainS: Uncertainty Guided Anomaly Instance Segmentation [80.12253291709673]
A single unexpected object on the road can cause an accident or lead to injuries.
Current approaches tackle anomaly segmentation by assigning an anomaly score to each pixel.
We propose an approach that produces accurate anomaly instance masks.
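For context, here is a minimal sketch of the per-pixel scoring baseline mentioned above: the anomaly score is taken as one minus the maximum softmax probability of a segmentation network. This is a generic baseline, not the UGainS method itself; the model call in the usage comment is hypothetical.

```python
import torch
import torch.nn.functional as F

def pixel_anomaly_scores(logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel anomaly scores from segmentation logits of shape (B, C, H, W).

    Uses 1 - max softmax probability; higher values mean more likely anomalous.
    This is a common baseline, not the method of any specific paper above.
    """
    probs = F.softmax(logits, dim=1)   # (B, C, H, W) class probabilities
    confidence, _ = probs.max(dim=1)   # (B, H, W) max class probability per pixel
    return 1.0 - confidence            # anomaly score per pixel

# Usage (hypothetical model): threshold the scores to obtain a binary anomaly mask.
# logits = model(image)
# anomaly_mask = pixel_anomaly_scores(logits) > 0.5
```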
arXiv Detail & Related papers (2023-08-03T20:55:37Z)
- Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration [17.461451218469062]
In this work, we introduce the Self-Aware Object Detection (SAOD) task.
The SAOD task respects and adheres to the challenges that object detectors face in safety-critical environments such as autonomous driving.
We extensively use our framework, which introduces novel metrics and large scale test datasets, to test numerous object detectors.
arXiv Detail & Related papers (2023-07-03T11:16:39Z)
- A Threefold Review on Deep Semantic Segmentation: Efficiency-oriented, Temporal and Depth-aware design [77.34726150561087]
We conduct a survey on the most relevant and recent advances in Deep Semantic Segmentation in the context of vision for autonomous vehicles.
Our main objective is to provide a comprehensive discussion on the main methods, advantages, limitations, results and challenges faced from each perspective.
arXiv Detail & Related papers (2023-03-08T01:29:55Z)
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- FSNet: A Failure Detection Framework for Semantic Segmentation [9.453250122818808]
We propose a failure detection framework to identify pixel-level misclassification.
We do so by exploiting internal features of the segmentation model and training it simultaneously with a failure detection network.
We evaluate the proposed approach against state-of-the-art methods and obtain performance improvements of 12.30%, 9.46%, and 9.65%.
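As a rough illustration of the joint-training idea described above (not FSNet's actual architecture), the sketch below attaches a per-pixel failure-detection head to a toy segmentation backbone and supervises it with the segmentation error map; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SegWithFailureHead(nn.Module):
    """Toy segmentation backbone with an auxiliary failure-detection head.

    The failure head consumes internal features and predicts, per pixel,
    whether the segmentation output is likely wrong. Shapes and layers are
    illustrative only, not the architecture of any specific paper.
    """
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, num_classes, 1)
        self.failure_head = nn.Conv2d(32, 1, 1)  # per-pixel failure logit

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.failure_head(feats)

# Joint training: supervise the failure head with the segmentation errors.
# seg_logits, fail_logits = model(images)
# seg_loss = nn.functional.cross_entropy(seg_logits, labels)
# error_map = (seg_logits.argmax(1) != labels).float().unsqueeze(1)
# fail_loss = nn.functional.binary_cross_entropy_with_logits(fail_logits, error_map)
# loss = seg_loss + fail_loss
```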
arXiv Detail & Related papers (2021-08-19T15:26:52Z)
- Are standard Object Segmentation models sufficient for Learning Affordance Segmentation? [3.845877724862319]
We show that applying the out-of-the-box Mask R-CNN to the problem of affordance segmentation outperforms the current state-of-the-art.
We argue that better benchmarks for affordance learning should include action capacities.
arXiv Detail & Related papers (2021-07-05T15:34:20Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
Because of this reliance on annotations, model performance drops drastically when models are evaluated on label-scarce datasets containing visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image-level tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation.
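As a simplified illustration of output-space agreement between clean and adversarial views (standing in for the paper's contrastive objective, whose exact form is not given here), the sketch below uses a symmetric KL divergence between the two segmentation outputs.

```python
import torch
import torch.nn.functional as F

def output_agreement_loss(clean_logits: torch.Tensor,
                          adv_logits: torch.Tensor) -> torch.Tensor:
    """Encourage segmentation outputs (B, C, H, W) of a clean image and its
    adversarial counterpart to agree, via a symmetric KL divergence.

    This is a simplified consistency objective, not the exact loss of the
    paper summarized above.
    """
    p = F.log_softmax(clean_logits, dim=1)
    q = F.log_softmax(adv_logits, dim=1)
    kl_pq = F.kl_div(q, p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(p, q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```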
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Target-Aware Object Discovery and Association for Unsupervised Video Multi-Object Segmentation [79.6596425920849]
This paper addresses the task of unsupervised video multi-object segmentation.
We introduce a novel approach for more accurate and efficient spatio-temporal segmentation.
We evaluate the proposed approach on DAVIS-17 and YouTube-VIS, and the results demonstrate that it outperforms state-of-the-art methods both in segmentation accuracy and inference speed.
arXiv Detail & Related papers (2021-04-10T14:39:44Z)
- Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of objects with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed when evaluated solely on slender objects.
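For illustration, evaluating solely on slender objects amounts to filtering ground-truth boxes by aspect ratio before running the detection metric; the 5:1 threshold below is an assumption, not the paper's definition.

```python
def is_slender(box_xywh, ratio_thresh: float = 5.0) -> bool:
    """Flag a COCO-style [x, y, w, h] box as 'slender' when its longer side
    exceeds its shorter side by at least `ratio_thresh`. The threshold is
    illustrative; the paper's exact definition may differ."""
    _, _, w, h = box_xywh
    long_side = max(w, h)
    short_side = max(min(w, h), 1e-6)  # avoid division by zero
    return long_side / short_side >= ratio_thresh

# Example: keep only slender ground-truth boxes before evaluation.
# slender_anns = [a for a in coco_annotations if is_slender(a["bbox"])]
```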
arXiv Detail & Related papers (2020-11-17T09:39:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.