Segment Every Out-of-Distribution Object
- URL: http://arxiv.org/abs/2311.16516v4
- Date: Thu, 28 Mar 2024 15:15:04 GMT
- Title: Segment Every Out-of-Distribution Object
- Authors: Wenjie Zhao, Jia Li, Xin Dong, Yu Xiang, Yunhui Guo
- Abstract summary: This paper introduces a method to convert anomaly Score To segmentation Mask, called S2M, a simple and effective framework for OoD detection in semantic segmentation.
By transforming anomaly scores into prompts for a promptable segmentation model, S2M eliminates the need for threshold selection.
- Score: 24.495734304922244
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Semantic segmentation models, while effective for in-distribution categories, face challenges in real-world deployment due to encountering out-of-distribution (OoD) objects. Detecting these OoD objects is crucial for safety-critical applications. Existing methods rely on anomaly scores, but choosing a suitable threshold for generating masks presents difficulties and can lead to fragmentation and inaccuracy. This paper introduces a method to convert anomaly **S**core **T**o segmentation **M**ask, called S2M, a simple and effective framework for OoD detection in semantic segmentation. Unlike assigning anomaly scores to pixels, S2M directly segments the entire OoD object. By transforming anomaly scores into prompts for a promptable segmentation model, S2M eliminates the need for threshold selection. Extensive experiments demonstrate that S2M outperforms the state-of-the-art by approximately 20% in IoU and 40% in mean F1 score, on average, across various benchmarks including Fishyscapes, Segment-Me-If-You-Can, and RoadAnomaly datasets.
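A rough sketch of the score-to-prompt-to-mask pipeline the abstract describes, assuming Meta's segment-anything package as the promptable segmenter. The prompt generator below (`generate_box_prompts`, its top-k heuristic, and the checkpoint path) is a hypothetical stand-in for the trained module in the paper, not the authors' code:

```python
# Minimal sketch of a score-to-mask pipeline, assuming SAM as the
# promptable segmenter. The paper trains a prompt generator on anomaly
# score maps; the top-k heuristic below is a hypothetical stand-in.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor  # Meta's SAM package

def generate_box_prompts(score_map: np.ndarray, top_k: int = 3,
                         half: int = 32) -> list[np.ndarray]:
    """Hypothetical prompt generator: XYXY boxes cropped around the
    top-k highest-scoring pixels. The paper uses a trained detector."""
    h, w = score_map.shape
    flat = np.argsort(score_map, axis=None)[::-1][:top_k]
    boxes = []
    for idx in flat:
        y, x = divmod(int(idx), w)
        boxes.append(np.array([max(x - half, 0), max(y - half, 0),
                               min(x + half, w - 1), min(y + half, h - 1)]))
    return boxes

def segment_ood(image: np.ndarray, anomaly_scores: np.ndarray,
                checkpoint: str = "sam_vit_h.pth") -> np.ndarray:
    """Union of SAM masks, one per box prompt; no score threshold needed."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # HxWx3 uint8 RGB image
    masks = []
    for box in generate_box_prompts(anomaly_scores):
        m, _, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(m[0])
    if not masks:
        return np.zeros(anomaly_scores.shape, dtype=bool)
    return np.any(np.stack(masks), axis=0)
```

In the paper the prompts come from a learned generator trained on anomaly score maps; the heuristic above only marks where such prompts enter the pipeline, and the union over per-prompt masks is what replaces a hand-tuned per-pixel score threshold.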
Related papers
- Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend SAM to Few-shot Semantic Segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z)
- Adapting Segment Anything Model for Unseen Object Instance Segmentation [70.60171342436092]
Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments.
We propose UOIS-SAM, a data-efficient solution for the UOIS task.
UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder.
arXiv Detail & Related papers (2024-09-23T19:05:50Z)
- Advancing SEM Based Nano-Scale Defect Analysis in Semiconductor Manufacturing for Advanced IC Nodes [0.10555513406636088]
The proposed ADCDS framework consists of two modules: (a) a defect detection module, followed by (b) a defect segmentation module.
In the segmentation module, BoxSnake performs box-supervised instance segmentation of nano-scale defects, guided by detections from the first module.
We have evaluated the performance of our ADCDS framework using two distinct process datasets from real wafers.
arXiv Detail & Related papers (2024-09-06T14:35:04Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
Our method ranked first in the WASPSYN challenge at ISBI 2023.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- UGainS: Uncertainty Guided Anomaly Instance Segmentation [80.12253291709673]
A single unexpected object on the road can cause an accident or lead to injuries.
Current approaches tackle anomaly segmentation by assigning an anomaly score to each pixel.
We propose an approach that produces accurate anomaly instance masks.
arXiv Detail & Related papers (2023-08-03T20:55:37Z)
- The Second-place Solution for CVPR VISION 23 Challenge Track 1 -- Data Efficient Defect Detection [3.4853769431047907]
The Vision Challenge Track 1 for Data-Efficient Defect Detection requires competitors to instance segment 14 industrial inspection datasets in a data-deficient setting.
This report introduces the technical details of the team Aoi-overfitting-Team for this challenge.
arXiv Detail & Related papers (2023-06-25T03:37:02Z)
- AcroFOD: An Adaptive Method for Cross-domain Few-shot Object Detection [59.10314662986463]
Cross-domain few-shot object detection aims to adapt object detectors to the target domain using only a few annotated target samples.
The proposed method achieves state-of-the-art performance on multiple benchmarks.
arXiv Detail & Related papers (2022-09-22T10:23:40Z)
- Threshold-adaptive Unsupervised Focal Loss for Domain Adaptation of Semantic Segmentation [25.626882426111198]
Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention.
In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation.
Our method achieves state-of-the-art mIoUs of 58.4% and 59.6% on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes, respectively, using DeepLabV2, and competitive performance using the lightweight BiSeNet.
arXiv Detail & Related papers (2022-08-23T03:48:48Z)
- SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z)
- Entropy Maximization and Meta Classification for Out-Of-Distribution Detection in Semantic Segmentation [7.305019142196585]
"Out-of-distribution" (OoD) samples are crucial for many applications such as automated driving.
A natural baseline approach to OoD detection is to threshold the pixel-wise softmax entropy (see the sketch after this entry).
We present a two-step procedure that significantly improves on that baseline.
arXiv Detail & Related papers (2020-12-09T11:01:06Z)
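For concreteness, a minimal PyTorch sketch of that softmax-entropy baseline (not the paper's two-step procedure). The names `logits` and `tau` are illustrative, and the hand-picked threshold `tau` is exactly the quantity that prompt-based methods such as S2M aim to avoid:

```python
import torch
import torch.nn.functional as F

def softmax_entropy_ood_mask(logits: torch.Tensor, tau: float) -> torch.Tensor:
    """Baseline OoD mask: per-pixel softmax entropy, thresholded at tau.

    logits: (C, H, W) class scores from a semantic segmentation model.
    Returns a boolean (H, W) mask; True marks pixels flagged as OoD.
    """
    probs = F.softmax(logits, dim=0)                       # (C, H, W)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(0)   # (H, W)
    # Normalize by log(C) so entropy lies in [0, 1] and tau is scale-free.
    entropy = entropy / torch.log(torch.tensor(float(logits.shape[0])))
    return entropy > tau
```

High entropy means the model spreads probability across in-distribution classes, which is the signal this baseline treats as "anomalous"; its weakness, as the S2M abstract notes, is that the mask quality hinges entirely on the chosen threshold.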