SAUGE: Taming SAM for Uncertainty-Aligned Multi-Granularity Edge Detection
- URL: http://arxiv.org/abs/2412.12892v1
- Date: Tue, 17 Dec 2024 13:18:41 GMT
- Title: SAUGE: Taming SAM for Uncertainty-Aligned Multi-Granularity Edge Detection
- Authors: Xing Liufu, Chaolei Tan, Xiaotong Lin, Yonggang Qi, Jinxuan Li, Jian-Fang Hu,
- Abstract summary: We unveil that the segment anything model (SAM) provides strong prior knowledge to model the uncertainty in edge labels.
Our model uniquely demonstrates strong generalizability for cross-dataset edge detection.
- Score: 8.651908243317301
- License:
- Abstract: Edge labels are typically at various granularity levels owing to the varying preferences of annotators; thus, handling the subjectivity of per-pixel labels has been a focal point for edge detection. Previous methods often employ a simple voting strategy to diminish such label uncertainty or impose a strong assumption of labels with a pre-defined distribution, e.g., Gaussian. In this work, we unveil that the segment anything model (SAM) provides strong prior knowledge to model the uncertainty in edge labels. Our key insight is that the intermediate SAM features inherently correspond to object edges at various granularities, which reflect different edge options arising from uncertainty. Therefore, we attempt to align uncertainty with granularity by regressing intermediate SAM features from different layers to object edges at multi-granularity levels. In doing so, the model can fully and explicitly explore diverse "uncertainties" in a data-driven fashion. Specifically, we inject a lightweight module (~1.5% additional parameters) into the frozen SAM to progressively fuse and adapt its intermediate features to estimate edges from coarse to fine. It is crucial to normalize the granularity level of human edge labels to match their innate uncertainty. For this, we simply perform linear blending on the real edge labels at hand to create pseudo labels with varying granularities. Consequently, our uncertainty-aligned edge detector can flexibly produce edges at any desired granularity (including an optimal one). Thanks to SAM, our model uniquely demonstrates strong generalizability for cross-dataset edge detection. Extensive experimental results on BSDS500, Multicue and NYUDv2 validate our model's superiority.
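The abstract describes two concrete mechanisms: a lightweight adapter injected into the frozen SAM encoder that progressively fuses intermediate features into coarse-to-fine edge maps, and linear blending of real edge labels to create pseudo labels at intermediate granularities. The two sketches below are speculative readings of those one-sentence descriptions only, not the authors' implementation; every class, function, and parameter name is hypothetical.

```python
import torch
import torch.nn as nn

class MultiGranularityEdgeHead(nn.Module):
    """Hypothetical lightweight head: adapts and progressively fuses
    intermediate features from a frozen SAM image encoder and emits one
    edge map per granularity level, coarse to fine."""

    def __init__(self, feat_dims, hidden=64):
        super().__init__()
        # 1x1 adapters project each intermediate feature map to a small width.
        self.adapters = nn.ModuleList(nn.Conv2d(c, hidden, 1) for c in feat_dims)
        # Fusion convs merge the running state with the next adapted level.
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1) for _ in feat_dims[1:]
        )
        # One tiny prediction head per granularity level.
        self.heads = nn.ModuleList(nn.Conv2d(hidden, 1, 1) for _ in feat_dims)

    def forward(self, feats):
        # feats: intermediate SAM features (N, C_i, H, W), assumed ordered
        # so that earlier entries correspond to coarser edges.
        edges, state = [], None
        for i, f in enumerate(feats):
            x = self.adapters[i](f)
            state = x if state is None else self.fuse[i - 1](torch.cat([state, x], dim=1))
            edges.append(torch.sigmoid(self.heads[i](state)))
        return edges  # one soft edge map per granularity level
```

For the pseudo labels, one plausible reading of "linear blending of the real edge labels" is a convex combination of neighbouring annotations once they are ordered from coarse to fine; the paper's exact recipe may differ.

```python
import numpy as np

def blend_edge_labels(annotations, alpha):
    """Synthesize a pseudo edge label at granularity alpha in [0, 1] from a
    list of HxW binary edge maps, assumed sorted from coarsest (fewest edges)
    to finest (most edges). Hypothetical helper, not from the paper."""
    stack = np.stack(annotations).astype(np.float32)  # (K, H, W)
    pos = alpha * (len(annotations) - 1)              # continuous level index
    lo, hi = int(np.floor(pos)), int(np.ceil(pos))
    w = pos - lo
    return (1.0 - w) * stack[lo] + w * stack[hi]      # soft edge map in [0, 1]
```

A training loop under these assumptions would supervise the i-th predicted edge map with a pseudo label blended at the matching granularity, so that shallower fusion stages learn coarser edges and deeper ones learn finer edges.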
Related papers
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- SAM-Driven Weakly Supervised Nodule Segmentation with Uncertainty-Aware Cross Teaching [13.5553526185399]
Automated nodule segmentation is essential for computer-assisted diagnosis in ultrasound images.
Recently, segmentation foundation models like SAM have shown impressive generalizability on natural images.
In this work, we devise a novel weakly supervised framework that effectively utilizes the segmentation foundation model to generate pseudo-labels.
arXiv Detail & Related papers (2024-07-18T14:27:54Z)
- Multi-clue Consistency Learning to Bridge Gaps Between General and Oriented Object in Semi-supervised Detection [26.486535389258965]
We experimentally find three gaps between general and oriented object detection in semi-supervised learning.
We propose a Multi-clue Consistency Learning (MCL) framework to bridge these gaps.
Our proposed MCL can achieve state-of-the-art performance in the semi-supervised oriented object detection task.
arXiv Detail & Related papers (2024-07-08T13:14:25Z)
- SuperEdge: Towards a Generalization Model for Self-Supervised Edge Detection [2.912976132828368]
Pixel-wise annotations used by state-of-the-art methods are labor-intensive and subject to inconsistencies when acquired manually.
We propose a novel self-supervised approach for edge detection that employs a multi-level, multi-homography technique to transfer annotations from synthetic to real-world datasets.
Our method eliminates the dependency on manually annotated edge labels, thereby enhancing its generalizability across diverse datasets.
arXiv Detail & Related papers (2024-01-04T15:21:53Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space that contains the original top-1 class together with the remaining less likely classes.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- The Treasure Beneath Multiple Annotations: An Uncertainty-aware Edge Detector [70.43599299422813]
Existing methods fuse multiple annotations using a simple voting process, ignoring the inherent ambiguity of edges and labeling bias of annotators.
We propose a novel uncertainty-aware edge detector (UAED), which employs uncertainty to investigate the subjectivity and ambiguity of diverse annotations.
UAED achieves superior performance consistently across multiple edge detection benchmarks.
arXiv Detail & Related papers (2023-03-21T13:14:36Z)
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z)
- Data-Uncertainty Guided Multi-Phase Learning for Semi-Supervised Object Detection [66.10057490293981]
We propose a data-uncertainty guided multi-phase learning method for semi-supervised object detection.
Our method clearly outperforms baseline approaches by a large margin.
arXiv Detail & Related papers (2021-03-29T09:27:23Z)
- AutoAssign: Differentiable Label Assignment for Dense Object Detection [94.24431503373884]
AutoAssign is an anchor-free detector for object detection.
It achieves appearance-aware label assignment through a fully differentiable weighting mechanism.
Our best model achieves 52.1% AP, outperforming all existing one-stage detectors.
arXiv Detail & Related papers (2020-07-07T14:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.