SuperEdge: Towards a Generalization Model for Self-Supervised Edge
Detection
- URL: http://arxiv.org/abs/2401.02313v1
- Date: Thu, 4 Jan 2024 15:21:53 GMT
- Title: SuperEdge: Towards a Generalization Model for Self-Supervised Edge
Detection
- Authors: Leng Kai and Zhang Zhijie and Liu Jie and Zed Boukhers and Sui Wei and
Cong Yang and Li Zhijun
- Abstract summary: State-of-the-art methods rely on pixel-wise annotations, which are labor-intensive and subject to inconsistencies when acquired manually.
We propose a novel self-supervised approach for edge detection that employs a multi-level, multi-homography technique to transfer annotations from synthetic to real-world datasets.
Our method eliminates the dependency on manually annotated edge labels, thereby enhancing its generalizability across diverse datasets.
- Score: 2.912976132828368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge detection is a fundamental technique in various computer vision tasks.
Edges are indeed effectively delineated by pixel discontinuity and can offer
reliable structural information even in textureless areas. State-of-the-art
methods rely heavily on pixel-wise annotations, which are labor-intensive and
subject to inconsistencies when acquired manually. In this work, we propose a novel
self-supervised approach for edge detection that employs a multi-level,
multi-homography technique to transfer annotations from synthetic to real-world
datasets. To fully leverage the generated edge annotations, we developed
SuperEdge, a streamlined yet efficient model capable of concurrently extracting
edges at pixel-level and object-level granularity. Thanks to self-supervised
training, our method eliminates the dependency on manually annotated edge labels,
thereby enhancing its generalizability across diverse datasets. Comparative
evaluations reveal that SuperEdge advances edge detection, demonstrating
improvements of 4.9% in ODS and 3.3% in OIS over the existing STEdge method on
BIPEDv2.
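As a rough illustration of the annotation-transfer idea, the sketch below (pure NumPy; the function names, array shapes, and voting threshold are hypothetical, and `detect_fn` stands in for any base edge detector) aggregates edge detections across several random homographies and keeps only pixels detected consistently. It is a minimal sketch of homography-based label aggregation under stated assumptions, not the paper's actual multi-level, multi-homography pipeline.

```python
import numpy as np

def apply_homography(points, H):
    """Map an (N, 2) array of (x, y) points through a 3x3 homography."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]

def random_homography(scale, rng):
    """Identity plus a small random perturbation (illustrative only)."""
    H = np.eye(3)
    H[:2, :] += rng.uniform(-scale, scale, size=(2, 3))
    return H

def aggregate_edge_votes(detect_fn, image, shape, n_views=8,
                         thresh=0.5, scale=0.1, seed=0):
    """Run `detect_fn` on views of `image` warped by random homographies,
    map the detected edge pixels back, and keep consistent detections."""
    h, w = shape
    votes = np.zeros(shape)
    rng = np.random.default_rng(seed)
    for _ in range(n_views):
        H = random_homography(scale, rng)
        edge_map = detect_fn(image, H)       # detection on the warped view
        ys, xs = np.nonzero(edge_map)
        if len(xs) == 0:
            continue
        back = apply_homography(np.stack([xs, ys], axis=1).astype(float),
                                np.linalg.inv(H))
        xb = np.clip(np.round(back[:, 0]).astype(int), 0, w - 1)
        yb = np.clip(np.round(back[:, 1]).astype(int), 0, h - 1)
        votes[yb, xb] += 1.0
    return votes / n_views >= thresh
```

Averaging votes across warped views is what makes the pseudo-labels robust: spurious detections rarely survive the consensus threshold, while true edges do.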
Related papers
- Generative Edge Detection with Stable Diffusion [52.870631376660924]
Edge detection is typically viewed as a pixel-level classification problem mainly addressed by discriminative methods.
We propose a novel approach, named Generative Edge Detector (GED), by fully utilizing the potential of the pre-trained stable diffusion model.
We conduct extensive experiments on multiple datasets and achieve competitive performance.
arXiv Detail & Related papers (2024-10-04T01:52:23Z)
- Learning to utilize image second-order derivative information for crisp edge detection [13.848361661516595]
Edge detection is a fundamental task in computer vision.
Recent top-performing edge detection methods tend to generate thick and noisy edge lines.
We propose a second-order derivative-based multi-scale contextual enhancement module (SDMCM) to help the model locate true edge pixels accurately.
We also construct a hybrid focal loss function (HFL) to alleviate the imbalanced distribution issue.
Finally, we propose a U-shaped network named LUS-Net, built on the SDMCM and BRM, for edge detection.
arXiv Detail & Related papers (2024-06-09T13:25:02Z)
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated with GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- Edge-aware Plug-and-play Scheme for Semantic Segmentation [4.297988192695948]
The proposed method can be seamlessly integrated into any state-of-the-art (SOTA) model with zero modification, and the experimental results confirm this.
arXiv Detail & Related papers (2023-03-18T02:17:37Z)
- Synthesize Boundaries: A Boundary-aware Self-consistent Framework for Weakly Supervised Salient Object Detection [8.951168425295378]
We propose to learn precise boundaries from our designed synthetic images and labels.
The synthetic image creates boundary information by inserting synthetic concave regions that simulate the real concave regions of salient objects.
We also propose a novel self-consistent framework that consists of a global integral branch (GIB) and a boundary-aware branch (BAB) to train a saliency detector.
arXiv Detail & Related papers (2022-12-04T08:22:45Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
Basically, we design a dual-branch network equipped with an active labeling strategy to extract maximum value from a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- STEdge: Self-training Edge Detection with Multi-layer Teaching and Regularization [15.579360385857129]
We study the problem of self-training edge detection, leveraging the untapped wealth of large-scale unlabeled image datasets.
We design a self-supervised framework with multi-layer regularization and self-teaching.
Our method attains 4.8% improvement for ODS and 5.8% for OIS when tested on the unseen BIPED dataset.
arXiv Detail & Related papers (2022-01-13T18:26:36Z)
- AttrE2vec: Unsupervised Attributed Edge Representation Learning [22.774159996012276]
This paper proposes a novel unsupervised inductive method called AttrE2Vec, which learns a low-dimensional vector representation for edges in attributed networks.
Experimental results show that, compared to contemporary approaches, our method builds more powerful edge vector representations.
arXiv Detail & Related papers (2020-12-29T12:20:49Z)
- Refined Plane Segmentation for Cuboid-Shaped Objects by Leveraging Edge Detection [63.942632088208505]
We propose a post-processing algorithm to align the segmented plane masks with edges detected in the image.
This allows us to increase the accuracy of state-of-the-art approaches, while limiting ourselves to cuboid-shaped objects.
arXiv Detail & Related papers (2020-03-28T18:51:43Z)
- Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method to merge edges with saliency maps to improve saliency detection performance.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
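To give the flavor of edge/saliency fusion, here is a toy rule in NumPy: boost a saliency map wherever edges fire, then renormalize. The `alpha` weight and the multiplicative form are illustrative assumptions; this is deliberately much simpler than the gradient-domain merging the SEE paper describes.

```python
import numpy as np

def merge_edges_into_saliency(saliency, edges, alpha=0.5):
    """Toy edge/saliency fusion: re-weight a saliency map by an edge map
    and renormalize to [0, 1]. Illustrative only; the actual SEE method
    merges edges in the gradient domain with pre/post-processing stages."""
    s = np.asarray(saliency, dtype=float)
    e = np.asarray(edges, dtype=float)
    if e.max() > 0:
        e = e / e.max()              # normalize edge strengths to [0, 1]
    fused = s * (1.0 + alpha * e)    # amplify saliency along edges
    if fused.max() > 0:
        fused = fused / fused.max()  # renormalize the fused map
    return fused
```

The effect is that edge pixels pull the contrast of the saliency map toward object boundaries, which is the intuition behind edge-guided saliency enhancement.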
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.