Cycle Pixel Difference Network for Crisp Edge Detection
- URL: http://arxiv.org/abs/2409.04272v2
- Date: Thu, 19 Dec 2024 15:02:37 GMT
- Title: Cycle Pixel Difference Network for Crisp Edge Detection
- Authors: Changsong Liu, Wei Zhang, Yanyan Liu, Mingyang Li, Wenlin Li, Yimeng Fan, Xiangnan Bai, Liang Zhang
- Abstract summary: Edge detection is a fundamental task in computer vision. Recent deep learning-based methods face two significant issues: 1) reliance on large-scale pre-trained weights, and 2) generation of thick edges. We construct a U-shape encoder-decoder model named CPD-Net that successfully addresses these two issues simultaneously.
- Score: 14.625034156501778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge detection, as a fundamental task in computer vision, has garnered increasing attention. The advent of deep learning has significantly advanced this field. However, recent deep learning-based methods generally face two significant issues: 1) reliance on large-scale pre-trained weights, and 2) generation of thick edges. We construct a U-shape encoder-decoder model named CPD-Net that successfully addresses these two issues simultaneously. In response to issue 1), we propose a novel cycle pixel difference convolution (CPDC), which effectively integrates edge prior knowledge with modern convolution operations, consequently successfully eliminating the dependence on large-scale pre-trained weights. As for issue 2), we construct a multi-scale information enhancement module (MSEM) and a dual residual connection-based (DRC) decoder to enhance the edge location ability of the model, thereby generating crisp and clean contour maps. Comprehensive experiments conducted on four standard benchmarks demonstrate that our method achieves competitive performance on the BSDS500 dataset (ODS=0.813 and AC=0.352), NYUD-V2 (ODS=0.760 and AC=0.223), BIPED dataset (ODS=0.898 and AC=0.426), and CID (ODS=0.59). Our approach provides a novel perspective for addressing these challenges in edge detection.
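The exact cycle formulation of CPDC is not spelled out in the abstract, but the underlying pixel difference convolution idea is well established: convolve center-subtracted neighborhood differences instead of raw intensities, so the operator carries an edge prior built in. A minimal NumPy sketch of that generic idea (the function name and naive loop are ours, not the paper's implementation):

```python
import numpy as np

def pixel_difference_conv(img, weights):
    """Convolve center-minus-neighbor differences with a 3x3 kernel.

    Instead of y = sum_n w_n * x(p_n), compute
    y = sum_n w_n * (x(p_n) - x(p_0)),
    so a constant region always yields zero response (an edge prior).
    """
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            diffs = patch - img[i, j]  # center-subtracted neighborhood
            out[i - 1, j - 1] = np.sum(weights * diffs)
    return out

# A flat region produces exactly zero response, whatever the weights:
flat = np.full((5, 5), 7.0)
w = np.random.randn(3, 3)
assert np.allclose(pixel_difference_conv(flat, w), 0.0)

# A vertical step edge produces a nonzero response at the boundary:
step = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
resp = pixel_difference_conv(step, np.ones((3, 3)))
```

Because a constant region always maps to zero, the operator acts like a learnable gradient filter from the first training step, which is the intuition behind dropping large-scale pre-training.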
Related papers
- Learning to utilize image second-order derivative information for crisp edge detection [13.848361661516595]
Edge detection is a fundamental task in computer vision.
Recent top-performing edge detection methods tend to generate thick and noisy edge lines.
We propose a second-order derivative-based multi-scale contextual enhancement module (SDMCM) to help the model locate true edge pixels accurately.
We also construct a hybrid focal loss function (HFL) to alleviate the imbalanced distribution issue.
Finally, we propose a U-shape network named LUS-Net, based on the SDMCM and BRM, for edge detection.
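As a generic 1-D illustration of why second-order information sharpens edge localization (a toy example, not the paper's SDMCM): the first derivative responds over the whole blurred ramp of an edge, producing a thick response, while the second derivative crosses zero exactly once at the edge center.

```python
import numpy as np

# 1-D smoothed step edge: intensity ramps from 0 to 1 around index 5.
x = np.array([0., 0., 0., 0.1, 0.3, 0.5, 0.7, 0.9, 1., 1., 1.])

first = np.gradient(x)        # first derivative: a broad hump over the ramp
second = np.gradient(first)   # second derivative: + before, - after the edge

# The first derivative is large over several pixels (a "thick" edge response).
thick = int(np.sum(first > 0.1))

# The second derivative changes sign exactly once, at the edge center,
# so its zero crossing localizes the edge to a single position.
cross = np.where((second[:-1] > 0) & (second[1:] <= 0))[0]
```

Here `thick` spans five pixels while `cross` contains a single index, which is the localization advantage second-order cues provide.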
arXiv Detail & Related papers (2024-06-09T13:25:02Z) - 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z) - Edge Detectors Can Make Deep Convolutional Neural Networks More Robust [25.871767605100636]
This paper employs edge detectors as layer kernels and designs a binary edge feature branch (BEFB) to learn binary edge features.
The accuracy of the BEFB-integrated models is better than that of the original models on all datasets when facing FGSM, PGD, and C&W attacks.
This work shows for the first time that it is feasible to enhance the robustness of DCNNs by combining shape-like features and texture features.
arXiv Detail & Related papers (2024-02-26T10:54:26Z) - Practical Edge Detection via Robust Collaborative Learning [11.176517889212015]
Edge detection is a core component in a wide range of vision-oriented tasks.
To achieve this goal, two key issues must be addressed.
How to free deep edge models from inefficient pre-trained backbones.
How to mitigate the negative influence of noisy or even wrong labels in the training data.
arXiv Detail & Related papers (2023-08-27T12:12:27Z) - Geometric-aware Pretraining for Vision-centric 3D Object Detection [77.7979088689944]
We propose a novel geometric-aware pretraining framework called GAPretrain.
GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors.
We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively.
arXiv Detail & Related papers (2023-04-06T14:33:05Z) - Introducing Depth into Transformer-based 3D Object Detection [24.224177932086455]
We present a Depth-Aware Transformer framework designed for camera-based 3D detection.
We show that DAT achieves a significant improvement of +2.8 NDS on nuScenes val under the same settings.
When using pre-trained VoVNet-99 as the backbone, DAT achieves strong results of 60.0 NDS and 51.5 mAP on nuScenes test.
arXiv Detail & Related papers (2023-02-25T06:28:32Z) - Lightweight Monocular Depth Estimation with an Edge Guided Network [34.03711454383413]
We present a novel lightweight Edge Guided Depth Estimation Network (EGD-Net).
In particular, we start out with a lightweight encoder-decoder architecture and embed an edge guidance branch.
In order to aggregate the context information and edge attention features, we design a transformer-based feature aggregation module.
arXiv Detail & Related papers (2022-09-29T14:45:47Z) - Rethinking Unsupervised Neural Superpixel Segmentation [6.123324869194195]
Unsupervised learning for superpixel segmentation via CNNs has been studied.
We propose three key elements to improve the efficacy of such networks.
Experimenting with the BSDS500 dataset, we find evidence for the significance of our proposal.
arXiv Detail & Related papers (2022-06-21T09:30:26Z) - Point-to-Voxel Knowledge Distillation for LiDAR Semantic Segmentation [74.67594286008317]
This article addresses the problem of distilling knowledge from a large teacher model to a slim student network for LiDAR semantic segmentation.
We propose the Point-to-Voxel Knowledge Distillation (PVD), which transfers the hidden knowledge from both point level and voxel level.
arXiv Detail & Related papers (2022-06-05T05:28:32Z) - Pyramid Grafting Network for One-Stage High Resolution Saliency Detection [29.013012579688347]
We propose a one-stage framework called Pyramid Grafting Network (PGNet) to extract features from different resolution images independently.
An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable the CNN branch to combine broken detailed information more holistically.
We contribute a new Ultra-High-Resolution Saliency Detection dataset UHRSD, containing 5,920 images at 4K-8K resolutions.
arXiv Detail & Related papers (2022-04-11T12:22:21Z) - Proximal PanNet: A Model-Based Deep Network for Pansharpening [11.695233311615498]
We propose a novel deep network for pansharpening by combining the model-based methodology with the deep learning method.
We unfold the iterative algorithm into a deep network, dubbed as Proximal PanNet, by learning the proximal operators using convolutional neural networks.
Experimental results on some benchmark datasets show that our network performs better than other advanced methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-02-12T15:49:13Z) - Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z) - Point Discriminative Learning for Unsupervised Representation Learning on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle level and global level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z) - Disentangle Your Dense Object Detector [82.22771433419727]
Deep learning-based dense object detectors have achieved great success in the past few years and have been applied to numerous multimedia applications such as video understanding.
However, the current training pipeline for dense detectors is compromised to lots of conjunctions that may not hold.
We propose Disentangled Dense Object Detector (DDOD), in which simple and effective disentanglement mechanisms are designed and integrated into the current state-of-the-art detectors.
arXiv Detail & Related papers (2021-07-07T00:52:16Z) - Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via the progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z) - SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z) - Compressive spectral image classification using 3D coded convolutional neural network [12.67293744927537]
This paper develops a novel deep learning HIC approach based on measurements from coded-aperture snapshot spectral imagers (CASSI).
A new kind of deep learning strategy, namely 3D coded convolutional neural network (3D-CCNN), is proposed to efficiently solve for the classification problem.
The accuracy of classification is effectively improved by exploiting the synergy between the deep learning network and coded apertures.
arXiv Detail & Related papers (2020-09-23T15:05:57Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z) - 2nd Place Scheme on Action Recognition Track of ECCV 2020 VIPriors Challenges: An Efficient Optical Flow Stream Guided Framework [57.847010327319964]
We propose a data-efficient framework that can train the model from scratch on small datasets.
Specifically, by introducing a 3D central difference convolution operation, we propose a novel C3D neural network-based two-stream framework.
We show that our method can achieve promising results even without a model pre-trained on large-scale datasets.
arXiv Detail & Related papers (2020-08-10T09:50:28Z) - Searching Central Difference Convolutional Networks for Face Anti-Spoofing [68.77468465774267]
Face anti-spoofing (FAS) plays a vital role in face recognition systems.
Most state-of-the-art FAS methods rely on stacked convolutions and expert-designed network.
Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC).
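CDC is commonly formulated as a mix of a vanilla convolution and a center-difference term, balanced by a hyperparameter theta. A single-channel NumPy sketch of that formulation (our own naive loop, not the paper's optimized implementation):

```python
import numpy as np

def central_difference_conv(img, w, theta=0.7):
    """Central Difference Convolution (CDC), single-channel sketch.

    y = theta * sum_n w_n * (x(p_n) - x(p_0))      # gradient-like term
      + (1 - theta) * sum_n w_n * x(p_n)           # vanilla convolution
    Equivalently: vanilla conv minus theta * x(p_0) * sum(w).
    """
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            vanilla = np.sum(w * patch)
            diff = np.sum(w * (patch - img[i, j]))
            out[i - 1, j - 1] = theta * diff + (1 - theta) * vanilla
    return out
```

Setting theta=0 recovers plain convolution and theta=1 gives a pure pixel-difference operator, making explicit how CDC interpolates between texture cues and gradient cues.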
arXiv Detail & Related papers (2020-03-09T12:48:37Z) - Towards High Performance Human Keypoint Detection [87.1034745775229]
We find that context information plays an important role in reasoning human body configuration and invisible keypoints.
Inspired by this, we propose a cascaded context mixer (CCM) which efficiently integrates spatial and channel context information.
To maximize CCM's representation capability, we develop a hard-negative person detection mining strategy and a joint-training strategy.
We present several sub-pixel refinement techniques for postprocessing keypoint predictions to improve detection accuracy.
arXiv Detail & Related papers (2020-02-03T02:24:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.