RINDNet: Edge Detection for Discontinuity in Reflectance, Illumination,
Normal and Depth
- URL: http://arxiv.org/abs/2108.00616v1
- Date: Mon, 2 Aug 2021 03:30:01 GMT
- Title: RINDNet: Edge Detection for Discontinuity in Reflectance, Illumination,
Normal and Depth
- Authors: Mengyang Pu, Yaping Huang, Qingji Guan and Haibin Ling
- Abstract summary: We propose a novel neural network solution, RINDNet, to jointly detect all four types of edges.
RINDNet learns effective representations for each of them and works in three stages.
In our experiments, RINDNet yields promising results in comparison with state-of-the-art methods.
- Score: 70.25160895688464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a fundamental building block in computer vision, edges can be categorised
into four types according to the discontinuity in surface-Reflectance,
Illumination, surface-Normal or Depth. While great progress has been made in
detecting generic or individual types of edges, it remains under-explored to
comprehensively study all four edge types together. In this paper, we propose a
novel neural network solution, RINDNet, to jointly detect all four types of
edges. Taking into consideration the distinct attributes of each type of edges
and the relationship between them, RINDNet learns effective representations for
each of them and works in three stages. In stage I, RINDNet uses a common
backbone to extract features shared by all edges. Then in stage II it branches
to prepare discriminative features for each edge type by the corresponding
decoder. In stage III, an independent decision head for each type aggregates
the features from previous stages to predict the initial results. Additionally,
an attention module learns attention maps for all types to capture the
underlying relations between them, and these maps are combined with initial
results to generate the final edge detection results. For training and
evaluation, we construct the first public benchmark, BSDS-RIND, with all four
types of edges carefully annotated. In our experiments, RINDNet yields
promising results in comparison with state-of-the-art methods. Additional
analysis is presented in supplementary material.
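The three-stage flow described in the abstract (shared backbone, per-type decoders, independent decision heads, plus an attention module combined with the initial results) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer shapes, the use of random linear+ReLU maps in place of real convolutions, and the sigmoid gating are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

EDGE_TYPES = ["reflectance", "illumination", "normal", "depth"]

def layer(in_dim, out_dim):
    # Stand-in for a learned layer: random linear map + ReLU (illustrative only).
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# Stage I: a common backbone extracts features shared by all four edge types.
backbone = layer(64, 32)

# Stage II: one decoder per edge type prepares discriminative features.
decoders = {t: layer(32, 16) for t in EDGE_TYPES}

# Stage III: an independent decision head per type predicts the initial result.
heads = {t: layer(16, 1) for t in EDGE_TYPES}

# Attention module: one attention map per type, here a simple sigmoid gate.
attn = {t: layer(32, 1) for t in EDGE_TYPES}

def predict(pixels):
    """pixels: (N, 64) per-pixel descriptors -> dict of (N, 1) edge scores."""
    shared = backbone(pixels)                          # Stage I
    out = {}
    for t in EDGE_TYPES:
        feats = decoders[t](shared)                    # Stage II
        initial = heads[t](feats)                      # Stage III
        gate = 1.0 / (1.0 + np.exp(-attn[t](shared)))  # attention map
        out[t] = initial * gate                        # combine for final result
    return out

scores = predict(rng.standard_normal((8, 64)))
print({t: v.shape for t, v in scores.items()})
```

The key design point the sketch reflects is that all four edge types share one feature extractor but diverge into type-specific decoders and heads, with the attention maps modulating the initial per-type predictions.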
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z) - SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding [56.079013202051094]
We present SegVG, a novel method that transfers box-level annotations into signals providing additional pixel-level supervision for Visual Grounding.
This approach allows us to iteratively exploit the annotation as signals for both box-level regression and pixel-level segmentation.
arXiv Detail & Related papers (2024-07-03T15:30:45Z) - ECT: Fine-grained Edge Detection with Learned Cause Tokens [19.271691951077617]
We propose a two-stage transformer-based network that sequentially predicts generic edges and then fine-grained edges.
We evaluate our method on the public benchmark BSDS-RIND and several newly derived benchmarks, and achieve new state-of-the-art results.
arXiv Detail & Related papers (2023-08-06T11:37:55Z) - Edge-Aware Mirror Network for Camouflaged Object Detection [5.032585246295627]
We propose a novel Edge-aware Mirror Network (EAMNet) to model edge detection and camouflaged object segmentation.
EAMNet has a two-branch architecture, where a segmentation-induced edge aggregation module and an edge-induced integrity aggregation module are designed to cross-guide the segmentation branch and edge detection branch.
Experiment results show that EAMNet outperforms existing cutting-edge baselines on three widely used COD datasets.
arXiv Detail & Related papers (2023-07-08T08:14:49Z) - Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that specifies the use of each edge: every edge serves solely as either topology or supervision.
To emphasize the difference between pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect this difference.
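The edge-splitting idea above can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the EMPIRE authors' code: the function name, the split ratio, and the random partition strategy are all hypothetical. The core point it shows is that the edge set is partitioned into two disjoint roles, topology (used for message passing) and supervision (used as prediction targets).

```python
import random

def split_edges(edges, sup_ratio=0.3, seed=0):
    """Partition graph edges into disjoint topology and supervision sets.

    Each edge is used solely as either topology (message passing) or
    supervision (a positive pair for the prediction loss), never both.
    """
    rng = random.Random(seed)
    shuffled = edges[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * sup_ratio)
    supervision = shuffled[:k]   # positive pairs for the edge-prediction loss
    topology = shuffled[k:]      # edges retained for message passing only
    return topology, supervision

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
topo, sup = split_edges(edges)
# The two sets are disjoint and together cover the original edge set.
assert set(topo) | set(sup) == set(edges) and not set(topo) & set(sup)
```

Keeping the two roles disjoint prevents label leakage: an edge the model must predict is never also visible to it as graph structure.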
arXiv Detail & Related papers (2022-12-25T23:19:56Z) - Dense Extreme Inception Network for Edge Detection [0.0]
Edge detection is the basis of many computer vision applications.
Most of the publicly available datasets are not curated for edge detection tasks.
We present a new dataset of edges.
We propose a novel architecture, termed Dense Extreme Inception Network for Edge Detection (DexiNed).
arXiv Detail & Related papers (2021-12-04T05:38:50Z) - AttrE2vec: Unsupervised Attributed Edge Representation Learning [22.774159996012276]
This paper proposes a novel unsupervised inductive method called AttrE2Vec, which learns a low-dimensional vector representation for edges in attributed networks.
Experimental results show that, compared to contemporary approaches, our method builds more powerful edge vector representations.
arXiv Detail & Related papers (2020-12-29T12:20:49Z) - PIE-NET: Parametric Inference of Point Cloud Edges [40.27043782820615]
We introduce an end-to-end learnable technique to robustly identify feature edges in 3D point cloud data.
Our deep neural network, coined PIE-NET, is trained for parametric inference of edges.
arXiv Detail & Related papers (2020-07-09T15:35:10Z) - Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method to merge the edges with the saliency maps to improve the performance of the saliency.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z) - Learning multiview 3D point cloud registration [74.39499501822682]
We present a novel, end-to-end learnable, multiview 3D point cloud registration algorithm.
Our approach outperforms the state-of-the-art by a significant margin, while being end-to-end trainable and computationally less costly.
arXiv Detail & Related papers (2020-01-15T03:42:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.