Learning Crisp Edge Detector Using Logical Refinement Network
- URL: http://arxiv.org/abs/2007.12449v1
- Date: Fri, 24 Jul 2020 11:12:48 GMT
- Title: Learning Crisp Edge Detector Using Logical Refinement Network
- Authors: Luyan Liu, Kai Ma, Yefeng Zheng
- Abstract summary: We propose a novel logical refinement network for crisp edge detection, which is motivated by the logical relationship between segmentation and edge maps.
The network consists of a joint object and edge detection network and a crisp edge refinement network, which predicts more accurate, clearer and thinner high-quality binary edge maps.
- Score: 29.59728791893451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge detection is a fundamental problem in many computer vision
tasks. Recently, edge detection algorithms built upon deep learning have
achieved substantial improvement. Although most of them report favorable
evaluation scores, they often fail to accurately localize edges and instead
produce thick and blurry boundaries. In addition, most of them focus on 2D
images, and the more challenging problem of 3D edge detection remains
under-explored. In this work, we propose a novel logical refinement network
for crisp edge detection, which is motivated by the logical relationship
between segmentation and edge maps and can be applied to both 2D and 3D
images. The network consists of a joint object and edge detection network and
a crisp edge refinement network, and it predicts more accurate, clearer and
thinner high-quality binary edge maps without any post-processing. Extensive
experiments on 2D nuclei images from the Kaggle 2018 Data Science Bowl and a
private 3D microscopy dataset of a monkey brain show outstanding performance
compared with state-of-the-art methods.
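To make the two-stage design concrete, here is a minimal PyTorch sketch (module names, channel sizes and heads are hypothetical; the paper's actual architecture is more elaborate): a first network jointly predicts an object map and a coarse edge map, and a second network refines the coarse edges into a crisp map.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, a common building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class JointObjectEdgeNet(nn.Module):
    """Stage 1: predicts an object (segmentation) map and a coarse edge map."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.backbone = conv_block(in_ch, feat)
        self.seg_head = nn.Conv2d(feat, 1, 1)   # object probability map
        self.edge_head = nn.Conv2d(feat, 1, 1)  # coarse (possibly thick) edge map

    def forward(self, x):
        f = self.backbone(x)
        return torch.sigmoid(self.seg_head(f)), torch.sigmoid(self.edge_head(f))

class CrispEdgeRefineNet(nn.Module):
    """Stage 2: refines the coarse edge map into a thin, crisp one,
    conditioned on the image and the segmentation map."""
    def __init__(self, feat=32):
        super().__init__()
        self.refine = nn.Sequential(conv_block(3, feat), nn.Conv2d(feat, 1, 1))

    def forward(self, image, seg, coarse_edge):
        return torch.sigmoid(self.refine(torch.cat([image, seg, coarse_edge], dim=1)))

# Usage on a dummy 2D image; for 3D volumes the Conv2d layers would become Conv3d.
x = torch.randn(1, 1, 64, 64)
stage1, stage2 = JointObjectEdgeNet(), CrispEdgeRefineNet()
seg, coarse = stage1(x)
crisp = stage2(x, seg, coarse)  # final thin edge probability map
```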
Related papers
- Learning to utilize image second-order derivative information for crisp edge detection [13.848361661516595]
Edge detection is a fundamental task in computer vision.
Recent top-performing edge detection methods tend to generate thick and noisy edge lines.
We propose a second-order derivative-based multi-scale contextual enhancement module (SDMCM) to help the model locate true edge pixels accurately.
We also construct a hybrid focal loss function (HFL) to alleviate the imbalanced distribution between edge and non-edge pixels.
Finally, we propose a U-shaped network named LUS-Net, built on the SDMCM and BRM, for edge detection.
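The summary does not give the exact form of the HFL, but the imbalance it targets, edge pixels being vastly outnumbered by non-edge pixels, is the same one a focal-style loss addresses. A minimal sketch with hypothetical weights alpha and gamma, not the paper's exact formulation:

```python
import torch

def focal_edge_loss(pred, target, alpha=0.75, gamma=2.0, eps=1e-6):
    """Focal-style binary loss for edge maps: down-weights easy (mostly
    non-edge) pixels so the few edge pixels dominate the gradient.
    `pred` holds probabilities in (0, 1); `target` is a binary edge map."""
    pred = pred.clamp(eps, 1 - eps)
    pos = -alpha * (1 - pred) ** gamma * target * torch.log(pred)
    neg = -(1 - alpha) * pred ** gamma * (1 - target) * torch.log(1 - pred)
    return (pos + neg).mean()

# Usage: a heavily imbalanced map with few edge pixels.
pred = torch.rand(1, 1, 64, 64)
target = (torch.rand(1, 1, 64, 64) > 0.95).float()
print(focal_edge_loss(pred, target))
```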
arXiv Detail & Related papers (2024-06-09T13:25:02Z)
- PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition [55.38462937452363]
We propose a unified multi-view cross-modal distillation architecture, including a pretrained deep image encoder as the teacher and a deep point encoder as the student.
By pair-wise aligning multi-view visual and geometric descriptors, we can obtain more powerful deep point encoders without exhaustive and complicated network modification.
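A minimal sketch of the pair-wise alignment idea, assuming a cosine objective between per-view descriptors and a frozen teacher (the paper's exact loss and descriptor construction may differ):

```python
import torch
import torch.nn.functional as F

def pairwise_alignment_loss(student_feats, teacher_feats):
    """Cross-modal distillation by pair-wise descriptor alignment:
    pull each point-cloud descriptor toward the image descriptor of the
    same shape and view. Both tensors: (batch, num_views, dim)."""
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats.detach(), dim=-1)  # teacher is frozen
    return (1 - (s * t).sum(dim=-1)).mean()          # 1 - cosine similarity

# Usage with hypothetical shapes: 4 shapes, 6 rendered views, 256-d descriptors.
student = torch.randn(4, 6, 256, requires_grad=True)
teacher = torch.randn(4, 6, 256)
loss = pairwise_alignment_loss(student, teacher)
loss.backward()  # gradients flow only into the student encoder
```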
arXiv Detail & Related papers (2022-07-07T07:23:20Z)
- DetMatch: Two Teachers are Better Than One for Joint 2D and 3D Semi-Supervised Object Detection [29.722784254501768]
DetMatch is a flexible framework for joint semi-supervised learning on 2D and 3D modalities.
By identifying objects detected in both sensors, our pipeline generates a cleaner, more robust set of pseudo-labels.
We leverage the richer semantics of RGB images to rectify incorrect 3D class predictions and improve localization of 3D boxes.
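One way to picture the pseudo-label cleaning is sketched below; the helper names are hypothetical and the projection of 3D boxes into the image plane is taken as given:

```python
def iou_2d(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def clean_pseudo_labels(boxes3d_proj, classes3d, boxes2d, classes2d, thr=0.5):
    """Keep a 3D pseudo-label only when a 2D detection overlaps its image
    projection; adopt the 2D class, whose RGB semantics are richer, to
    rectify the 3D class prediction."""
    kept = []
    for b3, c3 in zip(boxes3d_proj, classes3d):
        for b2, c2 in zip(boxes2d, classes2d):
            if iou_2d(b3, b2) >= thr:
                kept.append((b3, c2))  # 3D box kept, class rectified from 2D
                break
    return kept

# Toy example: the 3D detector says "cyclist", the overlapping 2D detector
# says "pedestrian"; the kept pseudo-label adopts the 2D class.
print(clean_pseudo_labels([(10, 10, 50, 50)], ["cyclist"],
                          [(12, 8, 52, 48)], ["pedestrian"]))
```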
arXiv Detail & Related papers (2022-03-17T17:58:00Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
In addition, we obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- Learning Joint 2D-3D Representations for Depth Completion [90.62843376586216]
We design a simple yet effective neural network block that learns to extract joint 2D and 3D features.
Specifically, the block consists of two domain-specific sub-networks that apply 2D convolution on image pixels and continuous convolution on 3D points.
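A minimal sketch of such a block, with a standard 2D convolution branch and, as a stand-in for continuous convolution, a simple shared MLP over point features (a deliberate simplification; names and sizes are hypothetical):

```python
import torch
import torch.nn as nn

class Joint2D3DBlock(nn.Module):
    """Two domain-specific branches whose outputs are fused per point:
    a 2D convolution on image features and a point-domain branch standing
    in for continuous convolution (here a shared MLP, a simplification)."""
    def __init__(self, img_ch=32, pt_ch=32, out_ch=64):
        super().__init__()
        self.conv2d = nn.Conv2d(img_ch, out_ch, 3, padding=1)
        self.point_mlp = nn.Sequential(nn.Linear(pt_ch + 3, out_ch), nn.ReLU())

    def forward(self, img_feat, pt_feat, pt_xyz, pt_pix):
        # img_feat: (B, C, H, W); pt_feat: (B, N, C); pt_xyz: (B, N, 3)
        # pt_pix: (B, N, 2) integer pixel coordinates of each 3D point.
        f2d = self.conv2d(img_feat)                                  # 2D branch
        f3d = self.point_mlp(torch.cat([pt_feat, pt_xyz], dim=-1))   # 3D branch
        b = torch.arange(f2d.shape[0]).unsqueeze(1)
        sampled = f2d[b, :, pt_pix[..., 1], pt_pix[..., 0]]  # 2D feats at points
        return f3d + sampled                                 # fused joint feature

# Usage with toy sizes: 2 scenes, 100 points each, 16x16 feature maps.
blk = Joint2D3DBlock()
img = torch.randn(2, 32, 16, 16)
pts, xyz = torch.randn(2, 100, 32), torch.randn(2, 100, 3)
pix = torch.randint(0, 16, (2, 100, 2))
out = blk(img, pts, xyz, pix)  # (2, 100, 64)
```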
arXiv Detail & Related papers (2020-12-22T22:58:29Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture performs fusion in two stages.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds [37.703770427574476]
In this paper, we tackle the 3D semantic edge detection task for the first time.
We present a new two-stream fully-convolutional network that jointly performs the two tasks.
In particular, we design a joint refinement module that explicitly wires region information and edge information together to improve the performance of both tasks.
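A 2D-grid analogue of the joint refinement idea can be sketched as follows (JSENet itself operates on point clouds, so this only illustrates the region-edge wiring):

```python
import torch
import torch.nn as nn

class JointRefinement(nn.Module):
    """Region (segmentation) features help sharpen edge predictions,
    and edge features feed back into the region stream."""
    def __init__(self, ch=32):
        super().__init__()
        self.seg_to_edge = nn.Conv2d(ch, ch, 3, padding=1)
        self.edge_to_seg = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, seg_feat, edge_feat):
        edge_refined = edge_feat + self.seg_to_edge(seg_feat)  # region -> edge
        seg_refined = seg_feat + self.edge_to_seg(edge_feat)   # edge -> region
        return seg_refined, edge_refined

# Usage with toy feature maps.
seg, edge = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
seg_r, edge_r = JointRefinement()(seg, edge)
```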
arXiv Detail & Related papers (2020-07-14T08:00:35Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available for even a single image pixel, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
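The core of such very sparse supervision is that the loss is evaluated only at the handful of pixels where ground truth exists; a minimal sketch:

```python
import torch

def sparse_depth_loss(pred, gt, mask):
    """L1 depth loss evaluated only where sparse ground truth exists.
    `mask` is 1 at supervised pixels (possibly a single pixel per image)."""
    diff = (pred - gt).abs() * mask
    return diff.sum() / mask.sum().clamp(min=1)

# Usage: supervise exactly one pixel per image.
pred = torch.rand(2, 1, 32, 32, requires_grad=True)
gt = torch.rand(2, 1, 32, 32)
mask = torch.zeros_like(gt)
mask[:, :, 16, 16] = 1.0
sparse_depth_loss(pred, gt, mask).backward()
```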
arXiv Detail & Related papers (2020-03-02T10:44:13Z)
- Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method that merges edge maps with saliency maps to improve saliency performance.
This leads to our proposed saliency enhancement using edges (SEE), yielding an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
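A simplified illustration of merging edges into a saliency map (the actual SEE method operates in the gradient domain, which this sketch does not reproduce):

```python
import torch

def enhance_saliency_with_edges(saliency, edges, w=0.5):
    """Merge an edge map into a saliency map by boosting saliency near
    edges, then renormalizing. Both inputs are 2D tensors in [0, 1]."""
    merged = saliency + w * edges * saliency
    return merged / merged.max().clamp(min=1e-9)

# Usage with random maps.
sal, edg = torch.rand(64, 64), torch.rand(64, 64)
out = enhance_saliency_with_edges(sal, edg)
```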
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.