BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection
        - URL: http://arxiv.org/abs/2103.00334v1
 - Date: Sat, 27 Feb 2021 21:39:04 GMT
 - Title: BiconNet: An Edge-preserved Connectivity-based Approach for Salient Object Detection
 - Authors: Ziyun Yang, Somayyeh Soltanian-Zadeh, Sina Farsiu
 - Abstract summary: We show that our model can use any existing saliency-based SOD framework as its backbone.
Through comprehensive experiments on five benchmark datasets, we demonstrate that our proposed method outperforms state-of-the-art SOD approaches.
 - Score: 3.3517146652431378
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Salient object detection (SOD) is viewed as a pixel-wise saliency modeling
task by traditional deep learning-based methods. Although great progress has
been made, a challenge of modern SOD models is the insufficient utilization of
inter-pixel information, which usually results in imperfect segmentations near
the edge regions. As we demonstrate, using a saliency map as the network output
is a sub-optimal choice. To address this problem, we propose a
connectivity-based approach named bilateral connectivity network (BiconNet),
which uses a connectivity map instead of a saliency map as the network output
for effective modeling of inter-pixel relationships and object saliency.
Moreover, we propose a bilateral voting module to enhance the output
connectivity map and a novel edge feature enhancement method that efficiently
utilizes edge-specific features with negligible parameter increase. We show
that our model can use any existing saliency-based SOD framework as its
backbone. Through comprehensive experiments on five benchmark datasets, we
demonstrate that our proposed method outperforms state-of-the-art SOD
approaches.
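
To make the connectivity-based output concrete, below is a minimal NumPy sketch of the two core ideas named in the abstract: building an 8-channel connectivity map from a binary saliency mask (channel d marks pixels that are salient together with their d-th neighbor) and a bilateral voting step in which a connection is kept only if both of its endpoints predict it. The function names, channel ordering, multiplicative vote, and the mean aggregation at the end are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

# Eight neighbor offsets (dy, dx); OPPOSITE[d] is the direction that points
# back from the d-th neighbor to the center pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),           (0, 1),
           (1, -1),  (1, 0),  (1, 1)]
OPPOSITE = [7, 6, 5, 4, 3, 2, 1, 0]

def shift(arr, dy, dx):
    """Translate a 2-D array so that out[y, x] = arr[y + dy, x + dx],
    zero-padding at the borders."""
    out = np.zeros_like(arr)
    h, w = arr.shape
    src_y = slice(max(dy, 0), h + min(dy, 0))
    src_x = slice(max(dx, 0), w + min(dx, 0))
    dst_y = slice(max(-dy, 0), h + min(-dy, 0))
    dst_x = slice(max(-dx, 0), w + min(-dx, 0))
    out[dst_y, dst_x] = arr[src_y, src_x]
    return out

def connectivity_map(mask):
    """Binary saliency mask (H, W) -> connectivity map (8, H, W):
    channel d is 1 where the pixel and its d-th neighbor are both salient."""
    mask = mask.astype(np.float32)
    return np.stack([mask * shift(mask, dy, dx) for dy, dx in OFFSETS])

def bilateral_voting(conn):
    """Enhance a predicted connectivity map (8, H, W): each directed
    connection is confirmed by the neighbor's opposite-direction prediction."""
    voted = np.empty_like(conn)
    for d, (dy, dx) in enumerate(OFFSETS):
        # The neighbor's belief about the same edge, pulled back to this pixel.
        voted[d] = conn[d] * shift(conn[OPPOSITE[d]], dy, dx)
    return voted

if __name__ == "__main__":
    mask = np.zeros((5, 5)); mask[1:4, 1:4] = 1.0   # toy salient square
    target = connectivity_map(mask)                 # training target (8, 5, 5)
    voted = bilateral_voting(target)                # unchanged on clean labels
    saliency = voted.mean(axis=0)                   # crude map aggregation
    print(saliency.round(2))
```

On a clean ground truth the voting step is a no-op, which is the point: it only suppresses connections that a noisy prediction asserts from one side but not the other, tightening the map near object edges.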
        Related papers
- B2Net: Camouflaged Object Detection via Boundary Aware and Boundary Fusion [10.899493419708651]
We propose a novel network named B2Net to enhance the accuracy of the obtained boundaries.
We present a Residual Feature Enhanced Module (RFEM) aimed at integrating more discriminative feature representations.
After that, a Boundary Aware Module (BAM) is introduced to explore edge cues twice.
Finally, we design a Cross-scale Boundary Fusion Module (CBFM) that integrates information across different scales in a top-down manner.
arXiv Detail & Related papers (2024-12-31T13:06:06Z)
- Efficient Detection Framework Adaptation for Edge Computing: A Plug-and-play Neural Network Toolbox Enabling Edge Deployment [59.61554561979589]
Edge computing has emerged as a key paradigm for deploying deep learning-based object detection in time-sensitive scenarios.
Existing edge detection methods face challenges: difficulty balancing detection precision with lightweight models, limited adaptability, and insufficient real-world validation.
We propose the Edge Detection Toolbox (ED-TOOLBOX), which utilizes generalizable plug-and-play components to adapt object detection models for edge environments.
arXiv Detail & Related papers (2024-12-24T07:28:10Z)
- CCSPNet-Joint: Efficient Joint Training Method for Traffic Sign Detection Under Extreme Conditions [3.6190463374643795]
CCSPNet is an efficient feature extraction module based on Contextual Transformer and CNN.
We propose a joint training model, CCSPNet-Joint, to improve data efficiency and generalization.
Experiments have shown that CCSPNet achieves state-of-the-art performance in traffic sign detection under extreme conditions.
arXiv Detail & Related papers (2023-09-13T12:00:33Z)
- Feature Aggregation and Propagation Network for Camouflaged Object Detection [42.33180748293329]
Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment.
Several COD methods have been developed, but they still suffer from unsatisfactory performance due to intrinsic similarities between foreground objects and background surroundings.
We propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection.
arXiv Detail & Related papers (2022-12-02T05:54:28Z)
- Position-Aware Relation Learning for RGB-Thermal Salient Object Detection [3.115635707192086]
We propose a position-aware relation learning network (PRLNet) for RGB-T SOD based on the Swin Transformer.
PRLNet explores the distance and direction relationships between pixels to strengthen intra-class compactness and inter-class separation.
In addition, we construct a pure transformer encoder-decoder network to enhance multispectral feature representation for RGB-T SOD.
arXiv Detail & Related papers (2022-09-21T07:34:30Z)
- Road detection via a dual-task network based on cross-layer graph fusion modules [2.8197257696982287]
We propose a dual-task network (DTnet) for road detection and a cross-layer graph fusion module (CGM).
The CGM improves cross-layer fusion through a complex feature-stream graph; four graph patterns are evaluated.
arXiv Detail & Related papers (2022-08-17T07:16:55Z)
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M dataset and define two real, practical instance-level retrieval tasks.
We then train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed the Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep auto-encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z)
- Centralized Information Interaction for Salient Object Detection [68.8587064889475]
The U-shape structure has shown its advantage in salient object detection for efficiently combining multi-scale features.
This paper shows that by centralizing the connections between the bottom-up and top-down pathways, cross-scale information interaction can be achieved among them.
Our approach can cooperate with various existing U-shape-based salient object detection methods by substituting the connections between the bottom-up and top-down pathways.
arXiv Detail & Related papers (2020-12-21T12:42:06Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, PC-RGNN, that addresses these challenges with two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- Saliency Enhancement using Gradient Domain Edges Merging [65.90255950853674]
We develop a method that merges edges with saliency maps to improve saliency performance.
This leads to our proposed saliency enhancement using edges (SEE), with an average improvement of at least 3.4 times on the DUT-OMRON dataset.
The SEE algorithm is split into two parts: SEE-Pre for preprocessing and SEE-Post for postprocessing.
arXiv Detail & Related papers (2020-02-11T14:04:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.