ACFNet: Adaptively-Cooperative Fusion Network for RGB-D Salient Object
Detection
- URL: http://arxiv.org/abs/2109.04627v1
- Date: Fri, 10 Sep 2021 02:34:27 GMT
- Authors: Jinchao Zhu
- Abstract summary: We propose an adaptively-cooperative fusion network (ACFNet) with a ResinRes structure for salient object detection.
For different objects, the features generated by different types of convolution are enhanced or suppressed by a gating mechanism to optimize segmentation.
Extensive experiments on RGB-D SOD datasets show that the proposed network performs favorably against 18 state-of-the-art algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The sensible use of RGB and depth data is of great significance for
advancing computer vision tasks and robot-environment interaction. However,
early fusion and late fusion of the two data types each have their own
advantages and disadvantages. Moreover, because object information is diverse,
relying on a single data type in a given scenario tends to be semantically
misleading. Based on these considerations, we propose an adaptively-cooperative
fusion network (ACFNet) with a ResinRes structure for salient object detection.
This structure is designed to flexibly exploit the advantages of feature fusion
at both early and late stages. In addition, an adaptively-cooperative semantic
guidance (ACG) scheme is designed to suppress inaccurate features in the
guidance phase. Furthermore, we propose a type-based attention module (TAM) to
optimize the network and enhance the multi-scale perception of different
objects. For different objects, the features generated by different types of
convolution are enhanced or suppressed by a gating mechanism to optimize
segmentation. ACG and TAM optimize the transfer of feature streams according to
their data attributes and convolution attributes, respectively. Extensive
experiments on RGB-D SOD datasets show that the proposed network performs
favorably against 18 state-of-the-art algorithms.
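
The abstract describes TAM's gating idea only at a high level. As a purely
illustrative sketch (module and parameter names are assumptions, not the
authors' released code), the PyTorch snippet below shows one way features from
two convolution types can be enhanced or suppressed by a learned gate computed
from a globally pooled descriptor, so the weighting adapts per input:

```python
import torch
import torch.nn as nn


class GatedTypeAttention(nn.Module):
    """Illustrative sketch of a type-based gating module (not the official TAM).

    Two convolution "types" (a plain 3x3 and a dilated 3x3) produce candidate
    features; a lightweight gate predicted from global context decides how much
    each type contributes to the fused output.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Two convolution types with different receptive fields.
        self.conv_plain = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        # Gate: global average pooling -> one weight in [0, 1] per convolution type.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_plain = self.conv_plain(x)
        f_dilated = self.conv_dilated(x)
        g = self.gate(x)                           # shape: (B, 2, 1, 1)
        g_plain, g_dilated = g[:, 0:1], g[:, 1:2]
        # Each convolution type is enhanced or suppressed by its gate.
        return x + g_plain * f_plain + g_dilated * f_dilated


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)              # a dummy feature map
    out = GatedTypeAttention(64)(feat)
    print(out.shape)                               # torch.Size([2, 64, 32, 32])
```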
Related papers
- Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection [95.84616822805664]
We introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement.
In order to alleviate the block effect and detail destruction problems brought by the Transformer naturally, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation.
arXiv Detail & Related papers (2023-08-17T11:57:49Z)
- ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection [25.66305300362193]
A novel feature fusion framework of dual cross-attention transformers is proposed to model global feature interaction.
This framework enhances the discriminability of object features through a query-guided cross-attention mechanism (a generic cross-attention fusion sketch appears after this list).
The proposed method achieves superior performance and faster inference, making it suitable for various practical scenarios.
arXiv Detail & Related papers (2023-08-15T00:02:10Z)
- Feature Aggregation and Propagation Network for Camouflaged Object Detection [42.33180748293329]
Camouflaged object detection (COD) aims to detect/segment camouflaged objects embedded in the environment.
Several COD methods have been developed, but they still suffer from unsatisfactory performance due to intrinsic similarities between foreground objects and background surroundings.
We propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection.
arXiv Detail & Related papers (2022-12-02T05:54:28Z)
- Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection [67.33924278729903]
In this work, we propose Dual Swin-Transformer based Mutual Interactive Network.
We adopt Swin-Transformer as the feature extractor for both RGB and depth modality to model the long-range dependencies in visual inputs.
Comprehensive experiments on five standard RGB-D SOD benchmark datasets demonstrate the superiority of the proposed DTMINet method.
arXiv Detail & Related papers (2022-06-07T08:35:41Z)
- Transformer-based Network for RGB-D Saliency Detection [82.6665619584628]
Key to RGB-D saliency detection is to fully mine and fuse information at multiple scales across the two modalities.
We show that the transformer is a uniform operation that is highly effective for both feature fusion and feature enhancement.
Our proposed network performs favorably against state-of-the-art RGB-D saliency detection methods.
arXiv Detail & Related papers (2021-12-01T15:53:58Z)
- Perception-and-Regulation Network for Salient Object Detection [8.026227647732792]
We propose a novel global attention unit that adaptively regulates the feature fusion process by explicitly modeling interdependencies between features.
The perception part uses the structure of fully-connected layers in classification networks to learn the size and shape of objects.
An imitating eye observation module (IEO) is further employed to improve the global perception ability of the network.
arXiv Detail & Related papers (2021-07-27T02:38:40Z)
- Deep feature selection-and-fusion for RGB-D semantic segmentation [8.831857715361624]
This work proposes a unified and efficient feature selection-and-fusion network (FSFNet).
FSFNet contains a symmetric cross-modality residual fusion module used for explicit fusion of multi-modality information.
Compared with the state-of-the-art methods, experimental evaluations demonstrate that the proposed model achieves competitive performance on two public datasets.
arXiv Detail & Related papers (2021-05-10T04:02:32Z)
- Self-Supervised Representation Learning for RGB-D Salient Object Detection [93.17479956795862]
We use Self-Supervised Representation Learning to design two pretext tasks: the cross-modal auto-encoder and the depth-contour estimation.
Our pretext tasks require only a few unlabeled RGB-D datasets for pre-training, which enables the network to capture rich semantic contexts.
For the inherent problem of cross-modal fusion in RGB-D SOD, we propose a multi-path fusion module.
arXiv Detail & Related papers (2021-01-29T09:16:06Z)
- Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation [59.94819184452694]
Depth information has proven to be a useful cue in the semantic segmentation of RGBD images for providing a geometric counterpart to the RGB representation.
Most existing works simply assume that depth measurements are accurate and well-aligned with the RGB pixels and model the problem as cross-modal feature fusion.
In this paper, we propose a unified and efficient Cross-modality Guided Encoder that not only effectively recalibrates RGB feature responses, but also distills accurate depth information via multiple stages and aggregates the two recalibrated representations alternately.
arXiv Detail & Related papers (2020-07-17T18:35:24Z)
- Hierarchical Dynamic Filtering Network for RGB-D Salient Object Detection [91.43066633305662]
The central question in RGB-D salient object detection (SOD) is how to better integrate and utilize cross-modal fusion information.
In this paper, we explore these issues from a new perspective.
We implement a more flexible and efficient multi-scale cross-modal feature processing scheme.
arXiv Detail & Related papers (2020-07-13T07:59:55Z)
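
Several of the related papers above (e.g. ICAFusion and the Transformer-based
RGB-D saliency network) fuse the two modalities with attention. The sketch
below is a generic, hypothetical illustration of query-guided cross-modal
attention fusion under assumed tensor shapes and module names; it is not the
implementation of any of the listed methods:

```python
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Generic cross-attention fusion of RGB and depth token features.

    Hypothetical sketch: RGB tokens attend to depth tokens (and vice versa),
    and the two refined streams are summed into a fused representation.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_depth = nn.LayerNorm(dim)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb, depth: (B, N, C) token sequences, e.g. flattened feature maps.
        rgb_refined, _ = self.rgb_from_depth(query=rgb, key=depth, value=depth)
        depth_refined, _ = self.depth_from_rgb(query=depth, key=rgb, value=rgb)
        rgb = self.norm_rgb(rgb + rgb_refined)       # residual + normalization
        depth = self.norm_depth(depth + depth_refined)
        return rgb + depth                           # fused cross-modal tokens


if __name__ == "__main__":
    b, n, c = 2, 196, 256                            # e.g. a 14x14 map flattened
    fused = CrossModalAttentionFusion(c)(torch.randn(b, n, c), torch.randn(b, n, c))
    print(fused.shape)                               # torch.Size([2, 196, 256])
```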