Multi-Content Complementation Network for Salient Object Detection in
Optical Remote Sensing Images
- URL: http://arxiv.org/abs/2112.01932v1
- Date: Thu, 2 Dec 2021 04:46:40 GMT
- Title: Multi-Content Complementation Network for Salient Object Detection in
Optical Remote Sensing Images
- Authors: Gongyang Li, Zhi Liu, Weisi Lin, Haibin Ling
- Abstract summary: Salient object detection in optical remote sensing images (RSI-SOD) remains a challenging emerging topic.
We propose a novel Multi-Content Complementation Network (MCCNet) to explore the complementarity of multiple content types for RSI-SOD.
In MCCM, we consider multiple types of features that are critical to RSI-SOD, including foreground features, edge features, background features, and global image-level features.
- Score: 108.79667788962425
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the computer vision community, great progress has been achieved in
salient object detection from natural scene images (NSI-SOD); by contrast,
salient object detection in optical remote sensing images (RSI-SOD) remains a
challenging emerging topic. The unique characteristics of optical RSIs, such as
scale, illumination, and imaging orientation, introduce significant differences
between NSI-SOD and RSI-SOD. In this paper, we propose a novel Multi-Content
Complementation Network (MCCNet) to explore the complementarity of multiple
content types for RSI-SOD. Specifically, MCCNet is based on the general
encoder-decoder architecture, and contains a novel key component named
Multi-Content Complementation Module (MCCM), which bridges the encoder and the
decoder. In MCCM, we consider multiple types of features that are critical to
RSI-SOD, including foreground features, edge features, background features, and
global image-level features, and exploit the content complementarity between
them to highlight salient regions over various scales in RSI features through
the attention mechanism. In addition, we introduce pixel-level, map-level, and
metric-aware losses in the training phase. Extensive experiments
on two popular datasets demonstrate that the proposed MCCNet outperforms 23
state-of-the-art methods, including both NSI-SOD and RSI-SOD methods. The code
and results of our method are available at https://github.com/MathLee/MCCNet.
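The abstract names three loss families (pixel-level, map-level, and metric-aware) but does not give their formulas. As a hedged illustration only (our own reconstruction, not the paper's exact definitions, and with equal weighting assumed), a common instantiation pairs a pixel-level binary cross-entropy, a map-level soft-IoU loss, and a metric-aware F-measure surrogate over a predicted saliency map `p` and ground truth `g`:

```python
import numpy as np

def pixel_bce(p, g, eps=1e-7):
    # Pixel-level loss: binary cross-entropy averaged over all pixels.
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(g * np.log(p) + (1 - g) * np.log(1 - p))))

def map_iou(p, g, eps=1e-7):
    # Map-level loss: soft IoU computed over the whole saliency map.
    inter = np.sum(p * g)
    union = np.sum(p) + np.sum(g) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def metric_fmeasure(p, g, beta2=0.3, eps=1e-7):
    # Metric-aware loss: differentiable surrogate of the F-measure,
    # the metric commonly reported in SOD evaluation.
    tp = np.sum(p * g)
    prec = tp / (np.sum(p) + eps)
    rec = tp / (np.sum(g) + eps)
    f = (1 + beta2) * prec * rec / (beta2 * prec + rec + eps)
    return float(1.0 - f)

def total_loss(p, g):
    # Equal weighting is an assumption; the paper may weight terms differently.
    return pixel_bce(p, g) + map_iou(p, g) + metric_fmeasure(p, g)
```

A perfect prediction (`p == g`) drives all three terms toward zero, while the IoU and F-measure terms penalize structural mismatch more strongly than per-pixel BCE does, which is the usual motivation for combining them.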
Related papers
- Texture-Semantic Collaboration Network for ORSI Salient Object Detection [13.724588317778753]
We propose a concise yet effective Texture-Semantic Collaboration Network (TSCNet) to explore the collaboration of texture cues and semantic cues for ORSI-SOD.
TSCNet is based on the generic encoder-decoder structure and includes a vital Texture-Semantic Collaboration Module (TSCM)
Our TSCNet achieves competitive performance compared to 14 state-of-the-art methods.
arXiv Detail & Related papers (2023-12-06T15:26:38Z) - A lightweight multi-scale context network for salient object detection
in optical remote sensing images [16.933770557853077]
We propose a multi-scale context network, namely MSCNet, for salient object detection in optical RSIs.
Specifically, a multi-scale context extraction module is adopted to address the scale variation of salient objects.
In order to accurately detect complete salient objects in complex backgrounds, we design an attention-based pyramid feature aggregation mechanism.
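The attention-based pyramid feature aggregation above is only named, not specified. As a loose, hypothetical sketch of the general idea (per-level attention weights over pyramid features resized to a common resolution; none of these function names or the mean-activation scoring come from the paper):

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pyramid_aggregate(features):
    # features: list of (H, W) maps from different pyramid levels,
    # assumed already resized to a common resolution.
    stack = np.stack(features)                   # (L, H, W)
    # Hypothetical attention: score each level by its mean activation,
    # then softmax-normalize the scores into per-level weights.
    scores = stack.mean(axis=(1, 2))             # (L,)
    weights = softmax(scores)                    # (L,)
    # Weighted sum over levels yields the aggregated map.
    return np.tensordot(weights, stack, axes=1)  # (H, W)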
arXiv Detail & Related papers (2022-05-18T14:32:47Z) - Adjacent Context Coordination Network for Salient Object Detection in
Optical Remote Sensing Images [102.75699068451166]
We propose a novel Adjacent Context Coordination Network (ACCoNet) to explore the coordination of adjacent features in an encoder-decoder architecture for optical RSI-SOD.
The proposed ACCoNet outperforms 22 state-of-the-art methods under nine evaluation metrics, and runs up to 81 fps on a single NVIDIA Titan X GPU.
arXiv Detail & Related papers (2022-03-25T14:14:55Z) - Learning Deep Context-Sensitive Decomposition for Low-Light Image
Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but such frameworks disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z) - Specificity-preserving RGB-D Saliency Detection [103.3722116992476]
We propose a specificity-preserving network (SP-Net) for RGB-D saliency detection.
Two modality-specific networks and a shared learning network are adopted to generate individual and shared saliency maps.
Experiments on six benchmark datasets demonstrate that our SP-Net outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2021-08-18T14:14:22Z) - A Parallel Down-Up Fusion Network for Salient Object Detection in
Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z) - MACU-Net for Semantic Segmentation of Fine-Resolution Remotely Sensed
Images [11.047174552053626]
MACU-Net is a multi-scale skip connected and asymmetric-convolution-based U-Net for fine-resolution remotely sensed images.
Our design has the following advantages: (1) The multi-scale skip connections combine and realign semantic features contained in both low-level and high-level feature maps; (2) the asymmetric convolution block strengthens the feature representation and feature extraction capability of a standard convolution layer.
Experiments conducted on two remotely sensed datasets demonstrate that the proposed MACU-Net outperforms U-Net, U-NetPPL, and U-Net 3+, among other benchmark approaches.
arXiv Detail & Related papers (2020-07-26T08:56:47Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.