Attention Guided Network for Salient Object Detection in Optical Remote
Sensing Images
- URL: http://arxiv.org/abs/2207.01755v1
- Date: Tue, 5 Jul 2022 01:01:03 GMT
- Title: Attention Guided Network for Salient Object Detection in Optical Remote
Sensing Images
- Authors: Yuhan Lin, Han Sun, Ningzhong Liu, Yetong Bian, Jun Cen, Huiyu Zhou
- Abstract summary: salient object detection in optical remote sensing images (RSI-SOD) is a very difficult task.
We propose a novel Attention Guided Network (AGNet) for SOD in optical RSIs, consisting of a position enhancement stage and a detail refinement stage.
AGNet achieves competitive performance compared to other state-of-the-art methods.
- Score: 16.933770557853077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the extreme complexity of scale and shape as well as the uncertainty
of the predicted location, salient object detection in optical remote sensing
images (RSI-SOD) is a very difficult task. Existing SOD methods achieve
satisfactory detection performance on natural scene images, but they are not
well adapted to RSI-SOD because of the above-mentioned characteristics of
remote sensing images. In this paper, we propose a novel Attention Guided
Network (AGNet) for SOD in optical RSIs, consisting of a position enhancement
stage and a detail refinement stage. Specifically, the position enhancement
stage combines a semantic attention module and a contextual attention module to
accurately locate the approximate positions of salient objects. The detail
refinement stage uses the proposed self-refinement module to progressively
refine the predicted results under the guidance of attention and reverse
attention. In addition, a hybrid loss is applied to supervise the training of
the network, improving the model from the three perspectives of pixel, region,
and statistics. Extensive experiments on two popular benchmarks
demonstrate that AGNet achieves competitive performance compared to other
state-of-the-art methods. The code will be available at
https://github.com/NuaaYH/AGNet.
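To make the two ideas above more concrete, the following is a minimal PyTorch-style sketch of (i) a refinement step guided by attention and reverse attention and (ii) a hybrid loss covering the pixel, region, and statistics perspectives. It is not the official AGNet implementation: the names SelfRefinementBlock and hybrid_loss, and the choice of BCE, soft IoU, and an SSIM-style term, are assumptions based on common SOD practice rather than details given in the abstract.

```python
# Illustrative sketch only, NOT the AGNet release code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfRefinementBlock(nn.Module):
    """Refine decoder features with a coarse prediction as guidance.

    Assumed formulation: foreground attention = sigmoid(coarse map),
    reverse attention = 1 - sigmoid(coarse map); both re-weight the
    features before a residual correction of the coarse map.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.predict = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        coarse = F.interpolate(coarse, size=feat.shape[-2:], mode="bilinear",
                               align_corners=False)
        attn = torch.sigmoid(coarse)        # emphasise confident foreground regions
        rev_attn = 1.0 - attn               # emphasise missed and boundary regions
        fused = self.fuse(torch.cat([feat * attn, feat * rev_attn], dim=1))
        return coarse + self.predict(fused)  # residual refinement of the coarse map


def hybrid_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Hybrid supervision sketch: BCE (pixel level), soft IoU (region level),
    and an SSIM-style term (statistics level). The exact terms used by AGNet
    are not stated in the abstract; this combination follows common SOD practice."""
    bce = F.binary_cross_entropy_with_logits(pred, gt)

    prob = torch.sigmoid(pred)
    inter = (prob * gt).sum(dim=(2, 3))
    union = (prob + gt - prob * gt).sum(dim=(2, 3))
    iou = 1.0 - (inter + 1.0) / (union + 1.0)

    # Statistics-level term: compare local means/variances with a box window.
    mu_p, mu_g = F.avg_pool2d(prob, 11, 1, 5), F.avg_pool2d(gt, 11, 1, 5)
    var_p = F.avg_pool2d(prob * prob, 11, 1, 5) - mu_p ** 2
    var_g = F.avg_pool2d(gt * gt, 11, 1, 5) - mu_g ** 2
    cov = F.avg_pool2d(prob * gt, 11, 1, 5) - mu_p * mu_g
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
    stat = 1.0 - ssim.mean()

    return bce + iou.mean() + stat
```

In such a setup, the refined map produced by each SelfRefinementBlock would typically be supervised with hybrid_loss against the ground-truth mask, in addition to the final prediction.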
Related papers
- D-YOLO a robust framework for object detection in adverse weather conditions [0.0]
Adverse weather conditions, including haze, snow, and rain, degrade image quality, which often causes a drop in performance for deep-learning-based detection networks.
To better integrate image restoration and object detection tasks, we designed a double-route network with an attention feature fusion module.
We also propose a subnetwork that provides haze-free features to the detection network. Specifically, D-YOLO improves the detection network's performance by minimizing the distance between the clear feature extraction subnetwork and the detection network.
arXiv Detail & Related papers (2024-03-14T09:57:15Z)
- Frequency Perception Network for Camouflaged Object Detection [51.26386921922031]
We propose a novel learnable and separable frequency perception mechanism driven by the semantic hierarchy in the frequency domain.
Our entire network adopts a two-stage model, including a frequency-guided coarse localization stage and a detail-preserving fine localization stage.
Compared with existing models, our proposed method achieves competitive performance on three popular benchmark datasets.
arXiv Detail & Related papers (2023-08-17T11:30:46Z)
- Adaptive Rotated Convolution for Rotated Object Detection [96.94590550217718]
We present the Adaptive Rotated Convolution (ARC) module to handle the rotated object detection problem.
In our ARC module, the convolution kernels rotate adaptively to extract object features with varying orientations in different images.
The proposed approach achieves state-of-the-art performance on the DOTA dataset with 81.77% mAP.
arXiv Detail & Related papers (2023-03-14T11:53:12Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- A lightweight multi-scale context network for salient object detection in optical remote sensing images [16.933770557853077]
We propose a multi-scale context network, namely MSCNet, for salient object detection in optical RSIs.
Specifically, a multi-scale context extraction module is adopted to address the scale variation of salient objects.
In order to accurately detect complete salient objects in complex backgrounds, we design an attention-based pyramid feature aggregation mechanism.
arXiv Detail & Related papers (2022-05-18T14:32:47Z)
- RRNet: Relational Reasoning Network with Parallel Multi-scale Attention for Salient Object Detection in Optical Remote Sensing Images [82.1679766706423]
Salient object detection (SOD) for optical remote sensing images (RSIs) aims at locating and extracting visually distinctive objects/regions from the optical RSIs.
We propose a relational reasoning network with parallel multi-scale attention for SOD in optical RSIs.
Our proposed RRNet outperforms the existing state-of-the-art SOD competitors both qualitatively and quantitatively.
arXiv Detail & Related papers (2021-10-27T07:18:32Z)
- CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images [0.9462808515258465]
In this paper, we discuss the role of discriminative features in object detection.
We then propose a Critical Feature Capturing Network (CFC-Net) to improve detection accuracy.
We show that our method achieves superior detection performance compared with many state-of-the-art approaches.
arXiv Detail & Related papers (2021-01-18T02:31:09Z)
- Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images [193.77450545067967]
We propose an end-to-end Dense Attention Fluid Network (DAFNet) for salient object detection in optical remote sensing images (RSIs).
A Global Context-aware Attention (GCA) module is proposed to adaptively capture long-range semantic context relationships.
We construct a new and challenging optical RSI dataset for SOD that contains 2,000 images with pixel-wise saliency annotations.
arXiv Detail & Related papers (2020-11-26T06:14:10Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.