SACANet: scene-aware class attention network for semantic segmentation
of remote sensing images
- URL: http://arxiv.org/abs/2304.11424v1
- Date: Sat, 22 Apr 2023 14:54:31 GMT
- Title: SACANet: scene-aware class attention network for semantic segmentation
of remote sensing images
- Authors: Xiaowen Ma, Rui Che, Tingfeng Hong, Mengting Ma, Ziyan Zhao, Tian Feng
and Wei Zhang
- Abstract summary: We propose a scene-aware class attention network (SACANet) for semantic segmentation of remote sensing images.
Experimental results on three datasets show that SACANet outperforms other state-of-the-art methods and validate its effectiveness.
- Score: 4.124381172041927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The spatial attention mechanism has been widely used in semantic
segmentation of remote sensing images, given its capability to model long-range
dependencies. Many methods adopting the spatial attention mechanism aggregate
contextual information using direct relationships between pixels within an
image, while ignoring the scene awareness of pixels (i.e., being aware of the
global context of the scene where the pixels are located and perceiving their
relative positions). Given the observation that scene awareness benefits
context modeling with spatial correlations of ground objects, we design a
scene-aware attention module based on a refined spatial attention mechanism
that embeds scene awareness. In addition, we present a local-global class
attention mechanism to address the problem that the general attention mechanism
introduces excessive background noise while hardly considering the large
intra-class variance in remote sensing images. In this paper, we integrate both
scene-aware and class attentions to propose a scene-aware class attention
network (SACANet) for semantic segmentation of remote sensing images.
Experimental results on three datasets show that SACANet outperforms other
state-of-the-art methods and validate its effectiveness. Code is available at
https://github.com/xwmaxwma/rssegmentation.
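
The implementation lives in the repository above. As a rough, illustrative sketch of the class-attention idea the abstract describes (pixels attending to a small set of per-class representations instead of to every other pixel, which limits background noise), one might write something like the following in PyTorch; all names here (ClassAttention, coarse_logits, etc.) are hypothetical and not taken from the authors' code.

```python
import torch
import torch.nn as nn

class ClassAttention(nn.Module):
    """Illustrative class attention: pixels attend to per-class centers
    pooled from a coarse segmentation, instead of to all other pixels."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Linear(channels, channels)
        self.value = nn.Linear(channels, channels)

    def forward(self, feats: torch.Tensor, coarse_logits: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); coarse_logits: (B, K, H, W)
        b, c, h, w = feats.shape
        # Soft class assignment per pixel, used to pool one center per class.
        probs = coarse_logits.softmax(dim=1).flatten(2)           # (B, K, HW)
        pixels = feats.flatten(2).transpose(1, 2)                 # (B, HW, C)
        centers = torch.bmm(probs, pixels)                        # (B, K, C)
        centers = centers / (probs.sum(-1, keepdim=True) + 1e-6)  # normalize

        q = self.query(feats).flatten(2).transpose(1, 2)          # (B, HW, C)
        k = self.key(centers)                                     # (B, K, C)
        v = self.value(centers)                                   # (B, K, C)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)  # (B, HW, K)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feats + out                                        # residual connection
```

A scene-aware variant, as the abstract suggests, would additionally condition this attention on a global scene representation and on pixels' relative positions; those details are in the paper and repository.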
Related papers
- Spotlight Attention: Robust Object-Centric Learning With a Spatial
Locality Prior [88.9319150230121]
Object-centric vision aims to construct an explicit representation of the objects in a scene.
We incorporate a spatial-locality prior into state-of-the-art object-centric vision models.
We obtain significant improvements in segmenting objects in both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-31T04:35:50Z) - Local-Aware Global Attention Network for Person Re-Identification Based on Body and Hand Images [0.0]
We propose a compound approach to end-to-end discriminative deep feature learning for person re-identification based on both body and hand images.
The proposed method consistently outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2022-09-11T09:43:42Z) - Bi-directional Object-context Prioritization Learning for Saliency
Ranking [60.62461793691836]
Existing approaches focus on learning either object-object or object-scene relations.
We observe that spatial attention works concurrently with object-based attention in the human visual recognition system.
We propose a novel bi-directional method to unify spatial attention and object-based attention for saliency ranking.
arXiv Detail & Related papers (2022-03-17T16:16:03Z) - Learning to ignore: rethinking attention in CNNs [87.01305532842878]
We propose to reformulate the attention mechanism in CNNs to learn to ignore instead of learning to attend.
Specifically, we propose to explicitly learn irrelevant information in the scene and suppress it in the produced representation.
arXiv Detail & Related papers (2021-11-10T13:47:37Z) - Implicit and Explicit Attention for Zero-Shot Learning [11.66422653137002]
We propose implicit and explicit attention mechanisms to address the bias problem in Zero-Shot Learning (ZSL) models.
We conduct comprehensive experiments on three popular benchmarks: AWA2, CUB and SUN.
arXiv Detail & Related papers (2021-10-02T18:06:21Z) - Instance-aware Remote Sensing Image Captioning with Cross-hierarchy
Attention [11.23821696220285]
Spatial attention is a straightforward way to enhance performance in remote sensing image captioning.
We propose a remote sensing image caption generator with instance awareness and cross-hierarchy attention.
arXiv Detail & Related papers (2021-05-11T12:59:07Z) - Rethinking of the Image Salient Object Detection: Object-level Semantic
Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions for pixel-wise saliency refinement.
Our method is simple yet effective; it is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z) - Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z) - Remote Sensing Image Scene Classification Meets Deep Learning:
Challenges, Methods, Benchmarks, and Opportunities [81.29441139530844]
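A minimal sketch of pairwise co-attention as summarized in the entry above, assuming standard query-key-value attention between the feature maps of two related images (CoAttention and its internals are hypothetical, not the authors' code):

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Illustrative co-attention: augment one image's features with
    context retrieved from a related image, so semantics shared by
    both images (co-occurring object patterns) are emphasized."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, fa: torch.Tensor, fb: torch.Tensor) -> torch.Tensor:
        # fa, fb: (B, C, H, W) features of two related images
        b, c, h, w = fa.shape
        qa = self.proj(fa).flatten(2).transpose(1, 2)     # (B, HW, C)
        kb = self.proj(fb).flatten(2)                     # (B, C, HW)
        affinity = torch.softmax(qa @ kb / c ** 0.5, -1)  # (B, HW, HW)
        vb = fb.flatten(2).transpose(1, 2)                # (B, HW, C)
        ctx = (affinity @ vb).transpose(1, 2).reshape(b, c, h, w)
        return fa + ctx  # image A's features enriched with B's context
```

- Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities [81.29441139530844]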
This paper provides a systematic survey of deep learning methods for remote sensing image scene classification by covering more than 160 papers.
We discuss the main challenges of remote sensing image scene classification and survey existing methods.
We introduce the benchmarks used for remote sensing image scene classification and summarize the performance of more than two dozen representative algorithms.
arXiv Detail & Related papers (2020-05-03T14:18:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.