Feature Transformation for Cross-domain Few-shot Remote Sensing Scene
Classification
- URL: http://arxiv.org/abs/2203.02270v1
- Date: Fri, 4 Mar 2022 12:42:03 GMT
- Title: Feature Transformation for Cross-domain Few-shot Remote Sensing Scene
Classification
- Authors: Qiaoling Chen, Zhihao Chen, Wei Luo
- Abstract summary: We propose the feature-wise transformation module (FTM) for remote sensing scene classification.
FTM transfers the feature distribution learned on the source domain to that of the target domain via a very simple affine operation.
Experiments on RSSC and land-cover mapping tasks verified its capability to handle cross-domain few-shot problems.
- Score: 7.0845385224286055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Effectively classifying remote sensing scenes remains a challenge due to the
increasing spatial resolution of remote imaging and the large variance among
remote sensing images. Existing research has greatly improved the performance
of remote sensing scene classification (RSSC). However, these methods are not
applicable to cross-domain few-shot problems, where the target domain has very
limited training samples available and a data distribution different from that
of the source domain. To improve the model's applicability, we propose the
feature-wise transformation module (FTM) in this paper. FTM transfers the
feature distribution learned on the source domain to that of the target domain
through a very simple affine operation with negligible additional parameters.
Moreover, FTM can be effectively learned on the target domain even when only a
few training samples are available, and it is agnostic to specific network
structures. Experiments on RSSC and land-cover mapping tasks verify its
capability to handle cross-domain few-shot problems. Compared with direct
fine-tuning, FTM achieves better performance and possesses better
transferability and fine-grained discriminability. Code will be publicly
available.
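The abstract describes FTM as a simple channel-wise affine operation inserted into a pretrained backbone, with only its scale and shift parameters trained on the scarce target-domain data. A minimal NumPy sketch of that idea is below; the class name, near-identity initialization, and parameter shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

class FeatureTransformModule:
    """Sketch of an FTM-style layer: a per-channel affine transform
    (gamma * x + beta) applied to feature maps, intended to shift the
    source-domain feature distribution toward the target domain.
    Only 2 * C parameters per layer, so it can be fit on few samples."""

    def __init__(self, num_channels, rng=None):
        rng = rng or np.random.default_rng(0)
        # Assumed near-identity initialization so the pretrained
        # source-domain behavior is preserved before adaptation.
        self.gamma = 1.0 + 0.01 * rng.standard_normal(num_channels)
        self.beta = 0.01 * rng.standard_normal(num_channels)

    def __call__(self, feats):
        # feats: (batch, channels, height, width) feature maps.
        # Reshape parameters for broadcasting over batch and space.
        g = self.gamma.reshape(1, -1, 1, 1)
        b = self.beta.reshape(1, -1, 1, 1)
        return g * feats + b
```

In this sketch the backbone stays frozen and only `gamma` and `beta` would be updated on target-domain samples, which is consistent with the abstract's claims of negligible added parameters and architecture-agnosticism.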
Related papers
- S4DL: Shift-sensitive Spatial-Spectral Disentangling Learning for Hyperspectral Image Unsupervised Domain Adaptation [73.90209847296839]
Unsupervised domain adaptation techniques, extensively studied in hyperspectral image (HSI) classification, aim to exploit labeled source-domain data together with unlabeled target-domain data.
We propose shift-sensitive spatial-spectral disentangling learning (S4DL) approach.
Experiments on several cross-scene HSI datasets consistently verified that S4DL is better than the state-of-the-art UDA methods.
arXiv Detail & Related papers (2024-08-11T15:58:24Z)
- Adaptive Semantic Consistency for Cross-domain Few-shot Classification [27.176106714652327]
Cross-domain few-shot classification (CD-FSC) aims to identify novel target classes with a few samples.
We propose a simple plug-and-play Adaptive Semantic Consistency framework, which improves cross-domain robustness.
The proposed ASC enables explicit transfer of source domain knowledge to prevent the model from overfitting the target domain.
arXiv Detail & Related papers (2023-08-01T15:37:19Z)
- Self-Training Guided Disentangled Adaptation for Cross-Domain Remote Sensing Image Semantic Segmentation [20.07907723950031]
We propose a self-training guided disentangled adaptation network (ST-DASegNet) for cross-domain RS image semantic segmentation task.
We first propose source student backbone and target student backbone to respectively extract the source-style and target-style feature for both source and target images.
We then propose a domain disentangled module to extract the universal feature and purify the distinct feature of source-style and target-style features.
arXiv Detail & Related papers (2023-01-13T13:11:22Z)
- Multi-Scale Multi-Target Domain Adaptation for Angle Closure Classification [50.658613573816254]
We propose a novel Multi-scale Multi-target Domain Adversarial Network (M2DAN) for angle closure classification.
Based on these domain-invariant features at different scales, the deep model trained on the source domain is able to classify angle closure on multiple target domains.
arXiv Detail & Related papers (2022-08-25T15:27:55Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers [1.3766148734487902]
Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain by assigning weak knowledge from a bag of source models.
We propose to embed Multi-Source version of DomaIn Alignment Layers (MS-DIAL) at different levels of the predictor.
Our approach can improve state-of-the-art MSDA methods, yielding relative gains of up to +30.64% on their classification accuracies.
arXiv Detail & Related papers (2021-09-06T18:41:19Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.