Dual Classification Head Self-training Network for Cross-scene Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2502.17879v1
- Date: Tue, 25 Feb 2025 06:07:46 GMT
- Title: Dual Classification Head Self-training Network for Cross-scene Hyperspectral Image Classification
- Authors: Rong Liu, Junye Liang, Jiaqi Yang, Jiang He, Peng Zhu
- Abstract summary: Cross-scene classification has emerged as a widely adopted approach in the remote sensing community. Variations in the reflectance spectrum of the same object between the SD and the TD, as well as differences in the feature distribution of the same land cover class, pose significant challenges to the performance of cross-scene classification. We introduce a dual classification head self-training strategy for the first time in the cross-scene HSI classification field.
- Score: 7.482363472369229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the difficulty of obtaining labeled data for hyperspectral images (HSIs), cross-scene classification has emerged as a widely adopted approach in the remote sensing community. It involves training a model using labeled data from a source domain (SD) and unlabeled data from a target domain (TD), followed by inference on the TD. However, variations in the reflectance spectrum of the same object between the SD and the TD, as well as differences in the feature distribution of the same land cover class, pose significant challenges to the performance of cross-scene classification. To address this issue, we propose a dual classification head self-training network (DHSNet). This method aligns class-wise features across domains, ensuring that the trained classifier can accurately classify TD data of different classes. We introduce a dual classification head self-training strategy for the first time in the cross-scene HSI classification field. The proposed approach mitigates the domain gap while preventing the accumulation of incorrect pseudo-labels in the model. Additionally, we incorporate a novel central feature attention mechanism to enhance the model's capacity to learn scene-invariant features across domains. Experimental results on three cross-scene HSI datasets demonstrate that the proposed DHSNet significantly outperforms other state-of-the-art approaches. The code for DHSNet will be available at https://github.com/liurongwhm.
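The abstract describes DHSNet only at a high level, so the snippet below is a minimal, hypothetical PyTorch sketch of the general dual-head self-training idea rather than the authors' implementation: a shared backbone feeds one head supervised by source-domain labels and a second head trained on confident target-domain pseudo-labels, which is one way to keep incorrect pseudo-labels from feeding straight back into the source-supervised classifier. The module and function names (`DualHeadNet`, `self_training_step`, `conf_thresh`) are illustrative assumptions, and the central feature attention mechanism is omitted.

```python
# Hypothetical sketch of dual-head self-training (not the official DHSNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualHeadNet(nn.Module):
    def __init__(self, in_bands: int, num_classes: int, feat_dim: int = 128):
        super().__init__()
        # Simple per-pixel spectral feature extractor; the real backbone and the
        # central feature attention mechanism are not described in the abstract.
        self.backbone = nn.Sequential(
            nn.Linear(in_bands, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        self.head_src = nn.Linear(feat_dim, num_classes)  # supervised by SD labels
        self.head_tgt = nn.Linear(feat_dim, num_classes)  # trained on TD pseudo-labels

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.head_src(feats), self.head_tgt(feats)


def self_training_step(model, opt, x_src, y_src, x_tgt, conf_thresh=0.95):
    """One training step: supervised loss on the source domain plus a pseudo-label
    loss on target samples that the source head predicts with high confidence."""
    model.train()
    _, logits_src, _ = model(x_src)
    loss = F.cross_entropy(logits_src, y_src)

    # Generate pseudo-labels with the source head, without tracking gradients.
    with torch.no_grad():
        _, tgt_logits_from_src_head, _ = model(x_tgt)
        conf, pseudo = tgt_logits_from_src_head.softmax(dim=1).max(dim=1)
        mask = conf > conf_thresh  # keep only confident pseudo-labels

    if mask.any():
        # Train the second head on the confident subset only.
        _, _, logits_tgt = model(x_tgt[mask])
        loss = loss + F.cross_entropy(logits_tgt, pseudo[mask])

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Example usage with random data (100 spectral bands, 4 classes):
# model = DualHeadNet(in_bands=100, num_classes=4)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# self_training_step(model, opt,
#                    torch.randn(32, 100), torch.randint(0, 4, (32,)),
#                    torch.randn(32, 100))
```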
Related papers
- EIANet: A Novel Domain Adaptation Approach to Maximize Class Distinction with Neural Collapse Principles [15.19374752514876]
Source-free domain adaptation (SFDA) aims to transfer knowledge from a labelled source domain to an unlabelled target domain.
A major challenge in SFDA is deriving accurate categorical information for the target domain.
We introduce a novel ETF-Informed Attention Network (EIANet) to separate class prototypes.
arXiv Detail & Related papers (2024-07-23T05:31:05Z)
- CDAD-Net: Bridging Domain Gaps in Generalized Category Discovery [9.505699498746976]
Generalized Category Discovery (GCD) aims to cluster unlabeled samples of both known and novel classes.
We present Across Domain Generalized Category Discovery (AD-GCD) and bring forth CDAD-NET as a remedy.
CDAD-NET is architected to synchronize potential known class samples across both the labeled (source) and unlabeled (target) datasets.
Experimentally, CDAD-NET eclipses existing literature with a performance increment of 8-15% on three AD-GCD benchmarks we present.
arXiv Detail & Related papers (2024-04-08T10:05:24Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Cross-Domain Few-Shot Classification via Inter-Source Stylization [11.008292768447614]
Cross-Domain Few-Shot Classification (CDFSC) aims to accurately classify a target dataset with limited labelled data.
This paper proposes a solution that makes use of multiple source domains without the need for additional labeling costs.
arXiv Detail & Related papers (2022-08-17T01:44:32Z)
- Distribution Regularized Self-Supervised Learning for Domain Adaptation of Semantic Segmentation [3.284878354988896]
This paper proposes a pixel-level distribution regularization scheme (DRSL) for self-supervised domain adaptation of semantic segmentation.
In a typical setting, the classification loss forces the semantic segmentation model to greedily learn the representations that capture inter-class variations.
We capture pixel-level intra-class variations through class-aware multi-modal distribution learning.
arXiv Detail & Related papers (2022-06-20T09:52:49Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency by combining a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)