Dual-branch PolSAR Image Classification Based on GraphMAE and Local Feature Extraction
- URL: http://arxiv.org/abs/2408.04294v1
- Date: Thu, 8 Aug 2024 08:17:50 GMT
- Title: Dual-branch PolSAR Image Classification Based on GraphMAE and Local Feature Extraction
- Authors: Yuchen Wang, Ziyi Guo, Haixia Bi, Danfeng Hong, Chen Xu
- Abstract summary: In this paper, we propose a dual-branch classification model based on generative self-supervised learning.
The first branch is a superpixel-branch, which learns superpixel-level polarimetric representations using a generative self-supervised graph masked autoencoder.
To obtain finer classification results, a convolutional neural network-based pixel-branch is further incorporated to learn pixel-level features.
- Score: 22.39266854681996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The annotation of polarimetric synthetic aperture radar (PolSAR) images is a labor-intensive and time-consuming process. Therefore, classifying PolSAR images with limited labels is a challenging task in the remote sensing domain. In recent years, self-supervised learning approaches have proven effective in PolSAR image classification with sparse labels. However, we observe a lack of research on generative self-supervised learning in the studied task. Motivated by this, we propose a dual-branch classification model based on generative self-supervised learning in this paper. The first branch is a superpixel-branch, which learns superpixel-level polarimetric representations using a generative self-supervised graph masked autoencoder. To obtain finer classification results, a convolutional neural network-based pixel-branch is further incorporated to learn pixel-level features. Classification with fused dual-branch features is finally performed to obtain the predictions. Experimental results on the benchmark Flevoland dataset demonstrate that our approach yields promising classification results.
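As a rough illustration of the fusion described in the abstract, below is a minimal PyTorch sketch of a dual-branch classifier that combines superpixel-level graph features with pixel-level CNN features. The GraphMAE pretraining and the superpixel segmentation are omitted, and all module names, layer sizes, and toy inputs are assumptions of this note, not the authors' implementation.

```python
# Minimal sketch of the dual-branch fusion idea: a graph branch over
# superpixels and a CNN branch over pixels, fused for per-pixel classification.
import torch
import torch.nn as nn

class SuperpixelGraphBranch(nn.Module):
    """One-layer graph encoder: features are propagated over a row-normalised
    superpixel adjacency matrix (stand-in for a pretrained GraphMAE encoder)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                      # x: (N, in_dim), adj: (N, N)
        deg = adj.sum(1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj / deg) @ x))

class PixelBranch(nn.Module):
    """Small CNN that learns pixel-level polarimetric features."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.ReLU())

    def forward(self, img):                         # img: (B, in_ch, H, W)
        return self.net(img)

class DualBranchClassifier(nn.Module):
    def __init__(self, in_ch, sp_dim, n_classes, feat=32):
        super().__init__()
        self.sp_branch = SuperpixelGraphBranch(sp_dim, feat)
        self.px_branch = PixelBranch(in_ch, feat)
        self.head = nn.Conv2d(2 * feat, n_classes, 1)

    def forward(self, img, sp_feats, adj, sp_map):
        # sp_map: (H, W) integer map assigning each pixel to its superpixel.
        sp = self.sp_branch(sp_feats, adj)          # (N, feat)
        sp_per_pixel = sp[sp_map].permute(2, 0, 1)  # broadcast to (feat, H, W)
        px = self.px_branch(img)                    # (1, feat, H, W)
        fused = torch.cat([px, sp_per_pixel.unsqueeze(0)], dim=1)
        return self.head(fused)                     # (1, n_classes, H, W)

# Toy forward pass: 6-channel polarimetric input, 20 superpixels, 5 classes.
H, W, N = 64, 64, 20
model = DualBranchClassifier(in_ch=6, sp_dim=6, n_classes=5)
logits = model(torch.randn(1, 6, H, W), torch.randn(N, 6),
               torch.eye(N), torch.randint(0, N, (H, W)))
print(logits.shape)  # torch.Size([1, 5, 64, 64])
```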
Related papers
- Multilayer deep feature extraction for visual texture recognition [0.0]
This paper is focused on improving the accuracy of convolutional neural networks in texture classification.
This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating them with Fisher vectors.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
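A hedged sketch of that pipeline, assuming ResNet-18 stages as the chosen layers and a simplified (mean-gradient-only) Fisher vector; the layer choice, the crude truncation to a common dimension, and the GMM size are illustrative assumptions, not taken from the paper.

```python
# Collect per-location descriptors from several CNN stages, then encode them
# with a simplified Fisher vector built from a diagonal GMM.
import numpy as np
import torch
import torchvision.models as models
from sklearn.mixture import GaussianMixture

def multilayer_descriptors(img, layers=("layer2", "layer3")):
    # weights=None keeps the sketch offline; in practice pretrained weights
    # would be loaded, as the entry above assumes.
    net = models.resnet18(weights=None).eval()
    feats = {}
    hooks = [getattr(net, name).register_forward_hook(
        lambda m, i, o, n=name: feats.__setitem__(n, o)) for name in layers]
    with torch.no_grad():
        net(img)
    for h in hooks:
        h.remove()
    descs = [f.squeeze(0).flatten(1).T for f in feats.values()]   # (HW, C) each
    # Crudely truncate each layer to a common dimension (a real pipeline
    # would use a projection such as PCA instead).
    d = min(x.shape[1] for x in descs)
    return torch.cat([x[:, :d] for x in descs]).numpy()

def fisher_vector(descs, n_gauss=8):
    gmm = GaussianMixture(n_gauss, covariance_type="diag").fit(descs)
    gamma = gmm.predict_proba(descs)                              # (T, K)
    diff = descs[:, None, :] - gmm.means_[None]                   # (T, K, D)
    fv = (gamma[..., None] * diff / np.sqrt(gmm.covariances_)[None]).sum(0)
    fv /= (len(descs) * np.sqrt(gmm.weights_)[:, None])
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                        # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-8)

x = torch.randn(1, 3, 224, 224)                                   # stand-in image
print(fisher_vector(multilayer_descriptors(x)).shape)
```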
arXiv Detail & Related papers (2022-08-22T03:53:43Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- Learning Hierarchical Graph Representation for Image Manipulation Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt sophisticated convolutional neural networks (CNNs) to capture the tampering artifacts left in images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z)
- A Contrastive Learning Approach to Auroral Identification and Classification [0.8399688944263843]
We present a novel application of unsupervised learning to the task of auroral image classification.
We modify and adapt the Simple framework for Contrastive Learning of Representations (SimCLR) algorithm to learn representations of auroral images.
Our approach exceeds an established threshold for operational purposes, demonstrating readiness for deployment and utilization.
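For reference, a minimal sketch of the NT-Xent contrastive loss that SimCLR is built around; the temperature and toy batch size here are arbitrary, not the authors' settings.

```python
# NT-Xent loss over two augmented views of the same batch of images.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2N, D) unit vectors
    sim = z @ z.T / temperature                      # scaled cosine similarities
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    # The positive for sample i is its other augmented view: index i+n (mod 2n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```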
arXiv Detail & Related papers (2021-09-28T17:51:25Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Convolutional Neural Networks from Image Markers [62.997667081978825]
Feature Learning from Image Markers (FLIM) was recently proposed to estimate convolutional filters, with no backpropagation, from strokes drawn by a user on very few images.
This paper extends FLIM for fully connected layers and demonstrates it on different image classification problems.
The results show that FLIM-based convolutional neural networks can outperform the same architecture trained from scratch by backpropagation.
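A rough sketch of that idea under assumptions of mine (patch size, k-means clustering of marker patches): convolutional filters are estimated from patches around user-marked pixels with no backpropagation. This is an illustration, not the FLIM authors' code.

```python
# Estimate a small filter bank from user-marked pixels, then apply it as a
# convolution layer.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def filters_from_markers(image, marker_coords, patch=5, n_filters=4):
    """image: (C, H, W) array; marker_coords: (row, col) pixels marked by a user.
    Patches around the markers are normalised and clustered; the cluster
    centres become convolution kernels."""
    C, H, W = image.shape
    r = patch // 2
    patches = []
    for y, x in marker_coords:
        if r <= y < H - r and r <= x < W - r:
            patches.append(image[:, y - r:y + r + 1, x - r:x + r + 1].ravel())
    patches = np.stack(patches)
    patches -= patches.mean(axis=1, keepdims=True)       # per-patch normalisation
    patches /= np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8
    centres = KMeans(n_clusters=n_filters, n_init=10).fit(patches).cluster_centers_
    return torch.tensor(centres, dtype=torch.float32).reshape(n_filters, C, patch, patch)

img = np.random.rand(3, 64, 64).astype(np.float32)                  # toy image
marks = [(10, 12), (30, 40), (50, 20), (22, 22), (45, 50), (8, 55)] # toy strokes
bank = filters_from_markers(img, marks)
feats = F.conv2d(torch.from_numpy(img)[None], bank, padding=2)      # apply the bank
print(feats.shape)  # torch.Size([1, 4, 64, 64])
```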
arXiv Detail & Related papers (2020-12-15T22:58:23Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated from the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
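A minimal sketch of the pixel-wise attention enhancement step described above; the layer sizes and single-channel attention design are assumptions of this note, not the AMEN architecture.

```python
# Enhance an input image with a pixel-wise attention map from a small branch.
import torch
import torch.nn as nn

class PixelAttentionEnhancer(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Former branch: produces a single-channel attention map in [0, 1].
        self.attn = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, img):
        a = self.attn(img)              # (B, 1, H, W) pixel-wise attention map
        return img * a, a               # enhanced image emphasises subtle detail

enhanced, attn_map = PixelAttentionEnhancer(3)(torch.randn(2, 3, 96, 96))
print(enhanced.shape, attn_map.shape)
```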
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
- PolSAR Image Classification Based on Robust Low-Rank Feature Extraction and Markov Random Field [44.59934840513234]
We present a novel PolSAR image classification method which removes speckle noise via low-rank (LR) feature extraction and enforces smoothness priors via a Markov random field (MRF).
Experimental results indicate that the proposed method achieves promising classification performance and preferable spatial consistency.
arXiv Detail & Related papers (2020-09-13T07:38:12Z)
- Active Ensemble Deep Learning for Polarimetric Synthetic Aperture Radar Image Classification [10.80252725670625]
In this letter, we take advantage of active learning and propose active ensemble deep learning (AEDL) for PolSAR image classification.
We show that only 35% of the labels predicted by a deep learning model's snapshots near convergence were exactly the same.
Using the snapshot committee to assess the informativeness of unlabeled data, the proposed AEDL achieved better performance on two real PolSAR images than standard active learning strategies.
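A toy sketch of the snapshot-committee idea, assuming vote entropy as the disagreement measure (the paper may score informativeness differently); the numbers are illustrative.

```python
# Score unlabeled samples by how much the training snapshots disagree on them.
import numpy as np

def vote_entropy(snapshot_preds, n_classes):
    """snapshot_preds: (n_snapshots, n_samples) hard labels from each snapshot.
    Returns per-sample vote entropy; higher means more committee disagreement."""
    n_snap, n_samp = snapshot_preds.shape
    votes = np.zeros((n_samp, n_classes))
    for preds in snapshot_preds:
        votes[np.arange(n_samp), preds] += 1
    p = votes / n_snap
    return -(p * np.log(p + 1e-12)).sum(axis=1)

preds = np.random.randint(0, 4, size=(5, 1000))      # 5 snapshots, 4 classes
scores = vote_entropy(preds, n_classes=4)
query = np.argsort(scores)[-10:]                     # most informative samples to label
print(query)
```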
arXiv Detail & Related papers (2020-06-29T01:40:54Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
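A hedged sketch of the decoupled second step: with features from a separate self-supervised model held fixed, a small clustering head is trained with a SCAN-style consistency loss between feature-space nearest neighbours. Sizes and the entropy weight are illustrative assumptions.

```python
# SCAN-style clustering loss: neighbours should receive consistent cluster
# assignments, with an entropy term to avoid collapse to a single cluster.
import torch
import torch.nn as nn
import torch.nn.functional as F

def scan_loss(p_anchor, p_neighbor, entropy_weight=5.0):
    """p_*: (B, K) softmax cluster probabilities of anchors and their mined
    feature-space nearest neighbours."""
    consistency = -torch.log((p_anchor * p_neighbor).sum(1) + 1e-8).mean()
    mean_p = p_anchor.mean(0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum()
    return consistency - entropy_weight * entropy

head = nn.Linear(128, 10)                            # clustering head, K=10 clusters
feats = torch.randn(32, 128)                         # frozen self-supervised features
neigh = feats + 0.05 * torch.randn_like(feats)       # stand-in mined neighbours
loss = scan_loss(F.softmax(head(feats), 1), F.softmax(head(neigh), 1))
print(loss.item())
```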
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.