Self Supervised Learning for Few Shot Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2206.12117v1
- Date: Fri, 24 Jun 2022 07:21:53 GMT
- Title: Self Supervised Learning for Few Shot Hyperspectral Image Classification
- Authors: Nassim Ait Ali Braham, Lichao Mou, Jocelyn Chanussot, Julien Mairal,
Xiao Xiang Zhu
- Abstract summary: We propose to leverage Self Supervised Learning (SSL) for HSI classification.
We show that by pre-training an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels.
- Score: 57.2348804884321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has proven to be a very effective approach for Hyperspectral
Image (HSI) classification. However, deep neural networks require large
annotated datasets to generalize well. This limits the applicability of deep
learning for HSI classification, where manually labelling thousands of pixels
for every scene is impractical. In this paper, we propose to leverage Self
Supervised Learning (SSL) for HSI classification. We show that by pre-training
an encoder on unlabeled pixels using Barlow-Twins, a state-of-the-art SSL
algorithm, we can obtain accurate models with a handful of labels. Experimental
results demonstrate that this approach significantly outperforms vanilla
supervised learning.
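The Barlow Twins objective mentioned above drives the empirical cross-correlation matrix between the embeddings of two augmented views toward the identity: the diagonal term enforces invariance across views, and the off-diagonal term reduces redundancy between embedding dimensions. A minimal NumPy sketch of the loss (the trade-off weight `lam` and the epsilon are illustrative defaults, not values from the paper):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins objective: push the cross-correlation matrix of two
    embedding views toward the identity.
    z_a, z_b: (N, D) embeddings of two augmented views of the same batch."""
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Empirical cross-correlation matrix, shape (D, D).
    c = z_a.T @ z_b / n
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)            # invariance term
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # redundancy reduction
    return on_diag + lam * off_diag
```

For identical views the diagonal of the correlation matrix is already near one, so the loss is dominated by the small off-diagonal term; for unrelated views the invariance term is large.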
Related papers
- Deep Active Learning Using Barlow Twins [0.0]
The generalisation performance of a convolutional neural network (CNN) is largely determined by the quantity, quality, and diversity of the training images.
The goal of active learning is to draw the most informative samples from the unlabeled pool.
We propose Deep Active Learning using Barlow Twins (DALBT), an active learning method applicable across datasets.
arXiv Detail & Related papers (2022-12-30T12:39:55Z) - Self-Supervised PPG Representation Learning Shows High Inter-Subject
Variability [3.8036939971290007]
We propose a Self-Supervised Learning (SSL) method with a pretext task of signal reconstruction to learn an informative generalized PPG representation.
Results show that in a very limited label data setting (10 samples per class or less), using SSL is beneficial.
SSL may pave the way for the broader use of machine learning models on PPG data in label-scarce regimes.
arXiv Detail & Related papers (2022-12-07T19:02:45Z) - Can semi-supervised learning reduce the amount of manual labelling
required for effective radio galaxy morphology classification? [0.0]
We test whether SSL can achieve performance comparable to the current supervised state of the art when using many fewer labelled data points.
We find that although SSL provides additional regularisation, its performance degrades rapidly when using very few labels.
arXiv Detail & Related papers (2021-11-08T09:36:48Z) - Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
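SCARF forms contrastive views of tabular data by corrupting a random subset of each sample's features, replacing them with values drawn from that feature's empirical marginal distribution. A minimal NumPy sketch of the view-generation step (the corruption rate and column-shuffling implementation are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def scarf_corrupt(x, corruption_rate=0.6, rng=None):
    """SCARF-style view: replace a random subset of each sample's features
    with values drawn from the empirical marginal of that feature.
    x: (N, D) tabular batch; returns a corrupted copy of the same shape."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    # Boolean mask marking which features to corrupt in each sample.
    mask = rng.random((n, d)) < corruption_rate
    # Sample replacements from each feature's marginal by shuffling
    # every column independently.
    replacements = np.stack([rng.permutation(x[:, j]) for j in range(d)], axis=1)
    return np.where(mask, replacements, x)
```

The original and corrupted versions of a sample then serve as the positive pair in a standard contrastive loss such as InfoNCE.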
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Deep Active Learning for Joint Classification & Segmentation with Weak
Annotator [22.271760669551817]
CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
We propose an active learning framework, which progressively integrates pixel-level annotations during training.
Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAMs and AL methods.
arXiv Detail & Related papers (2020-10-10T03:25:54Z) - Remote Sensing Image Scene Classification with Self-Supervised Paradigm
under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-training model for RSI scene classification from large amounts of unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrate that this new learning paradigm outperforms the traditional, dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z) - Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z) - SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z) - AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning [112.95742995816367]
We propose a new few-shot learning setting termed few-shot few-shot learning (FSFSL).
Under FSFSL, both the source and target classes have limited training samples.
We also propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images.
arXiv Detail & Related papers (2020-02-28T10:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.