Cross-View-Prediction: Exploring Contrastive Feature for Hyperspectral
Image Classification
- URL: http://arxiv.org/abs/2203.07000v1
- Date: Mon, 14 Mar 2022 11:07:33 GMT
- Title: Cross-View-Prediction: Exploring Contrastive Feature for Hyperspectral
Image Classification
- Authors: Haotian Wu, Anyu Zhang and Zeyu Cao
- Abstract summary: This paper presents a self-supervised feature learning method for hyperspectral image classification.
Our method constructs two different views of the raw hyperspectral image through a cross-representation learning method,
and then learns a semantically consistent representation over the created views by contrastive learning.
- Score: 9.131465469247608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a self-supervised feature learning method for
hyperspectral image classification. Our method constructs two different views
of the raw hyperspectral image through a cross-representation learning method,
and then learns a semantically consistent representation over the created views
by contrastive learning. Specifically, four cross-channel-prediction based
augmentation methods are designed to exploit the high-dimensional nature of
hyperspectral data for view construction. More representative features are then
learned by maximizing mutual information and minimizing conditional entropy
across the different views with our contrastive network. This
'Cross-View-Prediction' style is straightforward and achieves state-of-the-art
unsupervised classification performance with a simple SVM classifier.
Related papers
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of
Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z) - Self-supervised Contrastive Learning for Cross-domain Hyperspectral
Image Representation [26.610588734000316]
This paper introduces a self-supervised learning framework suitable for hyperspectral images that are inherently challenging to annotate.
The proposed framework leverages a cross-domain CNN, allowing representations to be learned from different hyperspectral images.
The experimental results demonstrate the advantage of the proposed self-supervised representation over models trained from scratch or other transfer learning methods.
arXiv Detail & Related papers (2022-02-08T16:16:45Z) - Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z) - Maximizing Mutual Information Across Feature and Topology Views for
Learning Graph Representations [25.756202627564505]
We propose a novel approach by exploiting mutual information across feature and topology views.
Our proposed method can achieve comparable or even better performance under the unsupervised representation and linear evaluation protocol.
arXiv Detail & Related papers (2021-05-14T08:49:40Z) - Saliency-driven Class Impressions for Feature Visualization of Deep
Neural Networks [55.11806035788036]
It is advantageous to visualize the features considered to be essential for classification.
Existing visualization methods develop high confidence images consisting of both background and foreground features.
In this work, we propose a saliency-driven approach to visualize discriminative features that are considered most important for a given task.
arXiv Detail & Related papers (2020-07-31T06:11:06Z) - Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
arXiv Detail & Related papers (2020-07-21T04:03:22Z) - Unsupervised Learning of Visual Features by Contrasting Cluster
Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
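The "consistency between cluster assignments" idea in the SwAV summary can be sketched as a swapped-prediction objective: the cluster code computed from one view supervises the prediction made from the other view. The snippet below is a heavily simplified illustration under stated assumptions (it omits SwAV's Sinkhorn equipartition step and uses a sharp softmax as the code instead); the function names, temperatures, and toy prototypes are hypothetical.

```python
import math

def softmax(scores, temp):
    """Numerically stable softmax with temperature."""
    m = max(scores)
    exps = [math.exp((s - m) / temp) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy(q, p):
    """Cross-entropy of prediction p against target code q."""
    return -sum(qi * math.log(pi) for qi, pi in zip(q, p))

def swapped_prediction_loss(z1, z2, prototypes, code_temp=0.05, pred_temp=0.1):
    """Simplified swapped-prediction loss over two views of one sample.
    A sharp softmax stands in for the Sinkhorn-derived codes of SwAV."""
    def sims(z):
        # dot-product similarity of a feature to each prototype
        return [sum(a * b for a, b in zip(z, c)) for c in prototypes]
    q1 = softmax(sims(z1), code_temp)   # "code" from view 1 (sharp target)
    q2 = softmax(sims(z2), code_temp)   # "code" from view 2
    p1 = softmax(sims(z1), pred_temp)   # prediction from view 1 (softer)
    p2 = softmax(sims(z2), pred_temp)   # prediction from view 2
    # swap: each view's code supervises the other view's prediction
    return cross_entropy(q1, p2) + cross_entropy(q2, p1)
```

When the two views land near the same prototype the swapped loss is small; when they fall on different prototypes it grows, which is the consistency signal the summary describes.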
arXiv Detail & Related papers (2020-06-17T14:00:42Z) - Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.