Self-supervised Contrastive Learning for Cross-domain Hyperspectral
Image Representation
- URL: http://arxiv.org/abs/2202.03968v1
- Date: Tue, 8 Feb 2022 16:16:45 GMT
- Title: Self-supervised Contrastive Learning for Cross-domain Hyperspectral
Image Representation
- Authors: Hyungtae Lee and Heesung Kwon
- Abstract summary: This paper introduces a self-supervised learning framework suitable for hyperspectral images that are inherently challenging to annotate.
The proposed framework architecture leverages a cross-domain CNN, allowing it to learn representations from different hyperspectral images.
The experimental results demonstrate the advantage of the proposed self-supervised representation over models trained from scratch or other transfer learning methods.
- Score: 26.610588734000316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, self-supervised learning has attracted attention due to its
remarkable ability to acquire meaningful representations for classification
tasks without using semantic labels. This paper introduces a self-supervised
learning framework suitable for hyperspectral images that are inherently
challenging to annotate. The proposed framework architecture leverages a
cross-domain CNN, allowing it to learn representations from different
hyperspectral images with varying spectral characteristics and no pixel-level
annotation. In the framework, cross-domain representations are learned via
contrastive learning where neighboring spectral vectors in the same image are
clustered together in a common representation space encompassing multiple
hyperspectral images. In contrast, spectral vectors in different hyperspectral
images are separated into distinct clusters in the space. To verify that the
learned representation through contrastive learning is effectively transferred
into a downstream task, we perform a classification task on hyperspectral
images. The experimental results demonstrate the advantage of the proposed
self-supervised representation over models trained from scratch or other
transfer learning methods.
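The clustering behavior described in the abstract maps naturally onto an InfoNCE-style objective. The sketch below is a minimal, hedged illustration of that idea, not the authors' published code: the encoder outputs, the neighborhood-based positive sampling, and the function name are all assumptions.
```python
import torch
import torch.nn.functional as F

def spectral_infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style sketch: pull an embedded spectral vector toward a
    spatially neighboring vector from the same hyperspectral image
    (positive) and push it away from vectors drawn from other images
    (negatives). Shapes and sampling strategy are assumptions.

    anchor:    (B, D) embedded spectral vectors
    positive:  (B, D) embeddings of neighboring vectors, same image
    negatives: (B, K, D) embeddings sampled from different images
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True)   # (B, 1)
    neg_sim = torch.einsum('bd,bkd->bk', anchor, negatives)   # (B, K)

    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```
Under such an objective, neighboring spectral vectors collapse into the same cluster while vectors from other images are repelled, matching the behavior the abstract describes.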
Related papers
- Unsupervised Feature Clustering Improves Contrastive Representation
Learning for Medical Image Segmentation [18.75543045234889]
Self-supervised instance discrimination is an effective contrastive pretext task to learn feature representations and address limited medical image annotations.
We propose a new self-supervised contrastive learning method that uses unsupervised feature clustering to better select positive and negative image samples.
Our method outperforms state-of-the-art self-supervised contrastive techniques on these tasks.
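For illustration only, clustering-guided pair selection can be sketched as follows; the k-means routine, cluster count, and mask construction are assumptions, not the paper's published procedure.
```python
import torch

def cluster_pair_masks(features, num_clusters=10, iters=20):
    """Toy k-means over instance embeddings: samples sharing a cluster are
    treated as candidate positives, samples in other clusters as negatives.
    A generic sketch of clustering-guided pair selection.

    features: (N, D) tensor of instance embeddings
    returns:  (N, N) boolean mask, True where a pair is a candidate positive
    """
    idx = torch.randperm(features.size(0))[:num_clusters]
    centroids = features[idx].clone()
    for _ in range(iters):
        # Assign each sample to its nearest centroid, then recompute means.
        assign = torch.cdist(features, centroids).argmin(dim=1)
        for k in range(num_clusters):
            members = features[assign == k]
            if members.numel() > 0:
                centroids[k] = members.mean(dim=0)
    # Pairs in the same cluster are positives; the rest serve as negatives.
    return assign.unsqueeze(0) == assign.unsqueeze(1)
```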
arXiv Detail & Related papers (2022-11-15T22:54:29Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of
Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- Cross-View-Prediction: Exploring Contrastive Feature for Hyperspectral
Image Classification [9.131465469247608]
This paper presents a self-supervised feature learning method for hyperspectral image classification.
Our method constructs two different views of the raw hyperspectral image through a cross-representation learning method, and then learns semantically consistent representations over the created views via contrastive learning.
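A generic two-view agreement objective (NT-Xent-style) gives a feel for the second step; the paper's actual cross-representation view construction is not reproduced here, and the function name and temperature are assumptions.
```python
import torch
import torch.nn.functional as F

def two_view_contrastive_loss(z1, z2, temperature=0.5):
    """Symmetric contrastive agreement between two views of the same batch.
    A generic NT-Xent-style sketch, not the paper's exact formulation.

    z1, z2: (B, D) embeddings of two views of the same B samples
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    B = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # (2B, D)
    sim = z @ z.t() / temperature             # (2B, 2B) cosine logits
    sim.fill_diagonal_(float('-inf'))         # a view never matches itself
    # Each view's positive is its counterpart in the other half of the batch.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets.to(sim.device))
```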
arXiv Detail & Related papers (2022-03-14T11:07:33Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Dense Semantic Contrast for Self-Supervised Visual Representation
Learning [12.636783522731392]
We present Dense Semantic Contrast (DSC) for modeling semantic category decision boundaries at a dense level.
We propose a dense cross-image semantic contrastive learning framework for multi-granularity representation learning.
Experimental results show that our DSC model outperforms state-of-the-art methods when transferring to downstream dense prediction tasks.
arXiv Detail & Related papers (2021-09-16T07:04:05Z)
- Spatially Consistent Representation Learning [12.120041613482558]
We propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks.
We devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region.
On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements.
arXiv Detail & Related papers (2021-03-10T15:23:45Z)
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
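The compose-on-the-fly idea can be sketched as selecting the top-scoring layers and stacking their upsampled maps; how the relevance scores are computed is not shown here, and all names and shapes below are assumptions rather than the paper's implementation.
```python
import torch
import torch.nn.functional as F

def compose_hypercolumn(feature_maps, layer_scores, top_k=4):
    """Pick the top-k most relevant CNN layers and concatenate their
    upsampled feature maps into a hypercolumn. An illustrative sketch of
    layer selection only; score computation is assumed to exist upstream.

    feature_maps: list of (B, C_i, H_i, W_i) tensors from a deep CNN
    layer_scores: (num_layers,) relevance scores conditioned on the images
    """
    k = min(top_k, len(feature_maps))
    idx = torch.topk(layer_scores, k).indices.tolist()
    target = feature_maps[0].shape[-2:]  # upsample to the first map's size
    cols = [F.interpolate(feature_maps[i], size=target, mode='bilinear',
                          align_corners=False) for i in idx]
    return torch.cat(cols, dim=1)  # (B, sum of selected C_i, H, W)
```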
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-arts on all these settings, demonstrating well its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
- Learning Representations by Predicting Bags of Visual Words [55.332200948110895]
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data.
Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions.
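As a rough, hedged sketch of the BoW-prediction idea (codebook construction, shapes, and function names are assumptions, not the paper's pipeline):
```python
import torch
import torch.nn.functional as F

def bow_targets(local_feats, codebook):
    """Quantize dense local descriptors against a visual-word codebook and
    build a normalized bag-of-words histogram as the prediction target.

    local_feats: (B, N, D) dense descriptors from a teacher convnet
    codebook:    (K, D) visual words, e.g., k-means centroids of features
    returns:     (B, K) word histograms the student learns to predict
    """
    B = local_feats.size(0)
    dists = torch.cdist(local_feats, codebook.unsqueeze(0).expand(B, -1, -1))
    assign = dists.argmin(dim=-1)                                   # (B, N)
    return F.one_hot(assign, codebook.size(0)).float().mean(dim=1)  # (B, K)

def bow_prediction_loss(pred_logits, targets):
    """Cross-entropy between the predicted word distribution and the target."""
    return -(targets * F.log_softmax(pred_logits, dim=-1)).sum(-1).mean()
```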
arXiv Detail & Related papers (2020-02-27T16:45:25Z)