Domain Adaptable Self-supervised Representation Learning on Remote
Sensing Satellite Imagery
- URL: http://arxiv.org/abs/2304.09874v1
- Date: Wed, 19 Apr 2023 14:32:36 GMT
- Title: Domain Adaptable Self-supervised Representation Learning on Remote
Sensing Satellite Imagery
- Authors: Muskaan Chopra, Prakash Chandra Chhipa, Gopal Mengi, Varun Gupta and
Marcus Liwicki
- Abstract summary: This work presents a novel domain adaptation paradigm for studying contrastive self-supervised representation learning and knowledge transfer using remote sensing satellite data.
The proposed approach investigates the knowledge transfer of self-supervised representations across distinct source and target data distributions.
Experiments are conducted on three publicly available datasets, UC Merced Landuse (UCMD), SIRI-WHU, and MLRSNet.
- Score: 2.796274924103132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a novel domain adaptation paradigm for studying contrastive
self-supervised representation learning and knowledge transfer using remote
sensing satellite data. Major state-of-the-art remote sensing visual domain
efforts primarily focus on fully supervised learning approaches that rely
entirely on human annotations. In contrast, human annotations for remote
sensing satellite imagery are limited in quantity due to their high cost and
the domain expertise they require, making transfer learning a viable
alternative. The proposed approach investigates in depth the knowledge transfer
of self-supervised representations across distinct source and target data
distributions in the remote sensing domain. In this arrangement, self-supervised
contrastive learning-based pretraining is performed on the source dataset, and
downstream tasks are performed on the target datasets in a round-robin fashion.
Experiments are conducted on three publicly available datasets, UC Merced
Landuse (UCMD), SIRI-WHU, and MLRSNet, for downstream classification tasks at
different levels of label efficiency. In self-supervised knowledge transfer, the
proposed approach achieves state-of-the-art performance in label efficiency and
outperforms a fully supervised setting. A more in-depth qualitative
examination reveals consistent evidence for explainable representation
learning. The source code and trained models are published on GitHub.
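As a concrete illustration of the pretraining stage described in the abstract, the sketch below assumes a SimCLR-style contrastive setup (two augmented views per image, a ResNet encoder with a projection head, and an NT-Xent loss) on the unlabeled source dataset. The abstract does not name the exact contrastive framework, so the encoder, augmentations, and hyperparameters here are illustrative placeholders rather than the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Two randomly augmented "views" of each unlabeled satellite image form a positive pair.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

class ContrastiveEncoder(nn.Module):
    """ResNet backbone plus a small projection head (SimCLR-style)."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep the 2048-d backbone features
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, proj_dim),
        )

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of positive pairs (z1[i], z2[i])."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # cosine-similarity logits
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                # drop self-similarity
    # The positive of sample i is sample i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# One pretraining step on a batch of unlabeled source images:
#   z1, z2 = model(view1), model(view2)   # view1/view2 = `augment` applied twice
#   loss = nt_xent(z1, z2); loss.backward(); optimizer.step()
```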
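The round-robin transfer and label-efficiency evaluation described in the abstract can be summarized by the skeleton below. The dataset names come from the abstract; the label fractions and the `pretrain`/`linear_probe` routines are assumed stubs for illustration, not the authors' evaluation code.

```python
# Datasets named in the abstract; label fractions are illustrative assumptions.
DATASETS = ["UCMD", "SIRI-WHU", "MLRSNet"]
LABEL_FRACTIONS = [0.01, 0.10, 0.50, 1.00]

def pretrain(source_name):
    """Contrastive self-supervised pretraining on the unlabeled source dataset
    (e.g., an NT-Xent loop); returns a feature encoder. Stub."""
    ...

def linear_probe(encoder, target_name, fraction):
    """Train a classifier on `fraction` of the target labels on top of the
    frozen encoder and return test accuracy. Stub."""
    ...

results = {}
for source in DATASETS:                      # round-robin over source datasets
    encoder = pretrain(source)
    for target in DATASETS:
        if target == source:
            continue                         # transfer only to *distinct* targets
        for frac in LABEL_FRACTIONS:
            results[(source, target, frac)] = linear_probe(encoder, target, frac)
```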
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z) - Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z) - CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z) - Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z) - In-Domain Self-Supervised Learning Improves Remote Sensing Image Scene
Classification [5.323049242720532]
Self-supervised learning has emerged as a promising approach for remote sensing image classification.
We present a study of different self-supervised pre-training strategies and evaluate their effect across 14 downstream datasets.
arXiv Detail & Related papers (2023-07-04T10:57:52Z) - Extending global-local view alignment for self-supervised learning with remote sensing imagery [1.5192294544599656]
Self-supervised models acquire general feature representations by formulating a pretext task that generates pseudo-labels for massive unlabeled data.
Inspired by DINO, we formulate two pretext tasks for self-supervised learning on remote sensing imagery (SSLRS).
We extend DINO and propose DINO-MC, which uses local views of crops of various sizes instead of a single fixed size.
arXiv Detail & Related papers (2023-03-12T14:24:10Z) - Evaluating the Label Efficiency of Contrastive Self-Supervised Learning
for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z) - Clustering augmented Self-Supervised Learning: An application to Land
Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z) - Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote
Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z) - Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting with only a trained classification model available, instead of access to the source data.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
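The last entry above concerns adapting a source-trained classifier to an unlabeled target domain without access to the source data. As a rough sketch of that setting, the code below follows the common hypothesis-transfer recipe popularized by SHOT (freeze the classifier head and adapt only the feature extractor with an information-maximization objective); this is an illustrative assumption, not the cited paper's exact method.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits, eps=1e-6):
    """Encourage confident per-sample predictions and a diverse label marginal."""
    p = F.softmax(logits, dim=1)                              # (B, C)
    entropy = -(p * torch.log(p + eps)).sum(dim=1).mean()     # low entropy per sample
    marginal = p.mean(dim=0)                                  # (C,) batch-level marginal
    diversity = (marginal * torch.log(marginal + eps)).sum()  # high entropy over classes
    return entropy + diversity

def adapt(feature_extractor, classifier, target_loader, optimizer, epochs=1):
    """Source-free adaptation: the classifier (hypothesis) stays frozen and only
    the feature extractor is updated on unlabeled target batches."""
    for p in classifier.parameters():
        p.requires_grad_(False)
    for _ in range(epochs):
        for x in target_loader:                               # unlabeled target images
            loss = information_maximization_loss(classifier(feature_extractor(x)))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In practice the optimizer would be built over the feature extractor's parameters only, and methods in this family usually add a pseudo-label refinement term on top of the objective above.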
This list is automatically generated from the titles and abstracts of the papers on this site.