Evaluating the Label Efficiency of Contrastive Self-Supervised Learning
for Multi-Resolution Satellite Imagery
- URL: http://arxiv.org/abs/2210.06786v1
- Date: Thu, 13 Oct 2022 06:54:13 GMT
- Title: Evaluating the Label Efficiency of Contrastive Self-Supervised Learning
for Multi-Resolution Satellite Imagery
- Authors: Jules BOURCIER (Thoth), Gohar Dashyan, Jocelyn Chanussot (Thoth),
Karteek Alahari (Thoth)
- Abstract summary: Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of deep neural networks to remote sensing imagery is often
constrained by the lack of ground-truth annotations. Addressing this issue
requires models that generalize efficiently from limited amounts of labeled
data, allowing us to tackle a wider range of Earth observation tasks. Another
challenge in this domain is developing algorithms that operate at variable
spatial resolutions, e.g., for the problem of classifying land use at different
scales. Recently, self-supervised learning has been applied in the remote
sensing domain to exploit readily-available unlabeled data, and was shown to
reduce or even close the gap with supervised learning. In this paper, we study
self-supervised visual representation learning through the lens of label
efficiency, for the task of land use classification on
multi-resolution/multi-scale satellite images. We benchmark two contrastive
self-supervised methods adapted from Momentum Contrast (MoCo) and provide
evidence that these methods perform effectively given little downstream
supervision, where randomly initialized networks fail to generalize. Moreover,
they outperform out-of-domain pretraining alternatives. We use the large-scale
fMoW dataset to pretrain and evaluate the networks, and validate our
observations with transfer to the RESISC45 dataset.
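The paper's benchmarked methods adapt Momentum Contrast (MoCo). The exact pipeline is not reproduced here, but the core InfoNCE objective that MoCo optimizes can be sketched as follows; the embedding dimension, queue size, and temperature value are illustrative assumptions, not values from the paper:

```python
import numpy as np

def info_nce_loss(q, k_pos, queue, temperature=0.07):
    """InfoNCE loss as used in MoCo-style contrastive pretraining.

    q:      (d,) L2-normalized query embedding
    k_pos:  (d,) L2-normalized positive key (an augmented view of the same image)
    queue:  (K, d) L2-normalized negative keys from the momentum queue
    """
    l_pos = q @ k_pos                      # similarity with the positive key
    l_neg = queue @ q                      # (K,) similarities with negatives
    logits = np.concatenate(([l_pos], l_neg)) / temperature
    # Cross-entropy with the positive at index 0
    logits -= logits.max()                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

# Toy usage: a query close to its positive and far from the queue yields a low loss.
rng = np.random.default_rng(0)
def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

q = normalize(rng.normal(size=8))
k_pos = normalize(q + 0.05 * rng.normal(size=8))   # near-duplicate view
queue = normalize(rng.normal(size=(16, 8)))        # random negative keys
loss = info_nce_loss(q, k_pos, queue)
```

In the label-efficiency setting the paper studies, an encoder pretrained with such an objective is then fine-tuned or linearly probed on small labeled subsets, which is where the gap with randomly initialized networks appears.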
Related papers
- Terrain-Informed Self-Supervised Learning: Enhancing Building Footprint Extraction from LiDAR Data with Limited Annotations [1.3243401820948064]
Building footprint maps promise precise footprint extraction without extensive post-processing.
Deep learning methods face challenges in generalization and label efficiency.
We propose terrain-aware self-supervised learning tailored to remote sensing.
arXiv Detail & Related papers (2023-11-02T12:34:23Z) - In-Domain Self-Supervised Learning Improves Remote Sensing Image Scene
Classification [5.323049242720532]
Self-supervised learning has emerged as a promising approach for remote sensing image classification.
We present a study of different self-supervised pre-training strategies and evaluate their effect across 14 downstream datasets.
arXiv Detail & Related papers (2023-07-04T10:57:52Z) - Domain Adaptable Self-supervised Representation Learning on Remote
Sensing Satellite Imagery [2.796274924103132]
This work presents a novel domain paradigm for studying contrastive self-supervised representation learning and knowledge transfer using remote sensing satellite data.
The proposed approach investigates the knowledge transfer of self-supervised representations across distinct source and target data distributions.
Experiments are conducted on three publicly available datasets, UC Merced Landuse (UCMD), SIRI-WHU, and MLRSNet.
arXiv Detail & Related papers (2023-04-19T14:32:36Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for face recognition task in which the source and target domain do not share any classes.
Our method effectively learns the discriminative target feature by aligning the feature domain globally and, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Clustering augmented Self-Supervised Learning: An application to Land
Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z) - Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring
Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z) - Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote
Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z) - Learning a Domain-Agnostic Visual Representation for Autonomous Driving
via Contrastive Loss [25.798361683744684]
Domain-Agnostic Contrastive Learning (DACL) is a two-stage unsupervised domain adaptation framework with cyclic adversarial training and contrastive loss.
Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-10T07:06:03Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.