Joint multi-modal Self-Supervised pre-training in Remote Sensing:
Application to Methane Source Classification
- URL: http://arxiv.org/abs/2306.09851v1
- Date: Fri, 16 Jun 2023 14:01:57 GMT
- Authors: Paul Berg, Minh-Tan Pham, and Nicolas Courty
- Abstract summary: In earth observation, there are opportunities to exploit domain-specific remote sensing image data.
We briefly review the core principles behind so-called joint-embeddings methods and investigate the usage of multiple remote sensing modalities in self-supervised pre-training.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the current ubiquity of deep learning methods to solve computer vision
and remote sensing specific tasks, the need for labelled data is growing
constantly. However, the annotation process can be long and tedious, depending
on the expertise required to produce reliable annotations. In
order to alleviate this need for annotations, several self-supervised methods
have recently been proposed in the literature. The core principle behind these
methods is to learn an image encoder using solely unlabelled data samples. In
earth observation, there are opportunities to exploit domain-specific remote
sensing image data in order to improve these methods. Specifically, by
leveraging the geographical position associated with each image, it is possible
to cross-reference a location captured by multiple sensors, yielding multiple
views of the same location. In this paper, we briefly review the core
principles behind so-called joint-embeddings methods and investigate the usage
of multiple remote sensing modalities in self-supervised pre-training. We
evaluate the final performance of the resulting encoders on the task of methane
source classification.
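The joint-embedding principle reviewed in the paper, pulling together the embeddings of co-registered views of the same location captured by different sensors, can be illustrated with a contrastive InfoNCE-style objective. The following is a minimal NumPy sketch under assumed inputs (one batch of encoder outputs per modality); the batch size, embedding dimension, and temperature are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss between embeddings of two co-registered modalities
    (e.g. optical and SAR views of the same locations).
    Row i of z_a and row i of z_b form the positive pair."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # cross-entropy on the diagonal

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 32))                   # modality A embeddings
aligned = anchor + 0.01 * rng.normal(size=(8, 32))  # modality B, same locations
shuffled = rng.normal(size=(8, 32))                 # modality B, unrelated batch
print(info_nce(anchor, aligned), info_nce(anchor, shuffled))
```

When the two batches are geographically aligned, the loss is much lower than for an unrelated batch, which is what drives the encoders to agree across sensors.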
Related papers
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Networks (CNN) models that are capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- Clustering augmented Self-Supervised Learning: An application to Land Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z)
- Representation Learning for Remote Sensing: An Unsupervised Sensor Fusion Approach [0.0]
We propose Contrastive Sensor Fusion, which exploits coterminous data from multiple sources to learn useful representations of every possible combination of those sources.
Using a dataset of 47 million unlabeled coterminous image triplets, we train an encoder to produce meaningful representations from any possible combination of channels from the input sensors.
These representations outperform fully supervised ImageNet weights on a remote sensing classification task and improve as more sensors are fused.
arXiv Detail & Related papers (2021-08-11T08:32:58Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
By exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery, pre-training is performed in a completely label-free manner.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data [64.40187171234838]
Seasonal Contrast (SeCo) is an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations.
SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
arXiv Detail & Related papers (2021-03-30T18:26:39Z)
- Universal Representation Learning from Multiple Domains for Few-shot Classification [41.821234589075445]
We propose to learn a single set of universal deep representations by distilling knowledge of multiple separately trained networks.
We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step.
arXiv Detail & Related papers (2021-03-25T13:49:12Z)
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z)
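The Contrastive Sensor Fusion entry above claims representations from "any possible combination of channels from the input sensors". One simple way to realise such subset-invariance is to embed each channel separately and pool. The toy NumPy sketch below is a hypothetical illustration of that idea, not the paper's architecture: the shared projection `W` stands in for a learned encoder, and mean pooling is only one of several pooling choices.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 16)) / 8.0  # shared per-channel projection (toy stand-in for an encoder)

def encode(channels):
    """Map any subset of sensor channels into one representation space
    by mean-pooling per-channel embeddings (order- and count-invariant)."""
    embs = np.stack([np.tanh(c @ W) for c in channels])  # (k, 16)
    return embs.mean(axis=0)

rgb = [rng.normal(size=64) for _ in range(3)]  # e.g. optical channels
sar = [rng.normal(size=64) for _ in range(2)]  # e.g. radar channels
print(encode(rgb + sar).shape, encode(sar).shape)  # same space regardless of subset
```

Because pooling is symmetric, the representation lives in the same space whether two or five channels are available, which is what lets a downstream classifier work with whichever sensors are fused.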
This list is automatically generated from the titles and abstracts of the papers in this site.