Self-Supervised Pretraining on Satellite Imagery: a Case Study on
Label-Efficient Vehicle Detection
- URL: http://arxiv.org/abs/2210.11815v1
- Date: Fri, 21 Oct 2022 08:41:22 GMT
- Title: Self-Supervised Pretraining on Satellite Imagery: a Case Study on
Label-Efficient Vehicle Detection
- Authors: Jules BOURCIER (Thoth), Thomas Floquet, Gohar Dashyan, Tugdual
Ceillier, Karteek Alahari (Thoth), Jocelyn Chanussot (Thoth)
- Abstract summary: We study in-domain self-supervised representation learning for object detection on very high resolution optical satellite imagery.
We use the large land use classification dataset Functional Map of the World to pretrain representations with an extension of the Momentum Contrast framework.
We then investigate this model's transferability on a real-world task of fine-grained vehicle detection and classification on Preligens proprietary data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In defense-related remote sensing applications, such as vehicle detection on
satellite imagery, supervised learning requires a huge number of labeled
examples to reach operational performance. Such data are challenging to obtain,
as they require military experts, and some observables are intrinsically rare.
This limited labeling capability, together with the large number of unlabeled
images available from the growing number of sensors, makes object detection on
remote sensing imagery highly relevant for self-supervised learning. We study
in-domain self-supervised representation learning for object detection on very
high resolution optical satellite imagery, a setting that remains poorly explored. For the
first time to our knowledge, we study the problem of label efficiency on this
task. We use the large land use classification dataset Functional Map of the
World to pretrain representations with an extension of the Momentum Contrast
framework. We then investigate this model's transferability on a real-world
task of fine-grained vehicle detection and classification on Preligens
proprietary data, which is designed to be representative of an operational use
case of strategic site surveillance. We show that our in-domain self-supervised
learning model is competitive with ImageNet pretraining, and outperforms it in
the low-label regime.
Related papers
- Leveraging Self-Supervised Instance Contrastive Learning for Radar
Object Detection [7.728838099011661]
This paper presents RiCL, an instance contrastive learning framework to pre-train radar object detectors.
We aim to pre-train the object detector's backbone, neck, and head so that it learns from fewer labeled examples.
arXiv Detail & Related papers (2024-02-13T12:53:33Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for
Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few examples for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning
for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Improving performance of aircraft detection in satellite imagery while
limiting the labelling effort: Hybrid active learning [0.9379652654427957]
In the defense domain, aircraft detection on satellite imagery is a valuable tool for analysts.
We propose a hybrid clustering active learning method to select the most relevant data to label.
We show that this method can provide better or competitive results compared to other active learning methods.
arXiv Detail & Related papers (2022-02-10T08:24:07Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
As a result, performance drops drastically when models are evaluated on label-scarce datasets with visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z)
- Active learning for object detection in high-resolution satellite images [1.6500749121196985]
This study aims at reviewing the most relevant active learning techniques to be used for object detection on very high resolution imagery.
It shows an example of the value of such techniques on a relevant operational use case: aircraft detection.
arXiv Detail & Related papers (2021-01-07T10:57:38Z)
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Co-training for On-board Deep Object Detection [0.0]
Best performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes.
Co-training is a semi-supervised learning method for self-labeling objects in unlabeled images.
We show that co-training is a paradigm worth pursuing to alleviate object labeling, both on its own and in combination with task-agnostic domain adaptation.
arXiv Detail & Related papers (2020-08-12T19:08:59Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and
Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)