WildSAT: Learning Satellite Image Representations from Wildlife Observations
- URL: http://arxiv.org/abs/2412.14428v1
- Date: Thu, 19 Dec 2024 00:52:25 GMT
- Title: WildSAT: Learning Satellite Image Representations from Wildlife Observations
- Authors: Rangel Daroya, Elijah Cole, Oisin Mac Aodha, Grant Van Horn, Subhransu Maji
- Abstract summary: We introduce WildSAT, which pairs satellite images with millions of geo-tagged observations readily-available on citizen science platforms.
We demonstrate that WildSAT achieves better representations than recent methods that utilize other forms of cross-modal supervision.
- Score: 33.660389502623644
- License:
- Abstract: What does the presence of a species reveal about a geographic location? We posit that habitat, climate, and environmental preferences reflected in species distributions provide a rich source of supervision for learning satellite image representations. We introduce WildSAT, which pairs satellite images with millions of geo-tagged wildlife observations readily-available on citizen science platforms. WildSAT uses a contrastive learning framework to combine information from species distribution maps with text descriptions that capture habitat and range details, alongside satellite images, to train or fine-tune models. On a range of downstream satellite image recognition tasks, this significantly improves the performance of both randomly initialized models and pre-trained models from sources like ImageNet or specialized satellite image datasets. Additionally, the alignment with text enables zero-shot retrieval, allowing for search based on general descriptions of locations. We demonstrate that WildSAT achieves better representations than recent methods that utilize other forms of cross-modal supervision, such as aligning satellite images with ground images or wildlife photos. Finally, we analyze the impact of various design choices on downstream performance, highlighting the general applicability of our approach.
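The abstract describes a contrastive learning framework that aligns satellite-image embeddings with embeddings of paired wildlife observations and habitat text. A minimal NumPy sketch of such a symmetric InfoNCE-style objective is shown below; the function name, batch pairing, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def contrastive_loss(sat_emb, obs_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning satellite-image embeddings with
    paired observation/text embeddings (hypothetical sketch).

    sat_emb, obs_emb: (N, D) arrays where row i of each is a positive pair.
    """
    # L2-normalize so the dot product is cosine similarity
    sat = sat_emb / np.linalg.norm(sat_emb, axis=1, keepdims=True)
    obs = obs_emb / np.linalg.norm(obs_emb, axis=1, keepdims=True)
    logits = sat @ obs.T / temperature       # (N, N) similarity matrix
    labels = np.arange(len(sat))             # i-th image pairs with i-th observation

    def xent(l):
        # row-wise softmax cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[labels, labels].mean()

    # average both retrieval directions: image->observation and observation->image
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs the loss approaches zero, while mismatched pairs drive it up; the same objective would also support the zero-shot text-based retrieval the abstract mentions, since text and image embeddings land in a shared space.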
Related papers
- Weakly-supervised Camera Localization by Ground-to-satellite Image Registration [52.54992898069471]
We propose a weakly supervised learning strategy for ground-to-satellite image registration.
It derives positive and negative satellite images for each ground image.
We also propose a self-supervision strategy for cross-view image relative rotation estimation.
arXiv Detail & Related papers (2024-09-10T12:57:16Z)
- Geospecific View Generation -- Geometry-Context Aware High-resolution Ground View Inference from Satellite Views [5.146618378243241]
We propose a novel pipeline to generate geospecific views that maximally respect the weak geometry and texture from multi-view satellite images.
Our method directly predicts ground-view images at geolocation by using a comprehensive set of information from the satellite image.
We demonstrate our pipeline is the first to generate close-to-real and geospecific ground views merely based on satellite images.
arXiv Detail & Related papers (2024-07-10T21:51:50Z)
- Using Texture to Classify Forests Separately from Vegetation [0.0]
This paper presents an initial proposal for a static, algorithmic process to identify forest regions in satellite image data.
With strong initial results, this paper also identifies the next steps to improve the accuracy of the classification and verification processes.
arXiv Detail & Related papers (2024-05-01T00:48:55Z)
- Vehicle Perception from Satellite [54.07157185000604]
The dataset is constructed based on 12 satellite videos and 14 synthetic videos recorded from GTA-V.
It supports several tasks, including tiny object detection, counting and density estimation.
A total of 128,801 vehicles are annotated, and the number of vehicles in each image varies from 0 to 101.
arXiv Detail & Related papers (2024-02-01T15:59:16Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks, including temporal generation, super-resolution given multi-spectral inputs, and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training them requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- Unsupervised Discovery of Semantic Concepts in Satellite Imagery with Style-based Wavelet-driven Generative Models [27.62417543307831]
We present the first pre-trained style- and wavelet-based GAN model that can synthesize a wide gamut of realistic satellite images.
We show that by analyzing the intermediate activations of our network, one can discover a multitude of interpretable semantic directions.
arXiv Detail & Related papers (2022-08-03T14:19:24Z)
- Manipulating UAV Imagery for Satellite Model Training, Calibration and Testing [4.514832807541816]
Modern livestock farming is increasingly data driven and relies on efficient remote sensing to gather data over wide areas.
Satellite imagery is one such data source, which is becoming more accessible for farmers as coverage increases and cost falls.
We present a new multi-temporal dataset of high resolution UAV imagery which is artificially degraded to match satellite data quality.
arXiv Detail & Related papers (2022-03-22T03:57:02Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method for leveraging the wide availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when pre-training with our proposed method.
We find that learnt features can generalize between disparate regions, opening up the possibility of reusing the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization [9.333087475006003]
Cross-view image based geo-localization is notoriously challenging due to drastic viewpoint and appearance differences between the two domains.
We show that we can address this discrepancy explicitly by learning to synthesize realistic street views from satellite inputs.
We propose a novel multi-task architecture in which image synthesis and retrieval are considered jointly.
arXiv Detail & Related papers (2021-03-11T17:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.