Evaluating and Benchmarking Foundation Models for Earth Observation and Geospatial AI
- URL: http://arxiv.org/abs/2406.18295v1
- Date: Wed, 26 Jun 2024 12:27:06 GMT
- Title: Evaluating and Benchmarking Foundation Models for Earth Observation and Geospatial AI
- Authors: Nikolaos Dionelis, Casper Fibaek, Luke Camilleri, Andreas Luyts, Jente Bosmans, Bertrand Le Saux
- Abstract summary: We focus on the specific Computer Vision application of Foundation Models for Earth Observation (EO) and geospatial AI.
We show that for a limited number of labelled data, Foundation Models achieve improved performance compared to problem-specific models.
We present the results using our evaluation benchmark for EO Foundation Models and show that Foundation Models are label efficient in the downstream tasks.
- Score: 26.986832126456413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When the goal is to solve several problems jointly, each to a prescribed target accuracy, Foundation Models should in most cases be preferred over problem-specific models. We focus on the specific Computer Vision application of Foundation Models for Earth Observation (EO) and geospatial AI. These models can solve important problems we are tackling, including for example land cover classification, crop type mapping, flood segmentation, building density estimation, and road regression segmentation. In this paper, we show that with a limited number of labelled data, Foundation Models achieve improved performance compared to problem-specific models. In this work, we also present our proposed evaluation benchmark for Foundation Models for EO. Benchmarking the generalization performance of Foundation Models is important because it has become difficult to standardize a fair comparison across the many different models that have been proposed recently. We present results using our evaluation benchmark for EO Foundation Models and show that Foundation Models are label efficient on the downstream tasks and help us solve problems we are tackling in EO and remote sensing.
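The abstract's label-efficiency comparison can be illustrated with a minimal sketch. The data, the nearest-centroid probe, and the label fractions below are all hypothetical stand-ins (the paper's actual tasks, models, and metrics are not given in this listing); the sketch only shows the shape of such an evaluation: fit a simple classifier on progressively smaller labelled subsets and record held-out accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=200, d=16, sep=2.0):
    """Synthetic binary task: balanced labels, class means separated
    by `sep` along the first feature axis (a toy stand-in for an EO
    downstream task such as flood segmentation, evaluated per pixel)."""
    y = np.arange(n) % 2          # interleaved labels, always balanced
    X = rng.normal(size=(n, d))
    X[:, 0] += sep * y
    return X, y

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Fit class centroids on the labelled subset, score on held-out data."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

def label_efficiency_curve(X, y, fractions=(0.05, 0.2, 1.0)):
    """Held-out accuracy as a function of the fraction of labels used."""
    split = len(y) // 2
    Xtr, ytr = X[:split], y[:split]
    Xte, yte = X[split:], y[split:]
    curve = {}
    for f in fractions:
        k = max(2, int(f * split))  # keep at least one sample per class
        curve[f] = nearest_centroid_acc(Xtr[:k], ytr[:k], Xte, yte)
    return curve

X, y = make_task()
curve = label_efficiency_curve(X, y)
```

In a real evaluation the same curve would be computed twice per task, once probing frozen Foundation Model features and once training a problem-specific model from scratch; the label-efficiency claim is that the former curve stays higher at small fractions.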
Related papers
- Improving QA Model Performance with Cartographic Inoculation [0.0]
"Dataset artifacts" reduce the model's ability to generalize to real-world QA problems.
We analyze the impacts and incidence of dataset artifacts using an adversarial challenge set.
We show that by selectively fine-tuning a model on ambiguous adversarial examples from a challenge set, significant performance improvements can be made.
arXiv Detail & Related papers (2024-01-30T23:08:26Z)
- PhilEO Bench: Evaluating Geo-Spatial Foundation Models [30.02962498304698]
This paper introduces the PhilEO Bench, a novel evaluation framework for EO Foundation Models.
The framework comprises a testbed and a novel 400 GB Sentinel-2 dataset.
We present experiments using our framework evaluating different Foundation Models, including Prithvi and SatMAE.
arXiv Detail & Related papers (2024-01-09T09:58:42Z)
- Open World Object Detection in the Era of Foundation Models [53.683963161370585]
We introduce a new benchmark that includes five real-world application-driven datasets.
We introduce a novel method, Foundation Object detection Model for the Open world, or FOMO, which identifies unknown objects based on their shared attributes with the base known objects.
arXiv Detail & Related papers (2023-12-10T03:56:06Z)
- Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models [75.9543301303586]
Foundation models like CLIP allow zero-shot transfer on various tasks without additional training data.
Fine-tuning and ensembling are also commonly adopted to better fit the downstream tasks.
However, we argue that prior work has overlooked the inherent biases in foundation models.
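The bias removal that this summary alludes to can be sketched with classic logit adjustment, the technique the paper's title builds on: subtract a scaled log class prior from the logits so that over-represented classes no longer dominate predictions. The paper's generalized variant for foundation models is not described in this abstract, so the prior and temperature below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def logit_adjust(logits, prior, tau=1.0):
    """Debias logits by subtracting tau * log(prior) per class.

    `prior` is the estimated class frequency the model is biased
    toward. This is classic logit adjustment; how the generalized
    variant estimates the prior for a foundation model is not stated
    in the abstract, so treat this as a sketch only.
    """
    return np.asarray(logits) - tau * np.log(np.asarray(prior))

# Toy example: the raw logits favour class 0 because of a skewed prior.
logits = np.array([2.0, 1.9])
prior = np.array([0.9, 0.1])     # class 0 heavily over-represented
adjusted = logit_adjust(logits, prior)
pred = int(np.argmax(adjusted))  # the rare class now wins
```

Here 1.9 - log(0.1) exceeds 2.0 - log(0.9), so the adjusted prediction flips from the majority class to the minority class.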
arXiv Detail & Related papers (2023-10-12T08:01:11Z)
- CHORUS: Foundation Models for Unified Data Discovery and Exploration [6.85448651843431]
We show that foundation models are highly applicable to the data discovery and data exploration domain.
We show that a foundation-model-based approach outperforms task-specific models and thus the state of the art.
This suggests a future direction in which disparate data management tasks can be unified under foundation models.
arXiv Detail & Related papers (2023-06-16T03:58:42Z)
- GEO-Bench: Toward Foundation Models for Earth Monitoring [139.77907168809085]
We propose a benchmark comprised of six classification and six segmentation tasks.
This benchmark will be a driver of progress across a variety of Earth monitoring tasks.
arXiv Detail & Related papers (2023-06-06T16:16:05Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing the models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.