On the Generalization of Representation Uncertainty in Earth Observation
- URL: http://arxiv.org/abs/2503.07082v1
- Date: Mon, 10 Mar 2025 09:04:50 GMT
- Title: On the Generalization of Representation Uncertainty in Earth Observation
- Authors: Spyros Kondylatos, Nikolaos Ioannis Bountos, Dimitrios Michail, Xiao Xiang Zhu, Gustau Camps-Valls, Ioannis Papoutsis
- Abstract summary: We investigate the generalization of representation uncertainty in Earth Observation (EO) data. Unlike uncertainties pretrained on natural images, EO-pretraining exhibits strong generalization across unseen EO domains, geographic locations, and target granularities. Initiating the discussion on representation uncertainty in EO, our study provides insights into its strengths and limitations.
- Score: 18.74462879840487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Computer Vision have introduced the concept of pretrained representation uncertainty, enabling zero-shot uncertainty estimation. This holds significant potential for Earth Observation (EO), where trustworthiness is critical, yet the complexity of EO data poses challenges to uncertainty-aware methods. In this work, we investigate the generalization of representation uncertainty in EO, considering the domain's unique semantic characteristics. We pretrain uncertainties on large EO datasets and propose an evaluation framework to assess their zero-shot performance in multi-label classification and segmentation EO tasks. Our findings reveal that, unlike uncertainties pretrained on natural images, EO-pretraining exhibits strong generalization across unseen EO domains, geographic locations, and target granularities, while maintaining sensitivity to variations in ground sampling distance. We demonstrate the practical utility of pretrained uncertainties showcasing their alignment with task-specific uncertainties in downstream tasks, their sensitivity to real-world EO image noise, and their ability to generate spatial uncertainty estimates out-of-the-box. Initiating the discussion on representation uncertainty in EO, our study provides insights into its strengths and limitations, paving the way for future research in the field. Code and weights are available at: https://github.com/Orion-AI-Lab/EOUncertaintyGeneralization.
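The abstract describes zero-shot uncertainty estimation from pretrained representations. The paper's exact formulation is not reproduced here; as a minimal illustrative sketch, a common feature-space approach scores an embedding by its Mahalanobis distance to a Gaussian fit on pretraining features (the embeddings and dimensions below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained embeddings of in-distribution EO images (N x D).
# A single Gaussian fit in feature space stands in for the learned
# representation distribution.
train_feats = rng.normal(size=(1000, 8))

mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(8)  # regularized
cov_inv = np.linalg.inv(cov)

def representation_uncertainty(feat):
    """Mahalanobis distance of an embedding to the training feature distribution."""
    d = np.asarray(feat) - mu
    return float(np.sqrt(d @ cov_inv @ d))

in_dist = rng.normal(size=8)         # resembles the pretraining features
shifted = rng.normal(size=8) + 10.0  # strongly shifted, unfamiliar input
```

Under this sketch, unfamiliar inputs receive higher uncertainty scores out-of-the-box, with no task-specific training.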
Related papers
- Confidence-Filtered Relevance (CFR): An Interpretable and Uncertainty-Aware Machine Learning Framework for Naturalness Assessment in Satellite Imagery [3.846084066763095]
Confidence-Filtered Relevance (CFR) is a data-centric framework that combines LRP Attention Rollout with Deep Deterministic Uncertainty estimation. CFR partitions the dataset into subsets based on uncertainty thresholds, enabling systematic analysis of how uncertainty shapes explanations of naturalness in satellite imagery. As uncertainty increases, the interpretability of relevance heatmaps declines and their entropy grows, indicating less selective and more ambiguous attributions.
arXiv Detail & Related papers (2025-07-17T12:06:08Z) - A Critical Synthesis of Uncertainty Quantification and Foundation Models in Monocular Depth Estimation [13.062551984263031]
Metric depth estimation, which involves predicting absolute distances, poses particular challenges. We fuse five different uncertainty quantification methods with the current state-of-the-art DepthAnythingV2 foundation model. Our findings identify fine-tuning with the Gaussian Negative Log-Likelihood Loss (GNLL) as a particularly promising approach.
arXiv Detail & Related papers (2025-01-14T15:13:00Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Uncertainty quantification for probabilistic machine learning in earth observation using conformal prediction [0.22265536092123003]
Unreliable predictions can occur when using artificial intelligence (AI) systems with negative consequences for downstream applications.
Conformal prediction provides a model-agnostic framework for uncertainty quantification that can be applied to any dataset.
In response to the increased need to report uncertainty alongside point predictions, we bring attention to the promise of conformal prediction in Earth Observation applications.
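The model-agnostic framework summarized above can be illustrated with split conformal prediction for regression. The calibration data, residual scores, and miscoverage level below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy held-out calibration set: true targets and model predictions.
y_cal = rng.normal(size=500)
y_cal_pred = y_cal + rng.normal(scale=0.3, size=500)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - y_cal_pred)

alpha = 0.1  # target miscoverage (90% prediction intervals)
n = len(scores)
# Finite-sample-corrected quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# For any new point prediction, the conformal interval is [pred - q, pred + q].
new_pred = 0.5
interval = (new_pred - q, new_pred + q)
```

Because the construction only needs a held-out calibration set of scores, it wraps around any underlying model, which is what makes the framework attractive for reporting uncertainty alongside EO point predictions.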
arXiv Detail & Related papers (2024-01-12T07:31:21Z) - Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance on par with, or better than, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z) - On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks [11.929914721626849]
We show that state-of-the-art uncertainty estimation algorithms could fail catastrophically under our proposed adversarial attack.
In particular, we target out-domain uncertainty estimation.
arXiv Detail & Related papers (2022-10-03T23:33:38Z) - CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - On the uncertainty of self-supervised monocular depth estimation [52.13311094743952]
Self-supervised paradigms for monocular depth estimation are very appealing since they do not require ground truth annotations at all.
We explore for the first time how to estimate the uncertainty for this task and how this affects depth accuracy.
We propose a novel technique specifically designed for self-supervised approaches.
arXiv Detail & Related papers (2020-05-13T09:00:55Z) - Uncertainty Estimation in Autoregressive Structured Prediction [16.441252243846534]
This work aims to investigate uncertainty estimation for autoregressive structured prediction tasks.
We consider uncertainty estimation for sequence data at both the token level and the complete-sequence level, as well as interpretations for, and applications of, various measures of uncertainty.
This work also provides baselines for token-level and sequence-level error detection, and sequence-level out-of-domain input detection on the WMT'14 English-French and WMT'17 English-German translation datasets.
arXiv Detail & Related papers (2020-02-18T15:40:13Z)
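Token-level uncertainty for autoregressive prediction, as discussed above, is commonly measured by the predictive entropy of each token's output distribution. The vocabulary size and logits below are made up for illustration; the paper's specific measures are not reproduced here:

```python
import numpy as np

def token_entropies(logits):
    """Per-token predictive entropy from a sequence of logit vectors."""
    logits = np.asarray(logits, dtype=float)
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Two tokens over a 3-word vocabulary: one confident, one maximally uncertain.
ent = token_entropies([[10.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
seq_uncertainty = ent.mean()  # a simple sequence-level aggregate
```

A high-entropy token flags a position where the model is unsure, which is the basis for the token-level error detection baselines mentioned above; averaging over positions gives one crude sequence-level score.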
This list is automatically generated from the titles and abstracts of the papers in this site.