Uncertainty quantification for DeepONets with ensemble Kalman inversion
- URL: http://arxiv.org/abs/2403.03444v1
- Date: Wed, 6 Mar 2024 04:02:30 GMT
- Title: Uncertainty quantification for DeepONets with ensemble Kalman inversion
- Authors: Andrew Pensoneault, Xueyu Zhu
- Abstract summary: In this work, we propose a novel inference approach for efficient uncertainty quantification (UQ) for operator learning by harnessing the power of the Ensemble Kalman Inversion (EKI) approach.
EKI is known for being derivative-free, noise-robust, and highly parallelizable, and has demonstrated its advantages for UQ of physics-informed neural networks.
We deploy a mini-batch variant of EKI to accommodate larger datasets, mitigating the computational demand of training on large datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, operator learning, particularly the DeepONet, has received
much attention for efficiently learning complex mappings between input and
output functions across diverse fields. However, in practical scenarios with
limited and noisy data, accessing the uncertainty in DeepONet predictions
becomes essential, especially in mission-critical or safety-critical
applications. Existing methods, either computationally intensive or yielding
unsatisfactory uncertainty quantification, leave room for developing efficient
and informative uncertainty quantification (UQ) techniques tailored for
DeepONets. In this work, we propose a novel inference approach for efficient
UQ for operator learning by harnessing the power of the Ensemble Kalman
Inversion (EKI) approach. EKI, known for being derivative-free, noise-robust, and
highly parallelizable, has demonstrated its advantages for UQ of
physics-informed neural networks [28]. Our innovative application of EKI
enables us to efficiently train ensembles of DeepONets while obtaining
informative uncertainty estimates for the output of interest. We deploy a
mini-batch variant of EKI to accommodate larger datasets and mitigate the
computational demand of the training stage.
Furthermore, we introduce a heuristic method to estimate the artificial
dynamics covariance, thereby improving our uncertainty estimates. Finally, we
demonstrate the effectiveness and versatility of our proposed methodology
across various benchmark problems, showcasing its potential to address the
pressing challenges of uncertainty quantification in DeepONets, especially for
practical applications with limited and noisy data.
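Since the abstract centers on the EKI update, here is a minimal numpy sketch of one generic, perturbed-observation EKI step (an illustration of the standard algorithm, not the authors' code; the function names, array shapes, and the covariance-inversion strategy are assumptions):

```python
import numpy as np

def eki_update(thetas, forward, y, gamma, rng):
    """One generic Ensemble Kalman Inversion step.

    thetas : (J, p) ensemble of network parameter vectors
    forward: maps one parameter vector to a model output of shape (d,)
    y      : (d,) observed data (or one mini-batch of it)
    gamma  : (d, d) observation-noise covariance
    """
    J = thetas.shape[0]
    G = np.stack([forward(t) for t in thetas])   # (J, d); members run in parallel
    dt = thetas - thetas.mean(axis=0)
    dg = G - G.mean(axis=0)
    C_tg = dt.T @ dg / (J - 1)                   # parameter-output cross-covariance
    C_gg = dg.T @ dg / (J - 1)                   # output covariance
    # Perturbed-observation variant: each member sees its own noisy copy of y.
    y_pert = y + rng.multivariate_normal(np.zeros(y.size), gamma, size=J)
    K = C_tg @ np.linalg.inv(C_gg + gamma)       # Kalman gain, (p, d)
    return thetas + (y_pert - G) @ K.T           # derivative-free update
```

Under this reading, the mini-batch variant mentioned in the abstract would simply draw a fresh random subset of the training pairs for `y` and `forward` at each iteration, so no full-dataset forward sweep is needed per step.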
Related papers
- Linear-Time User-Level DP-SCO via Robust Statistics
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
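As a rough illustration of the robust-statistics primitive named above (a hedged sketch only; `trim_frac` and the per-user aggregation are assumptions, and the actual DP mechanism with calibrated noise is omitted):

```python
import numpy as np

def trimmed_mean(user_stats, trim_frac=0.1):
    """Coordinate-wise trimmed mean of per-user statistics.

    user_stats: (n_users, d) array, one aggregated gradient estimate
    per user. Trimming bounds each user's influence, which is what
    keeps the sensitivity (and hence the DP noise) small.
    """
    n = user_stats.shape[0]
    k = int(trim_frac * n)
    s = np.sort(user_stats, axis=0)   # sort each coordinate independently
    return s[k:n - k].mean(axis=0)    # drop the k smallest and k largest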
arXiv Detail & Related papers (2025-02-13T02:05:45Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
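The paper's mechanism evolves uncertainty in a learned latent space; as a stand-in illustration of propagating uncertainty over long autoregressive rollouts, here is a plain ensemble sketch (`step_fns`, the shapes, and the ensemble reading are assumptions, not the paper's method):

```python
import numpy as np

def ensemble_rollout(step_fns, x0, n_steps):
    """Autoregressive rollout with an ensemble of one-step surrogates.

    step_fns: list of trained one-step models (the ensemble)
    x0      : initial state of shape (d,)
    Returns per-step mean and spread across the ensemble trajectories.
    """
    states = np.stack([x0.copy() for _ in step_fns])   # one trajectory per member
    means, stds = [], []
    for _ in range(n_steps):
        states = np.stack([f(s) for f, s in zip(step_fns, states)])
        means.append(states.mean(axis=0))
        stds.append(states.std(axis=0))                # spread grows with horizon
    return np.stack(means), np.stack(stds)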
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Neural auto-designer for enhanced quantum kernels
We present a data-driven approach that automates the design of problem-specific quantum feature maps.
Our work highlights the substantial role of deep learning in advancing quantum machine learning.
arXiv Detail & Related papers (2024-01-20T03:11:59Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels
We propose a novel equation discovery method based on kernel learning and Bayesian spike-and-slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
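A minimal sketch of the kernel regression ingredient only (not KBASS's spike-and-slab inference or its EP-EM algorithm; the RBF kernel, `ell`, and `noise` are assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, ell=1.0):
    """Squared-exponential kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def kernel_regression(X, y, X_new, ell=1.0, noise=1e-2):
    """Kernel ridge estimate of the target function from noisy samples."""
    K = rbf_kernel(X, X, ell) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)            # weights of the representer expansion
    return rbf_kernel(X_new, X, ell) @ alpha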
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- DUDES: Deep Uncertainty Distillation using Ensembles for Semantic Segmentation
Quantifying predictive uncertainty is a promising endeavour for opening up the use of deep neural networks in safety-critical applications.
We present a novel approach for efficient and reliable uncertainty estimation which we call Deep Uncertainty Distillation using Ensembles (DUDES).
DUDES applies student-teacher distillation with a Deep Ensemble to accurately approximate predictive uncertainties with a single forward pass.
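A hedged sketch of how ensemble outputs could be distilled into single-pass student targets in the spirit of DUDES (the exact uncertainty target used by DUDES may differ; the shapes and the entropy choice are assumptions):

```python
import numpy as np

def distillation_targets(ensemble_probs):
    """Turn Deep Ensemble outputs into single-pass student targets.

    ensemble_probs: (M, N, C) softmax outputs of M ensemble members
    over N samples (or pixels) and C classes. The student regresses
    both targets so inference needs only one forward pass.
    """
    mean_probs = ensemble_probs.mean(axis=0)   # (N, C) soft labels
    # Predictive entropy of the averaged distribution as a scalar
    # uncertainty target per sample (one plausible choice).
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)
    return mean_probs, entropy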
arXiv Detail & Related papers (2023-03-17T08:56:27Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the objective loss terms so that the network focuses on representation learning for uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Incremental Permutation Feature Importance (iPFI): Towards Online Explanations on Data Streams
We are interested in dynamic scenarios where data is sampled progressively, and learning is done in an incremental rather than a batch mode.
We seek efficient incremental algorithms for computing feature importance (FI) measures, specifically an incremental FI measure based on marginalizing absent features, similar to permutation feature importance (PFI).
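A hypothetical minimal sketch of such a streaming PFI estimator (the paper's estimator and sampling strategies differ in detail; `alpha`, the sample buffer, and the marginalization scheme are assumptions):

```python
import numpy as np

class IncrementalPFI:
    """Streaming permutation feature importance with exponential smoothing.

    For each arriving (x, y), one feature at a time is replaced by a
    value drawn from a buffer of past samples (approximate feature
    marginalization), and the resulting loss increase is smoothed
    into a running importance estimate.
    """
    def __init__(self, model, loss, n_features, alpha=0.01, buffer_size=100):
        self.model, self.loss, self.alpha = model, loss, alpha
        self.fi = np.zeros(n_features)
        self.buffer, self.cap = [], buffer_size

    def update(self, x, y, rng):
        if self.buffer:
            base = self.loss(self.model(x), y)
            for j in range(len(self.fi)):
                x_perm = x.copy()
                x_perm[j] = self.buffer[rng.integers(len(self.buffer))][j]
                delta = self.loss(self.model(x_perm), y) - base
                self.fi[j] = (1 - self.alpha) * self.fi[j] + self.alpha * delta
        if len(self.buffer) < self.cap:
            self.buffer.append(x.copy())
        return self.fi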
arXiv Detail & Related papers (2022-09-05T12:34:27Z)
- The Unreasonable Effectiveness of Deep Evidential Regression
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
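For reference, the Normal-Inverse-Gamma outputs used by Deep Evidential Regression decompose into aleatoric and epistemic parts as follows (a sketch of the standard NIG moment formulas, assuming alpha > 1):

```python
import numpy as np

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Decompose Normal-Inverse-Gamma evidential outputs.

    A single head predicts (gamma, nu, alpha, beta); the prediction is
    gamma, and the NIG moments split the variance (requires alpha > 1):
      aleatoric  E[sigma^2] = beta / (alpha - 1)
      epistemic  Var[mu]    = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1)
    epistemic = beta / (nu * (alpha - 1))
    return gamma, aleatoric, epistemic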
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- On Efficient Uncertainty Estimation for Resource-Constrained Mobile Applications
Predictive uncertainty supplements model predictions and enables improved functionality of downstream tasks.
We tackle this problem by building upon Monte Carlo Dropout (MCDO) models using the Axolotl framework.
We conduct experiments on (1) a multi-class classification task using the CIFAR10 dataset, and (2) a more complex human body segmentation task.
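A minimal numpy sketch of MC Dropout inference (the Axolotl framework's API is not shown here; `forward(x, mask_fn)` is an assumed hook for applying dropout to hidden activations):

```python
import numpy as np

def mc_dropout_predict(forward, x, n_samples=30, rate=0.1, rng=None):
    """Monte Carlo Dropout: keep dropout active at test time.

    forward(x, mask_fn) is an assumed hook that applies mask_fn to the
    hidden activations; the spread over stochastic passes estimates
    predictive uncertainty at the cost of n_samples forward passes.
    """
    rng = rng or np.random.default_rng()

    def mask_fn(h):
        keep = rng.random(h.shape) > rate
        return h * keep / (1.0 - rate)   # inverted-dropout rescaling

    preds = np.stack([forward(x, mask_fn) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)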
arXiv Detail & Related papers (2021-11-11T22:24:15Z)
- Interval Deep Learning for Uncertainty Quantification in Safety Applications
Current deep neural networks (DNNs) do not have an implicit mechanism to quantify and propagate significant input data uncertainty.
We present a DNN optimized with gradient-based methods, capable of quantifying input and parameter uncertainty by means of interval analysis.
We show that the Deep Interval Neural Network (DINN) can produce accurate bounded estimates from uncertain input data.
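A small sketch of the interval-arithmetic propagation underlying such a network, for one affine layer and a ReLU (layer shapes are assumptions; the DINN training procedure itself is not shown):

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b exactly.

    Center-radius form: the output center follows the input center,
    and the radius can only be scaled by the magnitudes |W|.
    """
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)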
arXiv Detail & Related papers (2021-05-13T17:21:33Z)