Ensembles of Neural Surrogates for Parametric Sensitivity in Ocean Modeling
- URL: http://arxiv.org/abs/2508.16489v2
- Date: Tue, 26 Aug 2025 13:40:50 GMT
- Title: Ensembles of Neural Surrogates for Parametric Sensitivity in Ocean Modeling
- Authors: Yixuan Sun, Romain Egele, Sri Hari Krishna Narayanan, Luke Van Roekel, Carmelo Gonzales, Steven Brus, Balu Nadiga, Sandeep Madireddy, Prasanna Balaprakash
- Abstract summary: We leverage large-scale hyperparameter search and ensemble learning with deep learning surrogates to improve forward predictions, autoregressive rollouts, and backward adjoint sensitivity estimation.
- Score: 10.718935131235261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate simulations of the oceans are crucial in understanding the Earth system. Despite their efficiency, simulations at lower resolutions must rely on various uncertain parameterizations to account for unresolved processes. However, model sensitivity to parameterizations is difficult to quantify, making it challenging to tune these parameterizations to reproduce observations. Deep learning surrogates have shown promise for efficient computation of parametric sensitivities in the form of partial derivatives, but their reliability is difficult to evaluate without ground-truth derivatives. In this work, we leverage large-scale hyperparameter search and ensemble learning to improve forward predictions, autoregressive rollouts, and backward adjoint sensitivity estimation. In particular, the ensemble method yields epistemic uncertainty estimates for both function-value predictions and their derivatives, improving the reliability of neural surrogates in decision making.
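The core idea, a deep ensemble providing epistemic uncertainty for both predictions and adjoint sensitivities, can be sketched as follows. This is a minimal illustration, not the authors' architecture: members are tiny untrained one-hidden-layer networks (in practice each would be trained independently), and the input/output dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_member(n_in=3, n_hidden=16):
    """One random ensemble member; in practice each is trained independently."""
    W1 = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=n_hidden)
    return W1, b1, w2

def forward_and_grad(member, x):
    """Forward pass y = w2 . tanh(W1 x + b1) and the analytic sensitivity dy/dx."""
    W1, b1, w2 = member
    h = np.tanh(W1 @ x + b1)
    y = w2 @ h
    # dy/dx = W1^T [ (1 - h^2) * w2 ]  (chain rule through tanh)
    grad = W1.T @ ((1.0 - h**2) * w2)
    return y, grad

ensemble = [make_member() for _ in range(8)]
x = np.array([0.3, -0.1, 0.5])  # a parameter vector to probe

ys, grads = zip(*(forward_and_grad(m, x) for m in ensemble))
y_mean, y_std = np.mean(ys), np.std(ys)   # prediction and its epistemic spread
g_mean = np.mean(grads, axis=0)           # ensemble estimate of the sensitivity
g_std = np.std(grads, axis=0)             # per-parameter derivative uncertainty
```

The spread across members (`y_std`, `g_std`) is what flags inputs where the surrogate's derivatives should not be trusted.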
Related papers
- Enhancing Uncertainty Estimation and Interpretability via Bayesian Non-negative Decision Layer [55.66973223528494]
We develop a Bayesian Non-negative Decision Layer (BNDL), which reformulates deep neural networks as a conditional Bayesian non-negative factor analysis. BNDL can model complex dependencies and provide robust uncertainty estimation. We also offer theoretical guarantees that BNDL can achieve effective disentangled learning.
arXiv Detail & Related papers (2025-05-28T10:23:34Z) - Contrastive Normalizing Flows for Uncertainty-Aware Parameter Estimation [0.0]
Estimating physical parameters from data is a crucial application of machine learning (ML) in the physical sciences. We introduce a novel approach based on Contrastive Normalizing Flows (CNFs), which achieves top performance on the HiggsML Uncertainty Challenge dataset.
arXiv Detail & Related papers (2025-05-13T16:14:34Z) - Generalizable Implicit Neural Representations via Parameterized Latent Dynamics for Baroclinic Ocean Forecasting [15.223198342339803]
PINROD is a novel framework combining dynamics-aware implicit neural representations with parameterized neural ordinary differential equations. Experiments on ocean mesoscale activity show superior accuracy over existing baselines.
arXiv Detail & Related papers (2025-03-27T15:04:52Z) - Scrambling for precision: optimizing multiparameter qubit estimation in the face of sloppiness and incompatibility [0.0]
We explore the connection between sloppiness and incompatibility by introducing an adjustable scrambling operation for parameter encoding. Through analytical optimization, we identify strategies to mitigate these constraints and enhance estimation efficiency.
arXiv Detail & Related papers (2025-03-11T09:57:51Z) - Low-Order Flow Reconstruction and Uncertainty Quantification in Disturbed Aerodynamics Using Sparse Pressure Measurements [0.0]
This paper presents a novel machine-learning framework for reconstructing low-order gust-encounter flow fields and lift coefficients from sparse, noisy surface pressure measurements. Our study thoroughly investigates the time-varying response of sensors to gust-airfoil interactions, uncovering valuable insights into optimal sensor placement.
arXiv Detail & Related papers (2025-01-06T22:02:06Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Multiparameter estimation perspective on non-Hermitian singularity-enhanced sensing [0.0]
We study the possibility of achieving unbounded sensitivity when using the system to sense linear singularity perturbations away from a singular point.
We identify under what conditions and at what rate the resulting sensitivity can indeed diverge, showing that nuisance parameters should generally be included in the analysis.
arXiv Detail & Related papers (2023-03-09T19:00:09Z) - Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models [132.90062129639705]
We propose a novel training strategy that encourages all parameters to be trained sufficiently.
A parameter with low sensitivity is redundant, and we improve its fitting by increasing its learning rate.
In contrast, a parameter with high sensitivity is well-trained and we regularize it by decreasing its learning rate to prevent further overfitting.
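The rule described above, larger steps for low-sensitivity parameters and smaller steps for high-sensitivity ones, can be sketched as follows. This is a simplified per-parameter illustration, not the paper's exact schedule; `base_lr`, the clipping bounds, and the sensitivity proxy are all assumptions.

```python
import numpy as np

def sensitivity_guided_lr(sensitivities, base_lr=1e-3, lo=0.25, hi=4.0):
    """Scale each parameter's learning rate inversely with its sensitivity:
    low-sensitivity (redundant) parameters get a larger step to improve their
    fitting; high-sensitivity (well-trained) parameters get a smaller step as
    regularization. Clipping to [lo, hi] keeps the schedule stable."""
    s = np.asarray(sensitivities, dtype=float)
    s_rel = s / (s.mean() + 1e-12)              # sensitivity relative to the mean
    scale = np.clip(1.0 / (s_rel + 1e-12), lo, hi)
    return base_lr * scale

# Least sensitive parameter receives the largest learning rate.
lrs = sensitivity_guided_lr([0.1, 1.0, 10.0])
```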
arXiv Detail & Related papers (2022-02-06T00:22:28Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.