Towards fully differentiable neural ocean model with Veros
- URL: http://arxiv.org/abs/2511.17427v1
- Date: Fri, 21 Nov 2025 17:24:00 GMT
- Title: Towards fully differentiable neural ocean model with Veros
- Authors: Etienne Meunier, Said Ouala, Hugo Frezat, Julien Le Sommer, Ronan Fablet
- Abstract summary: We present a differentiable extension of the VEROS ocean model, enabling automatic differentiation through its dynamical core. We describe the key modifications required to make the model fully compatible with the JAX automatic differentiation framework and evaluate the numerical consistency of the resulting implementation.
- Score: 8.204579381159597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a differentiable extension of the VEROS ocean model, enabling automatic differentiation through its dynamical core. We describe the key modifications required to make the model fully compatible with the JAX automatic differentiation framework and evaluate the numerical consistency of the resulting implementation. Two illustrative applications are then demonstrated: (i) the correction of an initial ocean state through gradient-based optimization, and (ii) the calibration of unknown physical parameters directly from model observations. These examples highlight how differentiable programming can facilitate end-to-end learning and parameter tuning in ocean modeling. Our implementation is available online.
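The first application, correcting an initial ocean state by gradient descent, can be illustrated with a minimal twin experiment. This is a sketch only: the "dynamical core" below is a hypothetical toy scalar recursion with a hand-derived gradient, not VEROS, and none of the names come from the paper's implementation. In the actual setup, JAX's `jax.grad` would supply this gradient automatically through the full model; calibrating a physical parameter (application ii) works the same way, differentiating with respect to the parameter instead of the state.

```python
# Sketch of application (i): recover an initial state by gradient-based
# optimization. The "ocean model" here is a toy scalar recursion
# x_{t+1} = a * x_t, so x_T = a**T * x0 and the gradient of the misfit
# can be written out by hand via the chain rule.

def forward(x0, a, steps):
    """Roll the toy dynamical core forward from initial state x0."""
    x = x0
    for _ in range(steps):
        x = a * x
    return x

def loss_and_grad(x0, a, steps, y_obs):
    """Squared misfit to the observed final state and its gradient:
    d/dx0 (a**steps * x0 - y)**2 = 2 * (x_T - y) * a**steps."""
    x_t = forward(x0, a, steps)
    residual = x_t - y_obs
    return residual ** 2, 2.0 * residual * a ** steps

# Twin experiment: observe a run from the true initial state, then
# correct a wrong first guess by plain gradient descent.
A, STEPS = 0.9, 10
x0_true = 5.0
y_obs = forward(x0_true, A, STEPS)

x0 = 1.0  # deliberately wrong first guess
lr = 0.5
for _ in range(200):
    _, g = loss_and_grad(x0, A, STEPS, y_obs)
    x0 -= lr * g
```

Because the toy model is linear in the initial state, the descent converges to the true value; with a differentiable VEROS core, the same loop applies unchanged, with the hand-coded gradient replaced by the one returned by automatic differentiation.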
Related papers
- Image Segmentation via Variational Model Based Tailored UNet: A Deep Variational Framework [6.146992603795658]
We propose Variational Model Based Tailored UNet (VM_TUNet) for image segmentation. VM_TUNet combines the interpretability and edge-preserving properties of variational methods with the adaptive feature learning of neural networks. We show that VM_TUNet achieves superior segmentation performance compared to existing approaches.
arXiv Detail & Related papers (2025-05-09T05:50:22Z)
- Model Assembly Learning with Heterogeneous Layer Weight Merging [57.8462476398611]
We introduce Model Assembly Learning (MAL), a novel paradigm for model merging. MAL integrates parameters from diverse models in an open-ended model zoo to enhance the base model's capabilities.
arXiv Detail & Related papers (2025-03-27T16:21:53Z)
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Online Calibration of Deep Learning Sub-Models for Hybrid Numerical Modeling Systems [34.50407690251862]
We present an efficient and practical online learning approach for hybrid systems.
We demonstrate that the method, called EGA for Euler Gradient Approximation, converges to the exact gradients in the limit of infinitely small time steps.
Results show significant improvements over offline learning, highlighting the potential of end-to-end online learning for hybrid modeling.
arXiv Detail & Related papers (2023-11-17T17:36:26Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models are optimized to parameterize via deep geometric learning to embed human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- Differentiable, learnable, regionalized process-based models with physical outputs can approach state-of-the-art hydrologic prediction accuracy [1.181206257787103]
We show that differentiable, learnable, process-based models (called delta models here) can approach the performance level of LSTM for the intensively-observed variable (streamflow) with regionalized parameterization.
We use a simple hydrologic model HBV as the backbone and use embedded neural networks, which can only be trained in a differentiable programming framework.
arXiv Detail & Related papers (2022-03-28T15:06:53Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Automatic Learning of Subword Dependent Model Scales [50.105894487730545]
We show that the model scales for a combination of attention encoder-decoder acoustic model and language model can be learned as effectively as with manual tuning.
We extend this approach to subword-dependent model scales, which could not be tuned manually, leading to a 7% improvement on LBS and 3% on SWB.
arXiv Detail & Related papers (2021-10-18T13:48:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.