Climate Model Tuning with Online Synchronization-Based Parameter Estimation
- URL: http://arxiv.org/abs/2510.06180v1
- Date: Tue, 07 Oct 2025 17:43:11 GMT
- Title: Climate Model Tuning with Online Synchronization-Based Parameter Estimation
- Authors: Jordan Seneca, Suzanne Bintanja, Frank M. Selten
- Abstract summary: We show the potential of a parameter estimation algorithm which makes use of synchronization to tune a global atmospheric model. We then apply the algorithm to the weights of each member of a supermodel ensemble to optimize the overall predictions. Finally, we introduce a novel approach, called adaptive supermodeling, which combines both methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In climate science, the tuning of climate models is a computationally intensive problem due to the combination of the high-dimensionality of the system state and long integration times. Here we demonstrate the potential of a parameter estimation algorithm which makes use of synchronization to tune a global atmospheric model at modest computational costs. We first use it to directly optimize internal model parameters. We then apply the algorithm to the weights of each member of a supermodel ensemble to optimize the overall predictions. In both cases, the algorithm is able to find parameters which result in reduced errors in the climatology of the model. Finally, we introduce a novel approach which combines both methods called adaptive supermodeling, where the internal parameters of the members of a supermodel are tuned simultaneously with the model weights such that the supermodel predictions are optimized. For a case designed to challenge the two previous methods, adaptive supermodeling achieves a performance similar to a perfect model.
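The core idea, coupling a running model to observations and letting the residual mismatch drive a parameter update, can be illustrated on a toy system. The sketch below is an illustration only, not the paper's algorithm or atmospheric model: a Lorenz-63 "model" run is nudged toward a synchronized "truth" run, and the parameter rho is adapted online from the remaining synchronization error. The function names, coupling constants, and the explicit-Euler integrator are all assumptions of this sketch.

```python
def lorenz(x, y, z, sigma, rho, beta):
    """Right-hand side of the Lorenz-63 system."""
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

def estimate_rho(rho_true=28.0, rho_guess=20.0, k=20.0, eps=0.5,
                 dt=0.001, steps=150_000):
    """Estimate rho by nudging a 'model' run toward a 'truth' run and
    adapting rho online from the remaining synchronization error."""
    sigma, beta = 10.0, 8.0 / 3.0
    xt, yt, zt = 1.0, 1.0, 1.0    # truth trajectory (the 'observations')
    xm, ym, zm = 5.0, 5.0, 5.0    # imperfect model with the wrong rho
    rho = rho_guess
    for _ in range(steps):
        dxt, dyt, dzt = lorenz(xt, yt, zt, sigma, rho_true, beta)
        dxm, dym, dzm = lorenz(xm, ym, zm, sigma, rho, beta)
        err = yt - ym                 # observed-minus-model mismatch
        dym += k * err                # nudging keeps the runs synchronized
        rho += dt * eps * err * xm    # update rho along d f_y / d rho = x
        xt, yt, zt = xt + dt * dxt, yt + dt * dyt, zt + dt * dzt
        xm, ym, zm = xm + dt * dxm, ym + dt * dym, zm + dt * dzm
    return rho
```

Near synchronization the error obeys e ≈ x(rho_true − rho)/(k + 1), so the update rule eps·e·x descends toward the true parameter; the same logic scales to the high-dimensional setting the paper targets.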
Related papers
- Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging [75.93960998357812]
Deep model merging is an emerging research direction that combines multiple fine-tuned models to harness their capabilities across different tasks and domains. Current model merging techniques focus on merging all available models simultaneously, with weight-matrix-based methods being the predominant approach. We propose a training-free, projection-based continual merging method that processes models sequentially.
arXiv Detail & Related papers (2025-01-16T13:17:24Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC)
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Epidemic Modeling using Hybrid of Time-varying SIRD, Particle Swarm Optimization, and Deep Learning [6.363653898208231]
Epidemiological models are best suited to modeling an epidemic when the spread pattern is stationary.
We develop a hybrid model encompassing epidemic modeling, particle swarm optimization, and deep learning.
We evaluate the model on three highly affected countries: the USA, India, and the UK.
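For context, the SIRD compartmental dynamics underlying such hybrid models are straightforward to step numerically. The sketch below is a generic explicit-Euler SIRD update, not the paper's hybrid method; the parameter names and values are illustrative assumptions.

```python
def sird_step(s, i, r, d, beta, gamma, mu, dt=0.1):
    """One explicit-Euler step of the SIRD compartmental model.

    s, i, r, d      : susceptible/infected/recovered/deceased fractions
    beta, gamma, mu : transmission, recovery, and mortality rates
                      (pass different values each step to make them
                      time-varying, as in a non-stationary epidemic)
    """
    new_infections = beta * s * i
    s_next = s - dt * new_infections
    i_next = i + dt * (new_infections - gamma * i - mu * i)
    r_next = r + dt * gamma * i
    d_next = d + dt * mu * i
    return s_next, i_next, r_next, d_next
```

A time-varying variant simply supplies beta(t), gamma(t), mu(t) at each step; fitting those trajectories is where the paper's particle swarm optimization and deep learning components come in.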
arXiv Detail & Related papers (2024-01-31T18:08:06Z)
- A global optimization SAR image segmentation model can be easily transformed to a general ROF denoising model [0.881121308982678]
We transform the proposed model into a global optimization model by using a convex relaxation technique. We propose two fast models to solve the global optimization model. Experiments on challenging synthetic images and Envisat SAR images demonstrate the superiority of our proposed models.
arXiv Detail & Related papers (2023-12-08T23:26:57Z)
- Predictive Modeling through Hyper-Bayesian Optimization [60.586813904500595]
We propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
The algorithm moves back and forth between BO in the model space and BO in the function space, where the goodness of the recommended model is captured.
In addition to improved sample efficiency, the framework outputs information about the black-box function.
arXiv Detail & Related papers (2023-08-01T04:46:58Z)
- Understanding Parameter Sharing in Transformers [53.75988363281843]
Previous work on Transformers has focused on sharing parameters in different layers, which can improve the performance of models with limited parameters by increasing model depth.
We show that the success of this approach can be largely attributed to better convergence, with only a small part due to the increased model complexity.
Experiments on 8 machine translation tasks show that our model achieves competitive performance with only half the model complexity of parameter sharing models.
arXiv Detail & Related papers (2023-06-15T10:48:59Z)
- An Efficient Hierarchical Kriging Modeling Method for High-dimension Multi-fidelity Problems [0.0]
Multi-fidelity Kriging model is a promising technique in surrogate-based design.
The cost for building a multi-fidelity Kriging model increases significantly with the increase of the problem dimension.
An efficient Hierarchical Kriging modeling method is proposed to attack this issue.
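For context, the single-fidelity building block that hierarchical multi-fidelity Kriging stacks across fidelity levels is an ordinary Kriging (Gaussian-process) interpolator. A minimal sketch with a Gaussian kernel and fixed, assumed hyperparameters, not the paper's hierarchical scheme:

```python
import numpy as np

def kriging_predict(X, y, Xq, length=0.15, noise=1e-6):
    """Ordinary Kriging / GP posterior-mean prediction with a Gaussian
    kernel and fixed hyperparameters (length scale, jitter)."""
    def kern(A, B):
        # squared distances between all pairs of rows in A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))
    K = kern(X, X) + noise * np.eye(len(X))   # jitter for conditioning
    alpha = np.linalg.solve(K, y)             # K^-1 y
    return kern(Xq, X) @ alpha                # posterior mean at Xq
```

The cost driver the paper addresses is visible here: building the model requires solving against an n-by-n kernel matrix, which grows quickly with sample count and dimension.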
arXiv Detail & Related papers (2022-12-31T15:17:07Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, it still remains an open problem of how to choose the tunable parameters.
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (GMSOLAS) penalty function, wherein the weights are obtained from the unpenalised f-HD maximum-likelihood estimators.
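The weighting scheme described, with penalty weights taken from the unpenalised estimates, is the standard adaptive-LASSO construction. A minimal sketch on a plain linear model, with ordinary least squares standing in for the f-HD likelihood and coordinate descent standing in for the paper's optimiser; `lam` and `gamma` are assumed tuning constants:

```python
import numpy as np

def adaptive_lasso(X, y, lam=0.1, gamma=1.0, n_iter=500):
    """Adaptive LASSO for a linear model: penalty weights come from the
    unpenalised (OLS) fit, then a plain LASSO is solved by coordinate
    descent on the rescaled design."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = np.abs(beta_ols) ** gamma    # adaptive weights from unpenalised fit
    Xw = X * w                       # column rescaling absorbs the weights
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - Xw @ beta + Xw[:, j] * beta[j]   # partial residual
            rho_j = Xw[:, j] @ resid
            z_j = Xw[:, j] @ Xw[:, j]
            # soft-threshold update for the LASSO subproblem
            beta[j] = np.sign(rho_j) * max(abs(rho_j) - lam * n, 0.0) / z_j
    return beta * w                  # map back to original coordinates
```

Because small unpenalised estimates get small weights, their rescaled columns attract a relatively heavier penalty and are driven exactly to zero, which is what gives the adaptive LASSO its model-selection behaviour.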
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- An efficient estimation of time-varying parameters of dynamic models by combining offline batch optimization and online data assimilation [0.0]
I present an efficient and practical method to estimate the time-varying parameters of relatively low dimensional models.
I propose combining offline batch optimization and online data assimilation.
arXiv Detail & Related papers (2021-10-24T20:12:12Z)
- Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models using a parametric version of the Random Subspace approach and updating the parameters of the feature sampling distribution.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
arXiv Detail & Related papers (2021-09-07T13:58:23Z)
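The Random Subspace idea itself is easy to sketch: each ensemble member is trained on a random subset of the features, and predictions are averaged across members. The version below uses a least-squares base learner as a stand-in (the method is base-learner agnostic) and omits the paper's parametric feature-sampling distribution:

```python
import numpy as np

def random_subspace_ensemble(X, y, n_models=30, frac=0.5, seed=0):
    """Random Subspace ensemble: each member sees only a random subset
    of the features; predictions are averaged across members.  A
    least-squares fit stands in for an arbitrary base learner."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = max(1, int(frac * p))
    members = []
    for _ in range(n_models):
        feats = rng.choice(p, size=k, replace=False)   # random subspace
        coef, *_ = np.linalg.lstsq(X[:, feats], y, rcond=None)
        members.append((feats, coef))
    def predict(Xq):
        # average the members' predictions on their own feature subsets
        return np.mean([Xq[:, f] @ c for f, c in members], axis=0)
    return predict
```

Making `frac` (or per-feature sampling probabilities) learnable is what turns this classic construction into the parametric, optimizable variant the paper studies.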
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.