Combining data assimilation and machine learning to estimate parameters
of a convective-scale model
- URL: http://arxiv.org/abs/2109.02953v1
- Date: Tue, 7 Sep 2021 09:17:29 GMT
- Title: Combining data assimilation and machine learning to estimate parameters
of a convective-scale model
- Authors: Stefanie Legler, Tijana Janjic
- Abstract summary: Errors in the representation of clouds in convection-permitting numerical weather prediction models can be introduced by different sources.
In this work, we look at the problem of parameter estimation through an artificial intelligence lens by training two types of artificial neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Errors in the representation of clouds in convection-permitting numerical
weather prediction models can be introduced by different sources. These can be
the forcing and boundary conditions, the representation of orography, the
accuracy of the numerical schemes determining the evolution of humidity and
temperature, but large contributions are due to the parametrization of
microphysics and the parametrization of processes in the surface and boundary
layers. These schemes typically contain several tunable parameters that are
either not physical or only crudely known, leading to model errors.
Traditionally, the numerical values of these model parameters are chosen by
manual model tuning. More objectively, they can be estimated from observations
by the augmented state approach during the data assimilation. Alternatively, in
this work, we look at the problem of parameter estimation through an artificial
intelligence lens by training two types of artificial neural networks (ANNs) to
estimate several parameters of the one-dimensional modified shallow-water model
as a function of the observations or analysis of the atmospheric state. Through
perfect model experiments, we show that Bayesian neural networks (BNNs) and
Bayesian approximations of point estimate neural networks (NNs) are able to
estimate model parameters and their relevant statistics. The estimation of
parameters combined with data assimilation for the state decreases the initial
state errors even when assimilating sparse and noisy observations. The
sensitivity to the number of ensemble members, observation coverage, and neural
network size is shown. Additionally, we use the method of layer-wise relevance
propagation to gain insight into how the ANNs are learning and discover that
they naturally select only a few gridpoints that are subject to strong winds
and rain to make their predictions of chosen parameters.
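The abstract describes training neural networks to map observations or analyses of the model state to values of tunable parameters, together with uncertainty estimates. The sketch below is an illustrative, minimal example of that idea, not the authors' code: it uses Monte Carlo dropout as one common Bayesian approximation of a point-estimate network, and the grid size, the choice of three stacked fields (height, wind, rain), the number of estimated parameters, and the synthetic training pairs are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code): a point-estimate network with
# Monte Carlo dropout used as a cheap Bayesian approximation, mapping a
# gridded analysis of a modified shallow-water state (height h, wind u,
# rain r on a 1-D grid) to a few tunable model parameters.
# All sizes, names and the synthetic data below are illustrative assumptions.

import torch
import torch.nn as nn

N_GRID = 250      # assumed number of gridpoints in the 1-D domain
N_FIELDS = 3      # h, u, r stacked per gridpoint
N_PARAMS = 2      # e.g. two tunable microphysics-like parameters (illustrative)


class ParamNet(nn.Module):
    """Dense network; dropout is kept active at prediction time for MC sampling."""

    def __init__(self, hidden=128, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_GRID * N_FIELDS, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, N_PARAMS),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, x, n_samples=100):
    """Predictive mean and spread over stochastic forward passes with dropout on."""
    model.train()                      # keep dropout stochastic
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)


if __name__ == "__main__":
    # Synthetic stand-in for (state, true parameters) pairs; in the paper's
    # setting these would come from perfect-model experiments.
    x_train = torch.randn(512, N_GRID * N_FIELDS)
    y_train = torch.rand(512, N_PARAMS)

    model = ParamNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(50):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()

    mean, std = mc_dropout_predict(model, x_train[:1])
    print("estimated parameters:", mean, "+/-", std)
```

In the setup described by the abstract, the predictive spread from the stochastic forward passes plays the role of the parameter uncertainty that a full Bayesian neural network would provide, and the estimated parameters would then be fed back into the model alongside data assimilation for the state.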
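The abstract also mentions layer-wise relevance propagation (LRP) as the tool used to inspect which gridpoints the networks rely on. The snippet below is a generic epsilon-rule LRP pass over a toy dense ReLU network with random weights, intended only to illustrate how an output value can be redistributed onto input gridpoints; the network size and weights are placeholders, not anything taken from the paper.

```python
# Illustrative epsilon-rule LRP on a toy dense ReLU network (random weights).
# None of the sizes or values below come from the paper; the point is only to
# show how an output value is redistributed onto the input "gridpoints".

import numpy as np

rng = np.random.default_rng(0)

# Toy network: 20 input gridpoints -> 16 hidden ReLU units -> 1 output parameter.
W = [rng.normal(size=(20, 16)), rng.normal(size=(16, 1))]
b = [np.zeros(16), np.zeros(1)]
x = rng.normal(size=20)

# Forward pass, keeping the activations of every layer.
activations = [x]
for layer, (Wl, bl) in enumerate(zip(W, b)):
    z = activations[-1] @ Wl + bl
    activations.append(np.maximum(z, 0.0) if layer < len(W) - 1 else z)

# Backward relevance pass (epsilon rule):
#   R_j = sum_k  a_j W_jk / (z_k + eps * sign(z_k)) * R_k,   z_k = sum_j a_j W_jk + b_k
eps = 1e-6
relevance = activations[-1].copy()            # start from the network output
for layer in reversed(range(len(W))):
    a = activations[layer]
    z = a @ W[layer] + b[layer]
    z = z + eps * np.sign(z)
    s = relevance / z                         # share of relevance per unit of z
    relevance = a * (W[layer] @ s)            # relevance arriving at the layer below

print("relevance per input gridpoint:", np.round(relevance, 3))
print("sum of relevances vs. output:", relevance.sum(), activations[-1].item())
```

Applied to a trained parameter-estimation network, per-gridpoint relevances of this kind are what would reveal the concentration on a few windy and rainy gridpoints reported in the abstract.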
Related papers
- Latent diffusion models for parameterization and data assimilation of facies-based geomodels [0.0]
Diffusion models are trained to generate new geological realizations from input fields characterized by random noise.
Latent diffusion models are shown to provide realizations that are visually consistent with samples from geomodeling software.
arXiv Detail & Related papers (2024-06-21T01:32:03Z)
- Efficient modeling of sub-kilometer surface wind with Gaussian processes and neural networks [0.0]
Wind represents a particularly challenging variable to model due to its high spatial and temporal variability.
This paper presents a novel approach that integrates Gaussian processes (GPs) and neural networks to model surface wind gusts.
We discuss the effect of different modeling choices, as well as different degrees of approximation, and present our results for a case study.
arXiv Detail & Related papers (2024-05-21T09:07:47Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem, where we seek to estimate parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and they are capable of predicting parameters accurately for the framework.
arXiv Detail & Related papers (2020-12-12T01:20:42Z)
- Stochastic analysis of heterogeneous porous material with modified neural architecture search (NAS) based physics-informed neural networks using transfer learning [0.0]
A modified neural architecture search (NAS) based physics-informed deep learning model is presented.
A three dimensional flow model is built to provide a benchmark to the simulation of groundwater flow in highly heterogeneous aquifers.
arXiv Detail & Related papers (2020-10-03T19:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.