Using Deep Learning to Improve Ensemble Smoother: Applications to
Subsurface Characterization
- URL: http://arxiv.org/abs/2002.09100v2
- Date: Tue, 20 Oct 2020 15:15:59 GMT
- Title: Using Deep Learning to Improve Ensemble Smoother: Applications to
Subsurface Characterization
- Authors: Jiangjiang Zhang, Qiang Zheng, Laosheng Wu, Lingzao Zeng
- Abstract summary: Ensemble smoother (ES) has been widely used in various research fields.
ES$_\text{(DL)}$ is an update scheme for ES in complex data assimilation applications.
We show that the DL-based ES method, that is, ES$_\text{(DL)}$, is more general and flexible.
- Score: 2.4373900721120285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble smoother (ES) has been widely used in various research fields to
reduce the uncertainty of the system of interest. However, the commonly adopted
ES method that employs the Kalman formula, that is, ES$_\text{(K)}$, does not
perform well when the probability distributions involved are non-Gaussian. To
address this issue, we suggest using deep learning (DL) to derive an
alternative update scheme for ES in complex data assimilation applications.
Here we show that the DL-based ES method, that is, ES$_\text{(DL)}$, is more
general and flexible. In this new update scheme, a high volume of training data
are generated from a relatively small-sized ensemble of model parameters and
simulation outputs, and possible non-Gaussian features can be preserved in the
training data and captured by an adequate DL model. This new variant of ES is
tested in two subsurface characterization problems with or without Gaussian
assumptions. Results indicate that ES$_\text{(DL)}$ can produce similar (in the
Gaussian case) or even better (in the non-Gaussian case) results compared to
those from ES$_\text{(K)}$. The success of ES$_\text{(DL)}$ comes from the
power of DL in extracting complex (including non-Gaussian) features and
learning nonlinear relationships from massive amounts of training data.
Although in this work we only apply the ES$_\text{(DL)}$ method in parameter
estimation problems, the proposed idea can be conveniently extended to analysis
of model structural uncertainty and state estimation in real-time forecasting
studies.
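For context, the Kalman-formula update that ES$_\text{(K)}$ applies to each ensemble member $i$ takes the standard ensemble-smoother form (a textbook expression reproduced here for reference, not quoted from the paper):

$$m_i^{a} = m_i^{f} + C_{MD}\left(C_{DD} + C_D\right)^{-1}\left(d_{\text{obs}} + \epsilon_i - d_i^{f}\right), \qquad \epsilon_i \sim \mathcal{N}(0, C_D),$$

where $m_i^{f}$ and $m_i^{a}$ are the forecast and updated parameter vectors, $d_i^{f}$ are the simulated observations of member $i$, $C_{MD}$ and $C_{DD}$ are the ensemble cross- and auto-covariances, and $C_D$ is the observation-error covariance. Because every term is a first- or second-moment estimate, strongly non-Gaussian distributions are exactly the regime where this update degrades.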
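A minimal sketch of the DL-based alternative, assuming a toy forward model and a member-pairing scheme for generating training data (the pairing, network size, and all names below are illustrative assumptions, not the authors' implementation):

```python
# Sketch of a DL-based ensemble-smoother update (ES_(DL)).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_ens, n_par, n_obs = 50, 5, 3

A = rng.normal(size=(n_par, n_obs))
def forward(m):
    # Toy nonlinear simulator standing in for the subsurface model.
    return np.tanh(m @ A)

M = rng.normal(size=(n_ens, n_par))   # small prior ensemble of parameters
D = forward(M)                        # corresponding simulation outputs
C_D = 0.05**2 * np.eye(n_obs)         # observation-error covariance

# Harvest a high volume of training pairs from the small ensemble:
# learn how a member's parameters should move to explain another
# member's (perturbed) output. Non-Gaussian structure in the ensemble
# is preserved in these pairs.
X_train, y_train = [], []
for i in range(n_ens):
    for j in range(n_ens):
        if i != j:
            d_j = D[j] + rng.multivariate_normal(np.zeros(n_obs), C_D)
            X_train.append(np.concatenate([M[i], d_j - D[i]]))
            y_train.append(M[j])

# A small MLP replaces the Kalman-gain mapping of ES_(K).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(np.array(X_train), np.array(y_train))

# Update step: push every member toward the actual observations d_obs.
m_true = rng.normal(size=n_par)
d_obs = forward(m_true[None, :])[0]
M_new = net.predict(np.hstack([M, d_obs - D]))
print("prior RMSE  :", np.sqrt(np.mean((M - m_true) ** 2)))
print("updated RMSE:", np.sqrt(np.mean((M_new - m_true) ** 2)))
```

Note how $N$ members yield $N(N-1)$ training pairs, which is the abstract's point about generating a high volume of training data from a relatively small ensemble.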
Related papers
- Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis [55.561961365113554]
3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness for novel view synthesis (NVS).
However, the 3DGS model tends to overfit when trained with sparse posed views, limiting its generalization ability to novel views.
We present a Self-Ensembling Gaussian Splatting (SE-GS) approach to alleviate the overfitting problem.
Our approach improves NVS quality with few-shot training views, outperforming existing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-31T18:43:48Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - A Specialized Semismooth Newton Method for Kernel-Based Optimal
Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z) - Easy Differentially Private Linear Regression [16.325734286930764]
We study an algorithm which uses the exponential mechanism to select a model with high Tukey depth from a collection of non-private regression models.
We find that this algorithm obtains strong empirical performance in the data-rich setting.
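A hedged sketch of the mechanism just described: fit ordinary least squares on disjoint data splits to obtain the collection of non-private models, score candidates with a crude coordinate-wise proxy for Tukey depth, and select one via the exponential mechanism. The depth proxy, the sensitivity scaling, and all names are illustrative simplifications, not the paper's algorithm:

```python
# Exponential-mechanism selection of a deep (central) regression model.
import numpy as np

rng = np.random.default_rng(1)

def candidate_models(X, y, m):
    """OLS fits on m disjoint splits of the data (the non-private models)."""
    return np.array([np.linalg.lstsq(Xs, ys, rcond=None)[0]
                     for Xs, ys in zip(np.array_split(X, m),
                                       np.array_split(y, m))])

def approx_tukey_depth(theta, models):
    # Coordinate-wise stand-in for Tukey depth (exact depth is expensive):
    # the worst-case one-sided count over coordinates.
    below = (models <= theta).sum(axis=0)
    above = (models >= theta).sum(axis=0)
    return np.minimum(below, above).min()

def select(models, eps):
    depths = np.array([approx_tukey_depth(t, models) for t in models])
    w = np.exp(eps * depths / 2.0)   # illustrative sensitivity scaling
    return models[rng.choice(len(models), p=w / w.sum())]

X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)
print(select(candidate_models(X, y, m=20), eps=1.0))
```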
arXiv Detail & Related papers (2022-08-15T17:42:27Z) - uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
arXiv Detail & Related papers (2022-05-23T20:20:27Z) - Learning Mixtures of Linear Dynamical Systems [94.49754087817931]
We develop a two-stage meta-algorithm to efficiently recover each ground-truth LDS model up to error $\tilde{O}(\sqrt{d/T})$.
We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm.
arXiv Detail & Related papers (2022-01-26T22:26:01Z) - Near-optimal Offline and Streaming Algorithms for Learning Non-Linear
Dynamical Systems [45.17023170054112]
We consider the setting of vector-valued non-linear dynamical systems $X_{t+1} = \phi(A^* X_t) + \eta_t$, where $\eta_t$ is unbiased noise and $\phi : \mathbb{R} \to \mathbb{R}$ is a known link function that satisfies a certain expansivity property.
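A minimal simulation of this model class, with a leaky-ReLU link standing in for $\phi$ and a naive inverse-link least-squares fit as a baseline (both are illustrative choices, not the paper's offline or streaming algorithms):

```python
# Simulate X_{t+1} = phi(A* X_t) + eta_t and recover A* naively.
import numpy as np

rng = np.random.default_rng(2)
d, T = 4, 500
A_star = 0.9 * np.linalg.qr(rng.normal(size=(d, d)))[0]  # stable: norm 0.9

def phi(z):        # leaky-ReLU link, applied elementwise
    return np.where(z >= 0, z, 0.1 * z)

def phi_inv(y):    # its exact inverse
    return np.where(y >= 0, y, 10.0 * y)

X = np.zeros((T + 1, d))
X[0] = rng.normal(size=d)
for t in range(T):
    X[t + 1] = phi(A_star @ X[t]) + 0.01 * rng.normal(size=d)  # unbiased noise

# Naive baseline: invert the link (ignoring the noise) and solve
# least squares; lstsq returns M with X[:-1] @ M ~ phi_inv(X[1:]), so M.T ~ A*.
M, *_ = np.linalg.lstsq(X[:-1], phi_inv(X[1:]), rcond=None)
print("relative error:", np.linalg.norm(M.T - A_star) / np.linalg.norm(A_star))
```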
arXiv Detail & Related papers (2021-05-24T22:14:26Z) - Learning to extrapolate using continued fractions: Predicting the
critical temperature of superconductor materials [5.905364646955811]
In the field of Artificial Intelligence (AI) and Machine Learning (ML), the approximation of unknown target functions $y = f(\mathbf{x})$ is a common objective.
We refer to $S$ as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances $\mathbf{x}$.
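As a concrete instance of the model class, a finite continued-fraction regressor $g(\mathbf{x}) = a_0(\mathbf{x}) + b_1(\mathbf{x})/(a_1(\mathbf{x}) + b_2(\mathbf{x})/(a_2(\mathbf{x}) + \cdots))$ is evaluated from the innermost term outward; the affine coefficient terms below are hypothetical stand-ins for whatever the method learns from data:

```python
# Evaluate a finite continued fraction with affine coefficient terms.
import numpy as np

def continued_fraction(x, a_terms, b_terms):
    """a_terms: (w, c) pairs defining a_k(x) = w @ x + c; likewise b_terms."""
    w, c = a_terms[-1]
    val = w @ x + c                                   # innermost term a_n(x)
    for (wa, ca), (wb, cb) in zip(reversed(a_terms[:-1]), reversed(b_terms)):
        val = (wa @ x + ca) + (wb @ x + cb) / val     # a_k(x) + b_{k+1}(x)/val
    return val

x = np.array([0.5, -1.0])
a = [(np.array([1.0, 0.0]), 0.2),
     (np.array([0.0, 1.0]), 1.5),
     (np.array([0.3, 0.3]), 2.0)]
b = [(np.array([0.1, 0.1]), 1.0),
     (np.array([0.2, -0.1]), 0.5)]
print(continued_fraction(x, a, b))
```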
arXiv Detail & Related papers (2020-11-27T04:57:40Z) - Estimating Stochastic Linear Combination of Non-linear Regressions
Efficiently and Scalably [23.372021234032363]
We show that when the sub-sample sizes are large enough, the estimation errors will not be sacrificed by too much.
To the best of our knowledge, this is the first work that provides guarantees for the stochastic linear combination of non-linear regressions model.
arXiv Detail & Related papers (2020-10-19T07:15:38Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
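A generic multiplicative-weights (Hedge) sketch in the spirit of that update style, run online over signed single-feature experts; this illustrates the mechanism only and is not the Klivans-Meka algorithm or its $O(mp^2)$ adaptation:

```python
# Online multiplicative-weights (Hedge) over signed-feature experts.
import numpy as np

rng = np.random.default_rng(3)
m, p, eta = 2000, 8, 0.05
X = rng.normal(size=(m, p))
y = 0.7 * X[:, 0] - 0.4 * X[:, 2] + 0.1 * rng.normal(size=m)

w = np.ones(2 * p)                  # one expert per signed feature +/- x_j
for t in range(m):
    preds = np.concatenate([X[t], -X[t]])          # each expert's prediction
    losses = np.clip((preds - y[t]) ** 2, 0.0, 4.0)
    w *= np.exp(-eta * losses)                     # Hedge update
    w /= w.sum()

best = int(np.argmax(w))
sign, j = ("+", best) if best < p else ("-", best - p)
print(f"heaviest expert: {sign}x_{j} (weight {w[best]:.3f})")
```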
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.