E2N: Error Estimation Networks for Goal-Oriented Mesh Adaptation
- URL: http://arxiv.org/abs/2207.11233v1
- Date: Fri, 22 Jul 2022 17:41:37 GMT
- Title: E2N: Error Estimation Networks for Goal-Oriented Mesh Adaptation
- Authors: Joseph G. Wallwork, Jingyi Lu, Mingrui Zhang and Matthew D. Piggott
- Abstract summary: We develop a "data-driven" goal-oriented mesh adaptation approach with an appropriately configured and trained neural network.
An element-by-element construction is employed here, whereby local values of various parameters related to the mesh geometry are taken as inputs.
We demonstrate that this approach is able to obtain the same accuracy with a reduced computational cost.
- Score: 6.132664589282657
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Given a partial differential equation (PDE), goal-oriented error estimation
allows us to understand how errors in a diagnostic quantity of interest (QoI),
or goal, occur and accumulate in a numerical approximation, for example using
the finite element method. By decomposing the error estimates into
contributions from individual elements, it is possible to formulate adaptation
methods, which modify the mesh with the objective of minimising the resulting
QoI error. However, the standard error estimate formulation involves the true
adjoint solution, which is unknown in practice. As such, it is common practice
to approximate it with an 'enriched' approximation (e.g. in a higher order
space or on a refined mesh). Doing so generally results in a significant
increase in computational cost, which can be a bottleneck compromising the
competitiveness of (goal-oriented) adaptive simulations. The central idea of
this paper is to develop a "data-driven" goal-oriented mesh adaptation approach
through the selective replacement of the expensive error estimation step with
an appropriately configured and trained neural network. In doing so, the error
estimator may be obtained without even constructing the enriched spaces. An
element-by-element construction is employed here, whereby local values of
various parameters related to the mesh geometry and underlying problem physics
are taken as inputs, and the corresponding contribution to the error estimator
is taken as output. We demonstrate that this approach is able to obtain the
same accuracy with a reduced computational cost, for adaptive mesh test cases
related to flow around tidal turbines, which interact via their downstream
wakes, and where the overall power output of the farm is taken as the QoI.
Moreover, we demonstrate that the element-by-element approach implies
reasonably low training costs.
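To make the idea concrete: in dual-weighted residual (DWR) error estimation, the QoI error is approximated as $J(u) - J(u_h) \approx \sum_K \eta_K$, where each element contribution $\eta_K$ weights a local residual by the adjoint solution. Below is a minimal sketch of the element-by-element network described in the abstract; the feature count, layer sizes, and names are illustrative assumptions, not taken from the paper:
```python
import torch
import torch.nn as nn

class ElementErrorNet(nn.Module):
    """Maps per-element mesh/physics features to an error indicator eta_K."""
    def __init__(self, n_features=12, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def forward(self, features):
        # features: (n_elements, n_features) -> (n_elements,) indicators
        return self.net(features).squeeze(-1)

model = ElementErrorNet()
element_features = torch.randn(1000, 12)  # placeholder element data
eta = model(element_features)             # per-element contributions
qoi_error_estimate = eta.sum()            # J(u) - J(u_h) ~ sum_K eta_K
```
Since the same small network is applied to every element, inference cost scales linearly with mesh size and no enriched adjoint solve is required.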
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
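A minimal sketch of the split-conformal recipe this line of work builds on, shown for scalar regression (the paper's contribution is the extension to vector-valued, jointly correlated scores); `model` and all names are illustrative:
```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction intervals with (1 - alpha) marginal coverage."""
    # Nonconformity scores on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    pred = model.predict(X_test)
    return pred - q, pred + q
```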
arXiv Detail & Related papers (2024-11-04T14:29:02Z)
- Physics-Driven AI Correction in Laser Absorption Sensing Quantification [2.403858349180771]
Laser absorption spectroscopy (LAS) quantification is a popular tool used in measuring temperature and concentration of gases.
Current ML-based solutions cannot guarantee the reliability of their measurements.
We propose a new framework, SPEC, to address this issue.
arXiv Detail & Related papers (2024-08-20T10:29:41Z)
- Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation [0.0]
A neural network architecture is presented to solve high-dimensional parameter-dependent partial differential equations (pPDEs).
It is constructed to map parameters of the model data to corresponding finite element solutions.
It outputs a coarse grid solution and a series of corrections, as produced in an adaptive finite element method (AFEM).
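A speculative sketch of such a multilevel architecture, with a shared trunk and one head per refinement level; the prolongation matrices `P` and all sizes are assumptions made for illustration:
```python
import torch
import torch.nn as nn

class MultilevelSolutionNet(nn.Module):
    """Parameters -> coarse FE solution plus AFEM-style corrections."""
    def __init__(self, n_params, level_sizes, width=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_params, width), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(width, m) for m in level_sizes)

    def forward(self, params, prolongations):
        # prolongations[k] maps level-k coefficients to level k + 1.
        h = self.trunk(params)
        u = self.heads[0](h)              # coarse-grid solution
        for head, P in zip(self.heads[1:], prolongations):
            u = u @ P.T + head(h)         # prolong, then add a correction
        return u
```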
arXiv Detail & Related papers (2024-03-19T11:34:40Z) - Adaptive operator learning for infinite-dimensional Bayesian inverse problems [7.716833952167609]
We develop an adaptive operator learning framework that can reduce modeling error gradually by forcing the surrogate to be accurate in local areas.
We present a rigorous convergence guarantee in the linear case using the UKI framework.
The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
arXiv Detail & Related papers (2023-10-27T01:50:33Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
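A simplified sketch of training with a bias penalty: the usual MSE loss plus a term penalising the average estimation error. The paper's constraint is evaluated per value of the unknown parameter; averaging over the whole batch, as here, is a simplification for illustration:
```python
import torch

def bias_constrained_loss(pred, target, lam=1.0):
    mse = ((pred - target) ** 2).mean()
    bias = (pred - target).mean(dim=0)    # average error per component
    return mse + lam * (bias ** 2).sum()  # penalise nonzero average error
```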
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
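The diagonal-plus-low-rank structure keeps Gaussian computations cheap, and PyTorch supports it directly; the attentive set encoder that would meta-learn the factors per task is elided in this sketch, and all values are placeholders:
```python
import torch
from torch.distributions import LowRankMultivariateNormal

dim, rank = 10, 2
mean = torch.zeros(dim)
U = torch.randn(dim, rank) * 0.1  # meta-learned low-rank factor (placeholder)
d = torch.ones(dim)               # meta-learned diagonal (placeholder)

# Sigma = diag(d) + U U^T, without ever forming the dense matrix.
predictive = LowRankMultivariateNormal(mean, cov_factor=U, cov_diag=d)
x = predictive.sample()
log_density = predictive.log_prob(x)
```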
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
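The underlying identity is that the Bayes error equals $\mathbb{E}_x[1 - \max_k P(k \mid x)]$, which becomes computable once exact class-conditional densities are available. A Monte Carlo sketch with 1-D Gaussians standing in for the paper's normalizing flows:
```python
import numpy as np
from scipy.stats import norm

priors = np.array([0.5, 0.5])
densities = [norm(-1.0, 1.0), norm(1.0, 1.0)]  # stand-ins for flow densities

rng = np.random.default_rng(0)
n = 100_000
ks = rng.choice(2, size=n, p=priors)               # sample class labels
xs = rng.normal(loc=np.where(ks == 0, -1.0, 1.0))  # sample x given k
joint = np.stack([p * d.pdf(xs) for p, d in zip(priors, densities)])
posterior = joint / joint.sum(axis=0)
bayes_error = (1.0 - posterior.max(axis=0)).mean() # ~0.159 for this mixture
```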
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Calibrated Adaptive Probabilistic ODE Solvers [31.442275669185626]
We introduce, discuss, and assess several probabilistically motivated ways to calibrate the uncertainty estimate.
We demonstrate the efficiency of the methodology by benchmarking against the classic, widely used Dormand-Prince 4/5 Runge-Kutta method.
arXiv Detail & Related papers (2020-12-15T10:48:55Z)
- Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
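The $\mathcal{O}(D^3)$ term is the $W^{-T}$ appearing in the gradient of $\log|\det W|$ for a $D \times D$ linear layer. A toy sketch of the self-normalizing substitution with manual gradients: the transpose of a learned approximate inverse $R$ stands in for the exact $W^{-T}$, while a reconstruction penalty keeps $R$ close to $W^{-1}$ (the log-cosh prior and constants are arbitrary illustrative choices):
```python
import numpy as np

rng = np.random.default_rng(0)
D, N, lr = 8, 256, 1e-2
W = np.eye(D) + 0.01 * rng.standard_normal((D, D))  # forward weight
R = np.eye(D) + 0.01 * rng.standard_normal((D, D))  # learned approx inverse

for _ in range(1000):
    X = rng.standard_normal((D, N))  # stand-in data batch
    Z = W @ X
    g = -np.tanh(Z)                  # score of a log-cosh latent prior
    # Likelihood gradient ascent with R^T substituted for the exact W^{-T}.
    W += lr * ((g @ X.T) / N + R.T)
    # Keep R an approximate inverse: descend on ||R W x - x||^2.
    err = R @ Z - X
    R -= lr * (err @ Z.T) / N
```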
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
- Relative gradient optimization of the Jacobian term in unsupervised deep learning [9.385902422987677]
Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning.
Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian.
We propose a new approach for exact training of such neural networks.
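A toy sketch of the relative-gradient trick for a single $D \times D$ layer: right-multiplying the Euclidean gradient by $W^T W$ turns the log-determinant contribution $W^{-T}$ into $W$ itself, so each update needs only matrix products and no inverse or determinant (the log-cosh prior and constants are arbitrary illustrative choices):
```python
import numpy as np

rng = np.random.default_rng(0)
D, N, lr = 8, 256, 1e-2
W = np.eye(D)

for _ in range(1000):
    X = rng.standard_normal((D, N))  # stand-in data batch
    Z = W @ X
    g = -np.tanh(Z)                  # score of a log-cosh latent prior
    # Relative gradient of the log-likelihood:
    #   ((g X^T)/N + W^{-T}) W^T W  =  (g Z^T / N) W + W
    W += lr * ((g @ Z.T) / N @ W + W)
```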
arXiv Detail & Related papers (2020-06-26T16:41:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.