Dirichlet-Neumann learning algorithm for solving elliptic interface problems
- URL: http://arxiv.org/abs/2301.07361v2
- Date: Wed, 17 May 2023 10:33:49 GMT
- Title: Dirichlet-Neumann learning algorithm for solving elliptic interface problems
- Authors: Qi Sun, Xuejun Xu, and Haotian Yi
- Abstract summary: A Dirichlet-Neumann learning algorithm is proposed in this work to solve the benchmark elliptic interface problem with high-contrast coefficients as well as irregular interfaces.
We carry out a rigorous error analysis to evaluate the discrepancy caused by the boundary penalty treatment for each subproblem.
The effectiveness and robustness of our proposed methods are demonstrated experimentally through a series of elliptic interface problems.
- Score: 7.935690079593201
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-overlapping domain decomposition methods are natural for solving
interface problems arising from various disciplines; however, their numerical
simulation requires technical analysis and is often feasible only with
high-quality grids, which impedes their use in more complicated situations. To
remove the burden of mesh generation and to effectively tackle the interface
jump conditions, a novel mesh-free scheme, the Dirichlet-Neumann learning
algorithm, is proposed in this work to solve the benchmark elliptic interface
problem with high-contrast coefficients as well as irregular interfaces. By
resorting to the variational principle, we carry out a rigorous error analysis
to evaluate the discrepancy caused by the boundary penalty treatment for each
decomposed subproblem, which paves the way for realizing the Dirichlet-Neumann
algorithm using neural network extension operators. The effectiveness and
robustness of our proposed methods are demonstrated experimentally on a series
of elliptic interface problems, achieving better performance than other
alternatives, especially in the presence of erroneous flux predictions at the
interface.
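To make the alternating structure concrete, below is a minimal, hypothetical PyTorch sketch of one Dirichlet-Neumann learning sweep on a 1-D two-subdomain model problem. It is not the authors' implementation: the network sizes, penalty weight beta, relaxation parameter theta, optimizer settings, and sampling scheme are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code) of one Dirichlet-Neumann
# learning sweep for -a(x) u''(x) = f(x) on (0, 1) with homogeneous Dirichlet
# boundary conditions, split at the interface x = 0.5. The architecture,
# penalty weight beta, relaxation theta, and sampling sizes are assumptions.
import torch

torch.manual_seed(0)

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

u1, u2 = mlp(), mlp()                  # one network per subdomain
a1, a2 = 1.0, 100.0                    # high-contrast piecewise coefficient
f = lambda x: torch.ones_like(x)       # right-hand side
gamma = torch.tensor([[0.5]])          # interface location
beta, theta = 100.0, 0.5               # penalty weight, relaxation parameter

def d(u, x):
    # first derivative of the network output u with respect to its input x
    return torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

def pde_loss(net, a, lo, hi, n=128):
    # mean-squared residual of -a u'' = f at random interior points
    x = (lo + (hi - lo) * torch.rand(n, 1)).requires_grad_(True)
    u = net(x)
    return (-a * d(d(u, x), x) - f(x)).pow(2).mean()

lam = torch.zeros(1, 1)                # current Dirichlet trace at gamma
for sweep in range(20):
    # Dirichlet subproblem on (0, 0.5): penalise u1(0) = 0 and u1(gamma) = lam
    opt = torch.optim.Adam(u1.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = (pde_loss(u1, a1, 0.0, 0.5)
                + beta * u1(torch.zeros(1, 1)).pow(2).mean()
                + beta * (u1(gamma) - lam).pow(2).mean())
        loss.backward()
        opt.step()
    # Neumann subproblem on (0.5, 1): match the flux a1*u1' passed across gamma
    xg = gamma.clone().requires_grad_(True)
    flux = (a1 * d(u1(xg), xg)).detach()
    opt = torch.optim.Adam(u2.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        xg = gamma.clone().requires_grad_(True)
        loss = (pde_loss(u2, a2, 0.5, 1.0)
                + beta * u2(torch.ones(1, 1)).pow(2).mean()
                + beta * (a2 * d(u2(xg), xg) - flux).pow(2).mean())
        loss.backward()
        opt.step()
    # relaxed update of the interface Dirichlet data for the next sweep
    lam = theta * u2(gamma).detach() + (1 - theta) * lam
```

The beta-weighted terms are the boundary-penalty treatment whose discrepancy the error analysis quantifies, the relaxed trace update is the classical Dirichlet-Neumann outer iteration, and the flux-matching term is where erroneous interface flux predictions would enter.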
Related papers
- An Iterative Deep Ritz Method for Monotone Elliptic Problems [0.29792392019703945]
We present a novel iterative deep Ritz method (IDRM) for solving a general class of elliptic problems.
The algorithm is applicable to elliptic problems involving a monotone operator.
We establish a convergence rate for the method using tools from the geometry of Banach spaces and the theory of monotone operators.
arXiv Detail & Related papers (2025-01-25T11:50:24Z)
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints (a hedged sketch of such a loss appears after this list).
arXiv Detail & Related papers (2024-10-25T11:54:00Z) - Physics-Informed Generator-Encoder Adversarial Networks with Latent
Space Matching for Stochastic Differential Equations [14.999611448900822]
We propose a new class of physics-informed neural networks to address the challenges posed by forward, inverse, and mixed problems in differential equations.
Our model consists of two key components: the generator and the encoder, both updated alternately by gradient descent.
In contrast to previous approaches, we employ an indirect matching that operates within the lower-dimensional latent feature space.
arXiv Detail & Related papers (2023-11-03T04:29:49Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight Bipartite Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- A hybrid neural-network and finite-difference method for solving Poisson equation with jump discontinuities on interfaces [0.0]
A new hybrid neural-network and finite-difference method is developed for solving the Poisson equation in a regular domain with jump discontinuities on an embedded irregular interface.
The two- and three-dimensional numerical results show that the present hybrid method preserves second-order accuracy for the solution and its derivatives.
arXiv Detail & Related papers (2022-10-11T15:15:09Z)
- Gradient Backpropagation Through Combinatorial Algorithms: Identity with Projection Works [20.324159725851235]
A meaningful replacement for the zero or undefined derivatives of combinatorial solvers is crucial for effective gradient-based learning.
We propose a principled approach that exploits the geometry of the discrete solution space to treat the solver as a negative identity on the backward pass (see the minimal autograd sketch after this list).
arXiv Detail & Related papers (2022-05-30T16:17:09Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Minimax Problems [80.46370778277186]
Large-scale convex-concave minimax problems arise in numerous applications, including game theory, robust training, and the training of generative adversarial networks.
We develop a communication-efficient distributed extragradient algorithm, LocalAdaSient, with an adaptive learning rate suitable for solving convex-concave minimax problems in the Parameter-Server model.
We demonstrate its efficacy through several experiments in both the homogeneous and heterogeneous settings.
arXiv Detail & Related papers (2021-06-18T09:42:05Z)
- Efficient Methods for Structured Nonconvex-Nonconcave Min-Max Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which provably converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional real vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)
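Two of the entries above describe mechanisms concrete enough to sketch. First, for the Monge-Ampère paper, a minimal, hypothetical PyTorch loss combining the equation's residual, a boundary term, and a convexity penalty. The transport boundary condition is more involved than the Dirichlet placeholder used here, and the weights w_bc and w_cvx are assumptions.

```python
# Hedged illustration of a residual + boundary + convexity penalty loss for
# det(D^2 u) = f, loosely following the Monge-Ampère entry above. The real
# transport boundary condition is more involved; the Dirichlet placeholder
# and the penalty weights w_bc, w_cvx are assumptions for illustration.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

def hessian(x):
    # batched 2x2 Hessian of the scalar network output at each sample point
    g = torch.autograd.grad(net(x), x, torch.ones(x.shape[0], 1),
                            create_graph=True)[0]
    rows = [torch.autograd.grad(g[:, i], x, torch.ones(x.shape[0]),
                                create_graph=True)[0] for i in range(2)]
    return torch.stack(rows, dim=1)                  # shape (n, 2, 2)

def loss(f, x_in, x_bc, g_bc, w_bc=10.0, w_cvx=10.0):
    x_in = x_in.requires_grad_(True)
    H = hessian(x_in)
    residual = torch.det(H) - f(x_in)                # Monge-Ampère residual
    # convexity penalty: push both Hessian eigenvalues to be nonnegative
    cvx = torch.relu(-torch.linalg.eigvalsh(H)).pow(2).mean()
    bc = (net(x_bc) - g_bc).pow(2).mean()            # placeholder boundary term
    return residual.pow(2).mean() + w_bc * bc + w_cvx * cvx

# purely illustrative usage on random interior and boundary samples
x_in = torch.rand(256, 2)
x_bc = torch.rand(64, 2)
x_bc[:, 0] = x_bc[:, 0].round()                      # snap to two opposite edges
val = loss(lambda x: torch.ones(x.shape[0]), x_in, x_bc, torch.zeros(64, 1))
```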
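Second, for the "Identity with Projection" entry, the backward-pass idea in a tiny hypothetical autograd wrapper; the one-hot argmin stands in for an arbitrary black-box combinatorial solver.

```python
# Minimal sketch of treating a black-box combinatorial solver as a negative
# identity on the backward pass, per the entry above; the one-hot argmin in
# forward() is a stand-in for any non-differentiable solver.
import torch

class NegativeIdentitySolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, cost):
        # non-differentiable solve: pick the cheapest option as a one-hot vector
        sol = torch.zeros_like(cost)
        sol[cost.argmin()] = 1.0
        return sol

    @staticmethod
    def backward(ctx, grad_output):
        # gradients of the solution are passed back as a negative identity
        return -grad_output

cost = torch.randn(5, requires_grad=True)
y = NegativeIdentitySolver.apply(cost)
y.sum().backward()          # cost.grad is now -1 for every entry
```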