Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks
- URL: http://arxiv.org/abs/2303.16454v3
- Date: Tue, 19 Dec 2023 14:27:21 GMT
- Title: Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks
- Authors: Bangti Jin and Xiyao Li and Qimeng Quan and Zhi Zhou
- Abstract summary: We develop a novel approach using deep neural networks to reconstruct the conductivity distribution in elliptic problems.
We provide a thorough analysis of the deep neural network approximations of the conductivity for both continuous and empirical losses.
- Score: 4.228167013618626
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this work we develop a novel approach using deep neural networks to
reconstruct the conductivity distribution in elliptic problems from one
measurement of the solution over the whole domain. The approach is based on a
mixed reformulation of the governing equation and utilizes the standard
least-squares objective, with deep neural networks as ansatz functions to
approximate the conductivity and flux simultaneously. We provide a thorough
analysis of the deep neural network approximations of the conductivity for both
continuous and empirical losses, including rigorous error estimates that are
explicit in terms of the noise level, various penalty parameters and neural
network architectural parameters (depth, width and parameter bound). We also
provide multiple numerical experiments in two- and multi-dimensions to
illustrate distinct features of the approach, e.g., excellent stability with
respect to data noise and capability of solving high-dimensional problems.
Related papers
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
arXiv Detail & Related papers (2024-10-25T11:54:00Z)
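A hedged sketch of the residual + boundary + convexity loss this summary describes, in the same PyTorch style as the sketch above; the simple Dirichlet-type boundary mismatch below is a stand-in for the paper's transport boundary condition, and all names are illustrative.

```python
# Sketch of a loss for the 2-D Monge-Ampere equation det(D^2 u) = f.
# The callables f and g_bc are assumed to return tensors of shape (N,) and
# (N, 1) respectively; this is not the paper's exact formulation.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def hessian(x):
    x = x.requires_grad_(True)
    g = torch.autograd.grad(u_net(x).sum(), x, create_graph=True)[0]
    rows = [torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0]
            for i in range(2)]
    return torch.stack(rows, dim=1)                      # (N, 2, 2)

def ma_loss(x_in, x_bc, f, g_bc):
    H = hessian(x_in)
    residual = (torch.det(H) - f(x_in)).pow(2).mean()    # PDE residual
    # convexity in 2-D: trace and determinant of D^2 u must be positive
    trace = H.diagonal(dim1=1, dim2=2).sum(1)
    convexity = torch.relu(-trace).mean() + torch.relu(-torch.det(H)).mean()
    boundary = (u_net(x_bc) - g_bc(x_bc)).pow(2).mean()  # boundary mismatch
    return residual + boundary + convexity
```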
- Mean-field neural networks: learning mappings on Wasserstein space [0.0]
We study the machine learning task for models with operators mapping between the Wasserstein space of probability measures and a space of functions.
Two classes of neural networks are proposed to learn so-called mean-field functions.
We present different algorithms relying on mean-field neural networks for solving time-dependent mean-field problems.
arXiv Detail & Related papers (2022-10-27T05:11:42Z)
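The paper's own constructions differ in detail, but a permutation-invariant network over samples is one simple way to realize a mapping whose input is a probability measure; the following DeepSets-style sketch is our illustration, not the paper's architecture.

```python
# Speculative sketch: a permutation-invariant network taking a probability
# measure represented by n samples and returning a scalar.
import torch
import torch.nn as nn

class MeasureNet(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 64))
        self.rho = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, samples):              # samples: (batch, n, d)
        # averaging over the sample axis makes the output depend only on the
        # empirical measure, not on the ordering of the samples
        return self.rho(self.phi(samples).mean(dim=1))

net = MeasureNet(d=1)
mu = torch.randn(8, 500, 1)                  # 8 empirical measures, 500 samples each
print(net(mu).shape)                         # torch.Size([8, 1])
```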
- Imaging Conductivity from Current Density Magnitude using Neural Networks [1.8692254863855962]
We develop a neural network-based reconstruction technique for imaging the conductivity from the magnitude of the internal current density.
The approach is observed to be remarkably robust to data noise.
arXiv Detail & Related papers (2022-04-05T18:31:03Z)
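For intuition, here is a rough loss consistent with this summary (our simplification, reusing mlp and sigma_net from the first sketch): networks for u and sigma are fit so that sigma * |grad u| matches the measured current density magnitude a while a source-free conductivity equation holds in strong form.

```python
# Rough sketch (illustrative): fit sigma and u to the measured magnitude
# a(x) = |sigma(x) grad u(x)|, enforcing div(sigma grad u) = 0.
u_net = mlp(1)       # reuses the mlp helper from the first sketch

def cdii_loss(x, a):                             # a: measured magnitudes, shape (N,)
    x = x.requires_grad_(True)
    grad_u = torch.autograd.grad(u_net(x).sum(), x, create_graph=True)[0]
    sigma = sigma_net(x).clamp(min=1e-2)
    data_fit = (sigma.squeeze(1) * grad_u.norm(dim=1) - a).pow(2).mean()
    q = sigma * grad_u                           # internal current density
    div_q = sum(torch.autograd.grad(q[:, i].sum(), x, create_graph=True)[0][:, i]
                for i in range(x.shape[1]))
    return data_fit + div_q.pow(2).mean()        # data fit + PDE residual
```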
- A scalable multi-step least squares method for network identification with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential Least Squares and Null Space Fitting method to deal with reduced-rank noise.
We provide a consistency proof that includes explicit data informativity conditions for the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
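As a rough illustration of the multi-fidelity composition (our deterministic skeleton; the paper additionally treats the networks in a Bayesian fashion to handle noisy data and quantify uncertainty): a low-fidelity surrogate is combined with linear and nonlinear correction terms.

```python
# Deterministic skeleton of a multi-fidelity composition; the Bayesian layer
# (posterior inference over the high-fidelity part) is omitted for brevity.
import torch
import torch.nn as nn

class MultiFidelityNet(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
        self.lin = nn.Linear(d + 1, 1)           # linear correction term
        self.nonlin = nn.Sequential(nn.Linear(d + 1, 32), nn.Tanh(),
                                    nn.Linear(32, 1))

    def forward(self, x):
        y_low = self.low(x)                      # cheap, plentiful-data surrogate
        xy = torch.cat([x, y_low], dim=1)
        return self.lin(xy) + self.nonlin(xy)    # corrected high-fidelity output
```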
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
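A rough sketch of the dynamic-depth mechanism described above (not the paper's exact architecture): unrolled recovery steps with a learned halting score, so inference stops once the score crosses a threshold; the names and the simple layer form are placeholders.

```python
# Illustrative dynamic-depth network: a learned halting score decides, per
# input, how many unrolled layers to execute at inference time.
import torch
import torch.nn as nn

class AdaptiveDepthNet(nn.Module):
    def __init__(self, n, max_layers=20, tau=0.9):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(n, n) for _ in range(max_layers))
        self.halt = nn.Linear(n, 1)              # per-layer halting score
        self.tau = tau

    def forward(self, y):
        x = y
        for layer in self.layers:
            x = torch.relu(layer(x))             # one unrolled recovery step
            if torch.sigmoid(self.halt(x)).mean() > self.tau:
                break                            # stop early: dynamic depth
        return x
```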
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
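To illustrate the min-max training pattern (a generic sketch of our own; the placeholder objective below is not the paper's SEM criterion), two networks play a zero-sum game trained by gradient descent-ascent, reusing the mlp helper and dimension d from the first sketch.

```python
# Generic gradient descent-ascent loop: f_net minimizes and g_net maximizes
# a placeholder objective of the quadratic "dual" form common in such methods.
import torch

f_net, g_net = mlp(1), mlp(1)
opt_f = torch.optim.Adam(f_net.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g_net.parameters(), lr=1e-3)

def game_value(x, y):
    g = g_net(x)
    # placeholder objective: E[g * (f(x) - y)] - E[g^2] / 2; maximizing over g
    # turns the squared residual into a two-player game
    return (g * (f_net(x) - y)).mean() - 0.5 * g.pow(2).mean()

for _ in range(1000):
    x, y = torch.rand(128, d), torch.rand(128, 1)
    opt_g.zero_grad(); (-game_value(x, y)).backward(); opt_g.step()  # ascent
    opt_f.zero_grad(); game_value(x, y).backward(); opt_f.step()     # descent
```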
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
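The core control-variate principle can be shown in one dimension (a toy example of our own, far from the paper's light-transport setting): learn g close to the integrand f, subtract it inside the Monte Carlo estimator, and add back its known (here numerically computed) integral.

```python
# Toy neural control variate in 1-D: train g to mimic f, then estimate
# E[f(X)] as E[f(X) - g(X)] + integral of g, reducing variance.
import torch
import torch.nn as nn

g_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(g_net.parameters(), lr=1e-3)
f = lambda x: torch.sin(3 * x) + x              # integrand on [0, 1]

for _ in range(2000):
    x = torch.rand(256, 1)
    opt.zero_grad()
    (f(x) - g_net(x)).pow(2).mean().backward()  # surrogate for residual variance
    opt.step()

with torch.no_grad():
    xs = torch.linspace(0, 1, 1001).unsqueeze(1)
    G = torch.trapezoid(g_net(xs).squeeze(), xs.squeeze())  # integral of g
    x = torch.rand(10000, 1)
    estimate = (f(x) - g_net(x)).mean() + G     # control-variate estimator
    print(float(estimate))                      # ~ integral of f over [0, 1]
```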
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.