A DeepONet for inverting the Neumann-to-Dirichlet Operator in Electrical Impedance Tomography: An approximation theoretic perspective and numerical results
- URL: http://arxiv.org/abs/2407.17182v3
- Date: Tue, 15 Apr 2025 02:32:00 GMT
- Title: A DeepONet for inverting the Neumann-to-Dirichlet Operator in Electrical Impedance Tomography: An approximation theoretic perspective and numerical results
- Authors: Anuj Abhishek, Thilo Strauss
- Abstract summary: In this work, we consider the non-invasive medical imaging modality of Electrical Impedance Tomography. The problem is to recover the conductivity in a medium from a set of data that arises out of a current-to-voltage map. We formulate this inverse problem as an operator-learning problem where the goal is to learn the implicitly defined operator-to-function map.
- Score: 2.209921757303168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we consider the non-invasive medical imaging modality of Electrical Impedance Tomography, where the problem is to recover the conductivity in a medium from a set of data that arises out of a current-to-voltage map (Neumann-to-Dirichlet operator) defined on the boundary of the medium. We formulate this inverse problem as an operator-learning problem where the goal is to learn the implicitly defined operator-to-function map between the space of Neumann-to-Dirichlet operators and the space of admissible conductivities. Subsequently, we use an operator-learning architecture, popularly called a DeepONet, to learn this operator-to-function map. Thus far, most operator-learning architectures have been implemented to learn operators between function spaces. In this work, we generalize the earlier works and use a DeepONet to learn an operator-to-function map. We provide a Universal Approximation Theorem type result which guarantees that this implicitly defined operator-to-function map between the space of Neumann-to-Dirichlet operators and the space of conductivity functions can be approximated to an arbitrary degree of accuracy using such a DeepONet. Furthermore, we provide a computational implementation of our proposed approach and compare it against a standard baseline. We show that the proposed approach achieves good reconstructions and outperforms the baseline method in our experiments.
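In this setup, the DeepONet's branch network encodes the (discretized) Neumann-to-Dirichlet operator and its trunk network encodes a spatial query point, with the conductivity value at that point recovered as the inner product of the two encodings. The sketch below illustrates this operator-to-function architecture in PyTorch; the layer sizes, the flattening of the NtD matrix into the branch input, and all class and parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal DeepONet sketch for an operator-to-function map: a discretized
# Neumann-to-Dirichlet (NtD) operator goes into the branch net, a query
# point x goes into the trunk net, and the output approximates sigma(x).
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, ntd_size: int, latent_dim: int = 64):
        super().__init__()
        # Branch net: encodes the flattened n x n NtD operator matrix.
        self.branch = nn.Sequential(
            nn.Linear(ntd_size * ntd_size, 128), nn.Tanh(),
            nn.Linear(128, latent_dim),
        )
        # Trunk net: encodes a 2-D spatial query point in the medium.
        self.trunk = nn.Sequential(
            nn.Linear(2, 128), nn.Tanh(),
            nn.Linear(128, latent_dim), nn.Tanh(),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, ntd: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # ntd: (batch, n, n) discretized NtD operators
        # x:   (batch, m, 2) query points -> (batch, m) conductivity values
        b = self.branch(ntd.flatten(start_dim=1))  # (batch, latent_dim)
        t = self.trunk(x)                          # (batch, m, latent_dim)
        return torch.einsum("bl,bml->bm", b, t) + self.bias

model = DeepONet(ntd_size=16)
sigma = model(torch.randn(4, 16, 16), torch.rand(4, 100, 2))  # (4, 100)
```

The inner-product output structure is the standard DeepONet ansatz; what changes here relative to function-to-function operator learning is only that the branch input is a discretized operator rather than sampled function values.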
Related papers
- Geometric Laplace Neural Operator [12.869633759181417]
We propose a generalized operator learning framework based on a pole-residue decomposition enriched with exponential basis functions. We introduce the Geometric Laplace Neural Operator (GLNO), which embeds the Laplace spectral representation into the eigen-basis of the Laplace-Beltrami operator. We further design a grid-invariant network architecture (GLNONet) that realizes GLNO in practice.
arXiv Detail & Related papers (2025-12-18T11:07:41Z) - Extension and neural operator approximation of the electrical impedance tomography inverse map [15.065275438824008]
This paper considers the problem of noise-robust neural operator approximation for the solution map of Calderón's inverse conductivity problem. The boundary measurements are realized as a noisy perturbation of the Neumann-to-Dirichlet map's integral kernel. The resulting extension shares the same stability properties as the original inverse map from kernels to conductivities, but is now amenable to neural operator approximation.
arXiv Detail & Related papers (2025-11-25T14:43:13Z) - Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems [51.54065545849027]
The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems. VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. We propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator.
arXiv Detail & Related papers (2025-07-09T18:55:48Z) - Nonparametric Sparse Online Learning of the Koopman Operator [11.710740395697128]
The Koopman operator provides a powerful framework for representing the dynamics of general nonlinear dynamical systems.
Data-driven techniques to learn the Koopman operator typically assume that the chosen function space is closed under system dynamics.
We present an operator approximation algorithm to learn the Koopman operator iteratively with control over the complexity of the representation.
arXiv Detail & Related papers (2025-01-27T20:48:10Z) - Operator Learning of Lipschitz Operators: An Information-Theoretic Perspective [2.375038919274297]
This work addresses the complexity of neural operator approximations for the general class of Lipschitz continuous operators.
Our main contribution establishes lower bounds on the metric entropy of Lipschitz operators in two approximation settings.
It is shown that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy $\epsilon$ must have a size that is exponentially large in $\epsilon^{-1}$.
arXiv Detail & Related papers (2024-06-26T23:36:46Z) - DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z) - Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
arXiv Detail & Related papers (2024-02-26T18:59:31Z) - Operator Learning: Algorithms and Analysis [8.305111048568737]
Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
arXiv Detail & Related papers (2024-02-24T04:40:27Z) - Parameterized Projected Bellman Operator [64.129598593852]
Approximate value iteration (AVI) is a family of algorithms for reinforcement learning (RL). We propose a novel alternative approach based on learning an approximate version of the Bellman operator, the projected Bellman operator (PBO). We formulate an optimization problem to learn the PBO for generic sequential decision-making problems.
arXiv Detail & Related papers (2023-12-20T09:33:16Z) - The Parametric Complexity of Operator Learning [6.800286371280922]
This paper aims to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity".
The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation.
A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
arXiv Detail & Related papers (2023-06-28T05:02:03Z) - Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems.
arXiv Detail & Related papers (2022-09-06T18:55:06Z) - Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces [52.35063796758121]
We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system.
We link the risk with the estimation of the spectral decomposition of the Koopman operator.
Our results suggest RRR might be beneficial over other widely used estimators.
arXiv Detail & Related papers (2022-05-27T14:57:48Z) - Improved architectures and training algorithms for deep operator networks [0.0]
Operator learning techniques have emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces.
We analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory.
arXiv Detail & Related papers (2021-10-04T18:34:41Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - Kernel-based approximation of the Koopman generator and Schrödinger operator [0.3093890460224435]
We show how eigenfunctions can be estimated by solving auxiliary matrix eigenvalue problems.
The resulting algorithms are applied to molecular dynamics and quantum chemistry examples.
arXiv Detail & Related papers (2020-05-27T08:23:29Z) - Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
Operational Neural Networks (ONNs) are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
arXiv Detail & Related papers (2020-04-24T14:37:56Z) - Differentiable Top-k Operator with Optimal Transport [135.36099648554054]
The SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem.
We apply the proposed operator to the k-nearest neighbors and beam search algorithms, and demonstrate improved performance.
arXiv Detail & Related papers (2020-02-16T04:57:52Z)