CPFI-EIT: A CNN-PINN Framework for Full-Inverse Electrical Impedance Tomography on Non-Smooth Conductivity Distributions
- URL: http://arxiv.org/abs/2412.17827v1
- Date: Tue, 10 Dec 2024 10:48:43 GMT
- Title: CPFI-EIT: A CNN-PINN Framework for Full-Inverse Electrical Impedance Tomography on Non-Smooth Conductivity Distributions
- Authors: Yang Xuanxuan, Zhang Yangming, Chen Haofeng, Ma Gang, Wang Xiaojie
- Abstract summary: We introduce a hybrid learning framework that combines convolutional neural networks (CNNs) and physics-informed neural networks (PINNs).
EIT is a noninvasive imaging technique that reconstructs the spatial distribution of internal conductivity based on boundary voltage measurements from injected currents.
- Abstract: This paper introduces a hybrid learning framework that combines convolutional neural networks (CNNs) and physics-informed neural networks (PINNs) to address the challenging problem of full-inverse electrical impedance tomography (EIT). EIT is a noninvasive imaging technique that reconstructs the spatial distribution of internal conductivity based on boundary voltage measurements from injected currents. This method has applications across medical imaging, multiphase flow detection, and tactile sensing. However, solving EIT involves a nonlinear partial differential equation (PDE) derived from Maxwell's equations, posing significant computational challenges as an ill-posed inverse problem. Existing PINN approaches primarily address semi-inverse EIT, assuming full access to internal potential data, which limits practical applications in realistic, full-inverse scenarios. Our framework employs a forward CNN-based supervised network to map differential boundary voltage measurements to a discrete potential distribution under fixed Neumann boundary conditions, while an inverse PINN-based unsupervised network enforces PDE constraints for conductivity reconstruction. Instead of traditional automatic differentiation, we introduce discrete numerical differentiation to bridge the forward and inverse networks, effectively decoupling them, enhancing modularity, and reducing computational demands. We validate our framework under realistic conditions, using a 16-electrode setup and rigorous testing on complex conductivity distributions with sharp boundaries, without Gaussian smoothing. This approach demonstrates robust flexibility and improved applicability in full-inverse EIT, establishing a practical solution for real-world imaging challenges.
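The physical constraint at the heart of the abstract is the elliptic PDE governing EIT, div(sigma * grad u) = 0 under Neumann boundary conditions, which the paper enforces via discrete numerical differentiation rather than automatic differentiation. The following is a minimal illustrative sketch of that idea under our own assumptions (a uniform 2D grid and central differences via `np.gradient`); it is not the authors' implementation:

```python
import numpy as np

def eit_pde_residual(sigma, u, h=1.0):
    """Discrete residual of the EIT governing equation div(sigma * grad u) = 0
    on a uniform 2D grid with spacing h, using finite differences.
    A PINN-style loss would penalize the mean square of this residual."""
    # Gradient of the potential u (np.gradient returns axis-0 then axis-1)
    du_dy, du_dx = np.gradient(u, h)
    # Current-density components J = sigma * grad(u)
    jx = sigma * du_dx
    jy = sigma * du_dy
    # Divergence of the flux: d(jx)/dx + d(jy)/dy
    _, djx_dx = np.gradient(jx, h)
    djy_dy, _ = np.gradient(jy, h)
    return djx_dx + djy_dy
```

For a constant conductivity and a linear potential (which solves the PDE exactly), the residual vanishes, which makes the routine easy to sanity-check before wiring it into a training loss.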
Related papers
- Coupled Integral PINN for conservation law [1.9720482348156743]
The Physics-Informed Neural Network (PINN) is an innovative approach for solving a diverse array of partial differential equations.
This paper introduces a novel Coupled Integral PINN methodology that fits integral forms of the governing equations using additional neural networks.
arXiv Detail & Related papers (2024-11-18T04:32:42Z)
- A Two-Stage Imaging Framework Combining CNN and Physics-Informed Neural Networks for Full-Inverse Tomography: A Case Study in Electrical Impedance Tomography (EIT) [5.772638266457322]
We propose a two-stage hybrid learning framework combining Convolutional Neural Networks (CNNs) and Physics-Informed Neural Networks (PINNs).
This framework integrates data-driven and model-driven approaches, combines supervised and unsupervised learning, and decouples the forward and inverse problems within the PINN framework in EIT.
arXiv Detail & Related papers (2024-07-25T02:48:22Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We develop an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs) [0.0]
This paper explores the difficulties in solving partial differential equations (PDEs) using physics-informed neural networks (PINNs).
PINNs use physics as a regularization term in the objective function; however, this approach is impractical in the absence of data or prior knowledge of the solution.
Our findings demonstrate that high-order PDEs contaminate backpropagated gradients and hinder convergence.
arXiv Detail & Related papers (2022-09-20T20:46:07Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Physics-informed attention-based neural network for solving non-linear partial differential equations [6.103365780339364]
Physics-Informed Neural Networks (PINNs) have enabled significant improvements in modelling physical processes.
PINNs are based on simple architectures, and learn the behavior of complex physical systems by optimizing the network parameters to minimize the residual of the underlying PDE.
Here, we address the question of which network architectures are best suited to learn the complex behavior of non-linear PDEs.
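The PINN recipe summarised here, optimising network parameters so that the residual of the underlying PDE is minimised at collocation points, can be illustrated on a toy 1D problem. The sketch below is our own minimal example, not code from any of the cited papers: a one-hidden-layer network in pure NumPy, a finite-difference second derivative, and crude numerical parameter gradients standing in for automatic differentiation; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_net(params, x):
    """One-hidden-layer tanh network; the x*(1-x) factor hard-enforces
    the boundary conditions u(0) = u(1) = 0."""
    w1, b1, w2 = params
    hidden = np.tanh(np.outer(x, w1) + b1)   # (n, width)
    return x * (1.0 - x) * (hidden @ w2)     # (n,)

def residual_loss(params, x, eps=1e-3):
    """Mean squared residual of the toy PDE u''(x) + 2 = 0 (exact solution
    u(x) = x*(1-x)); u'' is estimated by a central finite difference."""
    upp = (u_net(params, x + eps) - 2.0 * u_net(params, x)
           + u_net(params, x - eps)) / eps**2
    return float(np.mean((upp + 2.0) ** 2))

def train_pinn(steps=120, lr=0.02, width=6):
    """Gradient descent on the PDE residual alone -- no solution data."""
    params = [rng.normal(0.0, 1.0, width),   # hidden weights
              rng.normal(0.0, 1.0, width),   # hidden biases
              rng.normal(0.0, 0.1, width)]   # output weights
    x = np.linspace(0.05, 0.95, 32)          # interior collocation points
    losses = []
    for _ in range(steps):
        losses.append(residual_loss(params, x))
        grads = []
        for i in range(len(params)):
            g = np.zeros_like(params[i])
            for j in range(params[i].size):
                # forward-difference gradient of the loss in parameter j
                bumped = [p.copy() for p in params]
                bumped[i].flat[j] += 1e-6
                g.flat[j] = (residual_loss(bumped, x) - losses[-1]) / 1e-6
            grads.append(g)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params, losses
```

Running `train_pinn()` drives the residual loss down from its random-initialisation value; a real PINN would replace the numerical parameter gradients with backpropagation and typically use a deeper network.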
arXiv Detail & Related papers (2021-05-17T14:29:08Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- A nonlocal physics-informed deep learning framework using the peridynamic differential operator [0.0]
We develop a nonlocal PINN approach using the Peridynamic Differential Operator (PDDO), a numerical method which incorporates long-range interactions and removes spatial derivatives from the governing equations.
Because the PDDO functions can be readily incorporated in the neural network architecture, the nonlocality does not degrade the performance of modern deep-learning algorithms.
We document the superior behavior of nonlocal PINN with respect to local PINN in both solution accuracy and parameter inference.
arXiv Detail & Related papers (2020-05-31T06:26:21Z)
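The nonlocal idea described in the entry above, replacing pointwise spatial derivatives with integrals over a finite horizon, can be illustrated with a simple 1D operator. The sketch below is a generic integral-based derivative approximation in the peridynamic spirit, under our own assumptions (uniform grid, trapezoidal quadrature); it is not the PDDO kernel from the paper:

```python
import numpy as np

def nonlocal_derivative(f, x, delta):
    """Approximate f'(x) at interior points with the nonlocal operator
    f'(x) ~ (3 / (2*R^3)) * integral over s in [-R, R] of (f(x+s) - f(x)) * s ds,
    where R is the horizon radius. A Taylor expansion of f(x+s) shows the
    leading error is O(R^2); long-range interactions replace the derivative."""
    h = x[1] - x[0]                  # uniform grid spacing
    m = int(round(delta / h))        # horizon radius in grid points
    R = m * h                        # effective horizon radius
    s = np.arange(-m, m + 1) * h     # offsets within the horizon
    w = np.full(s.size, h)           # trapezoidal quadrature weights
    w[0] = w[-1] = h / 2.0
    out = np.full_like(f, np.nan)    # undefined within one horizon of the edges
    for i in range(m, f.size - m):
        window = f[i - m:i + m + 1] - f[i]
        out[i] = 3.0 / (2.0 * R**3) * np.sum(window * s * w)
    return out
```

For f = sin on a fine grid with a horizon of 0.1, the interior output closely tracks cos, and because no pointwise derivative is taken, the operator remains well defined across non-smooth features such as the sharp conductivity boundaries discussed in the main abstract.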
This list is automatically generated from the titles and abstracts of the papers in this site.