Neural Born Iteration Method For Solving Inverse Scattering Problems: 2D Cases
- URL: http://arxiv.org/abs/2112.09831v2
- Date: Tue, 21 Nov 2023 14:20:25 GMT
- Title: Neural Born Iteration Method For Solving Inverse Scattering Problems: 2D Cases
- Authors: Tao Shan, Zhichao Lin, Xiaoqian Song, Maokun Li, Fan Yang, and Zhensheng Xu
- Abstract summary: We propose the neural Born iterative method (NeuralBIM) for solving 2D inverse scattering problems (ISPs).
NeuralBIM employs independent convolutional neural networks (CNNs) to learn the alternate update rules of two different candidate solutions from the residuals.
Two schemes are presented in this paper: a supervised and an unsupervised learning scheme.
- Score: 3.795881624409311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose the neural Born iterative method (NeuralBIM) for
solving 2D inverse scattering problems (ISPs) by drawing on the scheme of
physics-informed supervised residual learning (PhiSRL) to emulate the computing
process of the traditional Born iterative method (TBIM). NeuralBIM employs
independent convolutional neural networks (CNNs) to learn the alternate update
rules of two different candidate solutions from the residuals. Two schemes are
presented in this paper: a supervised and an unsupervised learning scheme. With
a data set generated by the method of moments (MoM), the supervised NeuralBIM
is trained with knowledge of the total fields and contrasts. The unsupervised
NeuralBIM is guided by a physics-embedded objective function founded on the
governing equations of ISPs, so that no total fields or contrasts are required
for training. Numerical and experimental results further validate the efficacy
of NeuralBIM.
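As a rough illustration of the alternating scheme described above, the PyTorch sketch below pairs two independent CNNs: one refines the total-field estimate from a state-equation residual, the other refines the contrast estimate from a data-equation residual. All network sizes, channel layouts, and the residual operators are hypothetical placeholders, not the paper's actual PhiSRL/TBIM design.

```python
import torch
import torch.nn as nn

def make_cnn(in_ch, out_ch):
    # Small CNN standing in for one of the paper's learned update rules.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

class NeuralBIMSketch(nn.Module):
    """Alternately refines a total-field estimate E and a contrast estimate chi."""

    def __init__(self, n_iters=5):
        super().__init__()
        self.n_iters = n_iters
        # Two independent CNNs, one update rule per candidate solution.
        self.field_net = make_cnn(2, 1)     # input: [E, state-equation residual]
        self.contrast_net = make_cnn(2, 1)  # input: [chi, data-equation residual]

    def forward(self, E, chi, state_residual, data_residual):
        # state_residual / data_residual are callables returning residual maps;
        # in the actual method they come from the governing equations of ISPs.
        for _ in range(self.n_iters):
            r = state_residual(E, chi)
            E = E + self.field_net(torch.cat([E, r], dim=1))
            r = data_residual(E, chi)
            chi = chi + self.contrast_net(torch.cat([chi, r], dim=1))
        return E, chi
```

Under the supervised scheme, the outputs would be compared against MoM-generated total fields and contrasts; under the unsupervised scheme, the physics residuals themselves would form the training objective.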
Related papers
- Error Analysis and Numerical Algorithm for PDE Approximation with Hidden-Layer Concatenated Physics Informed Neural Networks [0.9693477883827689]
We present the hidden-layer concatenated physics informed neural network (HLConcPINN) method.
It combines hidden-layer concatenated feed-forward neural networks, a modified block time marching strategy, and a physics informed approach for approximating partial differential equations (PDEs).
We show that its approximation error of the solution can be effectively controlled by the training loss for dynamic simulations with long time horizons.
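A minimal sketch of the hidden-layer concatenation idea, assuming a plain tanh feed-forward network: the readout layer consumes the concatenated activations of every hidden layer rather than only the last one. Layer sizes and the activation are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class HiddenLayerConcatNet(nn.Module):
    def __init__(self, in_dim=2, hidden=64, depth=4, out_dim=1):
        super().__init__()
        self.layers = nn.ModuleList()
        d = in_dim
        for _ in range(depth):
            self.layers.append(nn.Linear(d, hidden))
            d = hidden
        # The readout sees all hidden activations, not just the final layer's.
        self.readout = nn.Linear(hidden * depth, out_dim)

    def forward(self, x):
        feats = []
        for layer in self.layers:
            x = torch.tanh(layer(x))
            feats.append(x)
        return self.readout(torch.cat(feats, dim=-1))
```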
arXiv Detail & Related papers (2024-06-10T15:12:53Z)
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feed-forward neural network model with error backpropagation, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through tests on elliptic partial differential equations and compared with the well-known Physics-Informed Neural Network (PINN) method.
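A hedged sketch of the pattern this suggests: inputs in [-1, 1] are expanded in Chebyshev polynomials via the three-term recurrence, fed to a small feed-forward network, and AD supplies the derivatives a physics loss would need. Basis size, network shape, and the PDE are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

def chebyshev_features(x, n_terms=8):
    # T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
    feats = [torch.ones_like(x), x]
    for _ in range(n_terms - 2):
        feats.append(2 * x * feats[-1] - feats[-2])
    return torch.cat(feats, dim=-1)

# A scalar surrogate u(x) built on the Chebyshev features; autograd
# provides du/dx for assembling a residual loss on an elliptic PDE.
net = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))
x = torch.linspace(-1, 1, 101).unsqueeze(-1).requires_grad_(True)
u = net(chebyshev_features(x))
du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
```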
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- Feed-Forward Neural Networks as a Mixed-Integer Program [0.0]
The research focuses on training and evaluating the proposed approaches through experiments on handwritten digit classification models.
The study assesses the performance of trained ReLU NNs, shedding light on the effectiveness of MIP formulations in enhancing training processes for NNs.
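The standard building block behind such formulations is the big-M mixed-integer encoding of a single ReLU unit y = max(0, a). The PuLP sketch below is generic, with an assumed bound M on the pre-activation; the paper's exact formulation may differ.

```python
import pulp

M = 100.0  # assumed bound on |pre-activation|; a valid big-M needs a real bound
prob = pulp.LpProblem("relu_unit", pulp.LpMinimize)

a = pulp.LpVariable("a", lowBound=-M, upBound=M)  # pre-activation w.x + b
y = pulp.LpVariable("y", lowBound=0)              # post-activation max(0, a)
z = pulp.LpVariable("z", cat="Binary")            # 1 iff the unit is active

prob += 1 * y                  # objective: minimize y so it sits at max(0, a)
prob += y >= a                 # y is at least the pre-activation
prob += y <= a + M * (1 - z)   # active branch: y <= a when z = 1
prob += y <= M * z             # inactive branch: y <= 0 when z = 0
prob += a == 3.5               # pin an example input for demonstration

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(y))  # 3.5 == max(0, 3.5)
```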
arXiv Detail & Related papers (2024-02-09T02:23:37Z)
- Splitting physics-informed neural networks for inferring the dynamics of integer- and fractional-order neuron models [0.0]
We introduce a new approach for solving forward systems of differential equations using a combination of splitting methods and physics-informed neural networks (PINNs).
The proposed method, splitting PINN, effectively addresses the challenge of applying PINNs to forward dynamical systems.
arXiv Detail & Related papers (2023-04-26T00:11:00Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
However, PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
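To see why an implicit update helps stability, consider one implicit step theta_new = theta - lr * grad(theta_new), solved below by a damped fixed-point iteration on a stiff toy quadratic. This is a generic NumPy illustration, not the paper's ISGD algorithm.

```python
import numpy as np

def implicit_step(theta, grad, lr, damping=0.2, n_inner=50):
    # Solve theta_new = theta - lr * grad(theta_new) by damped fixed-point iteration.
    theta_new = theta.copy()
    for _ in range(n_inner):
        theta_new = (1 - damping) * theta_new + damping * (theta - lr * grad(theta_new))
    return theta_new

# Stiff toy loss L = 0.5 * k * theta^2 with k = 100: the explicit update with
# lr = 0.05 multiplies theta by (1 - lr * k) = -4 and diverges, whereas the
# implicit update contracts it by 1 / (1 + lr * k) = 1/6 per step.
k = 100.0
grad = lambda t: k * t
theta = np.array([1.0])
for _ in range(10):
    theta = implicit_step(theta, grad, lr=0.05)
print(theta)  # about 6**-10, decaying stably toward zero
```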
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
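The idea of differentiating through the equilibrium rather than through the forward iterations can be sketched on a smooth surrogate, with tanh dynamics standing in for spiking neurons. Everything below is a generic deep-equilibrium-style illustration, not the paper's method.

```python
import torch

torch.manual_seed(0)
n = 8
W = (0.1 * torch.randn(n, n)).requires_grad_(True)  # small scale keeps f contractive
x = torch.randn(n)

def f(a):
    # Feedback dynamics whose fixed point a* = f(a*) is the equilibrium state.
    return torch.tanh(a @ W.T + x)

# Forward: iterate to the equilibrium without building an autograd graph.
a = torch.zeros(n)
with torch.no_grad():
    for _ in range(100):
        a = f(a)

# Backward: for L = ||a*||^2, solve (I - J)^T v = dL/da* with J = df/da at the
# fixed point, then push v through f's dependence on the weights.
a = a.detach()
J = torch.autograd.functional.jacobian(f, a)
v = torch.linalg.solve(torch.eye(n) - J.T, 2 * a)
grad_W, = torch.autograd.grad(f(a), W, grad_outputs=v)
```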
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to the regularization of neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Spiking Neural Networks -- Part II: Detecting Spatio-Temporal Patterns [38.518936229794214]
Spiking Neural Networks (SNNs) have the unique ability to detect information encoded in temporal signals.
We review models and training algorithms for the dominant approach that treats SNNs as Recurrent Neural Networks (RNNs).
We describe an alternative approach that relies on probabilistic models for spiking neurons, allowing the derivation of local learning rules via gradient estimates.
arXiv Detail & Related papers (2020-10-27T11:47:42Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
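At a high level, the min-max formulation can be illustrated by two small networks trained with alternating gradient descent-ascent on a toy conditional-moment objective. The data, quadratic penalty, and architectures below are placeholders, not the paper's operator equation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
primal = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt_min = torch.optim.Adam(primal.parameters(), lr=1e-3)
opt_max = torch.optim.Adam(adversary.parameters(), lr=1e-3)

x = torch.randn(256, 1)
y = 2.0 * x + 0.1 * torch.randn(256, 1)  # toy observations standing in for SEM data

def game_value():
    # The adversary probes violations of the moment condition E[(y - g(x)) u(x)];
    # the quadratic penalty keeps the inner maximization well posed.
    residual = y - primal(x)
    u = adversary(x)
    return (residual * u).mean() - 0.5 * u.pow(2).mean()

for step in range(2000):
    opt_max.zero_grad()
    (-game_value()).backward()  # ascent step for the adversarial player
    opt_max.step()
    opt_min.zero_grad()
    game_value().backward()     # descent step for the primal player
    opt_min.step()
```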
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.