Transfer Learning Enhanced Full Waveform Inversion
- URL: http://arxiv.org/abs/2302.11259v2
- Date: Fri, 1 Dec 2023 13:22:56 GMT
- Title: Transfer Learning Enhanced Full Waveform Inversion
- Authors: Stefan Kollmannsberger, Divya Singh and Leon Herrmann
- Abstract summary: We propose a way to favorably employ neural networks in the field of non-destructive testing using Full Waveform Inversion (FWI).
The presented methodology discretizes the unknown material distribution in the domain with a neural network within an adjoint optimization.
- Score: 2.3020018305241337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a way to favorably employ neural networks in the field of
non-destructive testing using Full Waveform Inversion (FWI). The presented
methodology discretizes the unknown material distribution in the domain with a
neural network within an adjoint optimization. To further increase efficiency
of the FWI, pretrained neural networks are used to provide a good starting
point for the inversion. This reduces the number of iterations in the Full
Waveform Inversion for specific, yet generalizable settings.
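As a rough illustration of this setup, the sketch below parameterizes the material distribution with a small network whose weights are the optimization variables, and optionally warm-starts from pretrained weights. The forward solver `simulate`, the measurements `u_obs`, the coordinate grid `grid_xy`, and all layer sizes are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaterialNet(nn.Module):
    """Maps spatial coordinates to a material indicator in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, xy):
        return self.net(xy)

def invert(simulate, u_obs, grid_xy, pretrained=None, steps=200):
    """Adjoint-style FWI loop: the NN weights, not nodal values, are optimized."""
    model = MaterialNet()
    if pretrained is not None:            # transfer learning: warm start
        model.load_state_dict(torch.load(pretrained))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        gamma = model(grid_xy)            # material distribution on the grid
        u_sim = simulate(gamma)           # differentiable forward wave solve
        loss = 0.5 * torch.sum((u_sim - u_obs) ** 2)
        loss.backward()                   # adjoint gradient via autodiff
        opt.step()
    return model
```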
Related papers
- Accelerating Full Waveform Inversion By Transfer Learning [1.0881446298284452]
Full waveform inversion (FWI) is a powerful tool for reconstructing material fields based on sparsely measured data obtained by wave propagation.
For specific problems, discretizing the material field with a neural network (NN) improves the robustness and reconstruction quality of the corresponding optimization problem.
In this paper, we introduce a novel transfer learning approach to further improve NN-based FWI.
arXiv Detail & Related papers (2024-08-01T16:39:06Z)
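One plausible reading of the warm start used in the transfer learning papers above is supervised pretraining of the material network on a family of reference flaw configurations. The sketch below (reusing `MaterialNet` from the earlier sketch, with hypothetical `reference_cases`) is an assumption about that procedure, not either paper's exact method.

```python
import torch

def pretrain(reference_cases, grid_xy, epochs=50, path="pretrained.pt"):
    """Fit MaterialNet to known reference material fields, then save the
    weights so a later inversion can start from them (transfer learning)."""
    model = MaterialNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for gamma_ref in reference_cases:  # known fields for related setups
            opt.zero_grad()
            loss = torch.mean((model(grid_xy) - gamma_ref) ** 2)
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), path)
    return path
```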
- Deep Learning without Global Optimization by Random Fourier Neural Networks [0.0]
We introduce a new training algorithm for a variety of deep neural networks that utilize random complex exponential activation functions.
Our approach employs a Markov Chain Monte Carlo sampling procedure to iteratively train network layers.
It consistently attains the theoretical approximation rate for residual networks with complex exponential activation functions.
arXiv Detail & Related papers (2024-07-16T16:23:40Z)
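A minimal sketch of the random complex exponential features from the entry above, assuming Gaussian-sampled frequencies and a plain least-squares fit of the outer layer; the paper's Markov Chain Monte Carlo resampling of the frequencies is not reproduced here.

```python
import numpy as np

def random_fourier_layer(x, n_features, scale=1.0, rng=None):
    """Random complex exponential features exp(i * w.x), Gaussian frequencies."""
    rng = np.random.default_rng() if rng is None else rng
    W = rng.normal(0.0, scale, size=(x.shape[1], n_features))
    return np.exp(1j * x @ W), W

def fit_layer(x, y, n_features=256):
    """Fit the linear outer layer by least squares; the paper instead
    resamples the frequencies W with an MCMC procedure (not shown)."""
    phi, W = random_fourier_layer(x, n_features)
    coef, *_ = np.linalg.lstsq(phi, y.astype(complex), rcond=None)
    return W, coef
```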
- GaborPINN: Efficient physics informed neural networks using multiplicative filtered networks [0.0]
Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs).
We propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training.
The proposed method achieves up to a two-order-of-magnitude increase in convergence speed compared with conventional PINNs.
arXiv Detail & Related papers (2023-08-10T19:51:00Z)
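The multiplicative filter idea behind GaborPINN can be sketched as follows: each layer multiplies a linear transform of the previous features by a coordinate-dependent Gabor filter, embedding oscillatory wavefield character into the representation. Layer counts, widths, and the frequency scale `omega0` are illustrative assumptions, not the GaborPINN configuration.

```python
import torch
import torch.nn as nn

class GaborFilter(nn.Module):
    """Gabor filter g(x) = exp(-gamma/2 * ||x - mu||^2) * sin(w.x + phi)."""
    def __init__(self, in_dim, out_dim, omega0=30.0):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.mu = nn.Parameter(torch.rand(out_dim, in_dim) * 2 - 1)
        self.gamma = nn.Parameter(torch.ones(out_dim))
        self.lin.weight.data *= omega0   # embeds a characteristic frequency

    def forward(self, x):
        d = ((x.unsqueeze(1) - self.mu) ** 2).sum(-1)
        return torch.exp(-0.5 * self.gamma * d) * torch.sin(self.lin(x))

class GaborMFN(nn.Module):
    """Multiplicative filter network: filters combine by elementwise products."""
    def __init__(self, in_dim=3, hidden=128, layers=3):
        super().__init__()
        self.filters = nn.ModuleList(GaborFilter(in_dim, hidden) for _ in range(layers))
        self.linears = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(layers - 1))
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.filters[0](x)
        for lin, filt in zip(self.linears, self.filters[1:]):
            z = lin(z) * filt(x)          # multiplicative, not additive, mixing
        return self.out(z)
```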
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
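Implicit SGD evaluates the gradient at the new iterate rather than the current one, which is equivalent to solving a proximal subproblem at each step. The sketch below approximates that inner solve with a few gradient iterations; `loss_fn` stands in for the PINN residual loss, and all step sizes and iteration counts are arbitrary assumptions.

```python
import torch

def isgd_step(params, loss_fn, lr=1e-2, inner_steps=5, inner_lr=1e-2):
    """One implicit (proximal) SGD step: approximately solve
    theta+ = argmin loss(theta) + ||theta - theta_k||^2 / (2 * lr)
    with a few inner gradient iterations."""
    anchor = [p.detach().clone() for p in params]
    for _ in range(inner_steps):
        loss = loss_fn() + sum(
            ((p - a) ** 2).sum() for p, a in zip(params, anchor)) / (2 * lr)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g         # descend on the proximal objective
    return params
```

A call might look like `isgd_step(list(pinn.parameters()), lambda: residual_loss(pinn))`, where both `pinn` and `residual_loss` are hypothetical names.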
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
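A SIREN-style layer makes the bandwidth picture concrete: the frequency scale `omega` controls how fast the represented functions can oscillate, which is what the neural tangent kernel analysis above captures as a low-pass filter with adjustable bandwidth. The initialization constants below follow the widely used SIREN recipe and are an assumption, not the simplified parametrization proposed in the cited paper.

```python
import torch
import torch.nn as nn

class SinusoidalLayer(nn.Module):
    """y = sin(omega * (Wx + b)); omega acts as the tunable kernel bandwidth."""
    def __init__(self, in_dim, out_dim, omega=30.0, first=False):
        super().__init__()
        self.omega = omega
        self.lin = nn.Linear(in_dim, out_dim)
        # SIREN-style init: wider first layer, frequency-compensated later layers
        bound = 1.0 / in_dim if first else (6.0 / in_dim) ** 0.5 / omega
        nn.init.uniform_(self.lin.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.omega * self.lin(x))
```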
- Deep Convolutional Learning-Aided Detector for Generalized Frequency Division Multiplexing with Index Modulation [0.0]
The proposed method first pre-processes the received signal with a zero-forcing (ZF) detector and then applies a neural network consisting of a convolutional neural network (CNN) followed by a fully-connected neural network (FCNN).
The FCNN part uses only two fully-connected layers, which can be adapted to yield a trade-off between complexity and bit error rate (BER) performance.
It has been demonstrated that the proposed deep convolutional neural network-based detection and demodulation scheme provides better BER performance than the ZF detector, at a reasonable increase in complexity.
arXiv Detail & Related papers (2022-02-06T22:18:42Z)
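The described pipeline can be sketched as below: zero-forcing equalization, a small CNN, and a two-layer fully connected head producing per-subcarrier bit estimates. All dimensions, the output bit mapping, and the simplified per-subcarrier ZF step are assumptions rather than the paper's exact GFDM-IM architecture.

```python
import torch
import torch.nn as nn

class ConvDetector(nn.Module):
    """ZF pre-processing followed by a CNN and a two-layer FCNN head.
    Sizes (subcarriers, filters, hidden width) are illustrative assumptions."""
    def __init__(self, n_sub=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(           # only two fully connected layers
            nn.Linear(32 * n_sub, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_sub),  # bit estimates per subcarrier
        )

    def forward(self, y, H):
        x_zf = y / H                       # simplified per-subcarrier ZF equalizer
        feats = torch.stack([x_zf.real, x_zf.imag], dim=1)  # (B, 2, n_sub)
        z = self.cnn(feats).flatten(1)
        return torch.sigmoid(self.fc(z))
```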
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
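The sine-combination gradient from the entry above can be sketched as a custom autograd function: the forward pass keeps the hard sign, while the backward pass uses the derivative of a truncated Fourier series of the square wave. The number of series terms and the base frequency below are illustrative assumptions.

```python
import math
import torch

class FourierSign(torch.autograd.Function):
    """Forward: sign(x). Backward: derivative of a truncated Fourier (sine)
    series of the square wave, used as a smooth surrogate gradient."""
    N_TERMS, OMEGA = 4, math.pi

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        w = FourierSign.OMEGA
        grad = torch.zeros_like(x)
        for i in range(FourierSign.N_TERMS):
            # d/dx of (4 / (pi * k)) * sin(k * w * x) with k = 2i + 1
            k = 2 * i + 1
            grad += (4 / math.pi) * w * torch.cos(k * w * x)
        return grad_out * grad
```

A binary layer would then call `FourierSign.apply(w)` to binarize weights while receiving the smooth surrogate gradient in the backward pass.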
- Hyperparameter Optimization in Binary Communication Networks for Neuromorphic Deployment [4.280642750854163]
Training neural networks for neuromorphic deployment is non-trivial.
We introduce a Bayesian approach for optimizing the hyperparameters of an algorithm for training binary communication networks that can be deployed to neuromorphic hardware.
We show that by optimizing the hyperparameters of this algorithm for each dataset, we can achieve improvements in accuracy over the previous state of the art on each dataset.
arXiv Detail & Related papers (2020-04-21T01:15:45Z)
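Generic Gaussian-process Bayesian optimization over a small search space captures the approach in the entry above. The sketch below uses scikit-optimize with a placeholder objective; the three search dimensions are invented examples, not the hyperparameters of the actual neuromorphic training algorithm.

```python
import math
from skopt import gp_minimize
from skopt.space import Integer, Real

def train_and_eval(params):
    # Placeholder: substitute a real training/validation run of the binary
    # communication network here and return a quantity to minimize
    # (e.g., 1 - validation accuracy).
    lr, hidden, threshold = params
    return (threshold - 0.5) ** 2 + 0.1 * abs(math.log10(lr) + 2) + 1.0 / hidden

space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
    Integer(16, 512, name="hidden"),
    Real(0.0, 1.0, name="threshold"),
]

result = gp_minimize(train_and_eval, space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x)
```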
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion in producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
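A stripped-down sketch of the feature map distortion idea: during training, a random subset of activations is perturbed with bounded noise instead of being zeroed as in dropout. The real Disout selects spatial blocks and calibrates the distortion via the Rademacher complexity analysis; the probability and noise scale below are assumptions.

```python
import torch
import torch.nn as nn

class FeatureDistortion(nn.Module):
    """Disout-style regularizer sketch: add a random distortion to a random
    subset of feature map elements rather than zeroing them (dropout)."""
    def __init__(self, dist_prob=0.1, alpha=1.0):
        super().__init__()
        self.dist_prob, self.alpha = dist_prob, alpha

    def forward(self, x):
        if not self.training:
            return x                       # identity at inference time
        mask = (torch.rand_like(x) < self.dist_prob).float()
        noise = self.alpha * x.std() * (2 * torch.rand_like(x) - 1)
        return x + mask * noise
```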
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.