A Helmholtz equation solver using unsupervised learning: Application to
transcranial ultrasound
- URL: http://arxiv.org/abs/2010.15761v2
- Date: Fri, 18 Jun 2021 10:19:28 GMT
- Title: A Helmholtz equation solver using unsupervised learning: Application to
transcranial ultrasound
- Authors: Antonio Stanziola, Simon R. Arridge, Ben T. Cox, Bradley E. Treeby
- Abstract summary: A fast iterative solver for the heterogeneous Helmholtz equation in 2D is developed using a fully-learned optimizer.
The learned optimizer shows excellent performance on the test set, and is capable of generalizing well outside the training examples.
- Score: 1.7420604693654884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transcranial ultrasound therapy is increasingly used for the non-invasive
treatment of brain disorders. However, conventional numerical wave solvers are
currently too computationally expensive to be used online during treatments to
predict the acoustic field passing through the skull (e.g., to account for
subject-specific dose and targeting variations). As a step towards real-time
predictions, in the current work, a fast iterative solver for the heterogeneous
Helmholtz equation in 2D is developed using a fully-learned optimizer. The
lightweight network architecture is based on a modified UNet that includes a
learned hidden state. The network is trained using a physics-based loss
function and a set of idealized sound speed distributions with fully
unsupervised training (no knowledge of the true solution is required). The
learned optimizer shows excellent performance on the test set, and is capable
of generalization well outside the training examples, including to much larger
computational domains, and more complex source and sound speed distributions,
for example, those derived from x-ray computed tomography images of the skull.
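The physics-based loss described in the abstract can be sketched as the norm of the heterogeneous Helmholtz residual evaluated on a finite-difference grid, so that no ground-truth solutions are needed during training. The function below is an illustrative sketch only; the stencil, boundary handling, and all names are assumptions, not the paper's actual implementation:

```python
import numpy as np

def helmholtz_residual_loss(u, c, omega, source, dx):
    """Illustrative physics-based loss: mean squared magnitude of the
    heterogeneous Helmholtz residual  lap(u) + (omega/c)^2 u + source
    on a periodic 2D grid, using a 5-point finite-difference Laplacian.
    A real solver would use absorbing boundaries instead of np.roll."""
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / dx**2
    residual = lap + (omega / c)**2 * u + source
    return np.mean(np.abs(residual)**2)
```

Minimizing such a residual drives the network's output toward a solution of the PDE without ever seeing a reference solution, which is what makes the training fully unsupervised.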
Related papers
- Physics-guided Full Waveform Inversion using Encoder-Solver Convolutional Neural Networks [7.56372030029358]
Full Waveform Inversion (FWI) is an inverse problem for estimating the wave velocity distribution in a given domain.
We develop a learning process of an encoder-solver preconditioner that is based on convolutional neural networks.
We demonstrate our approach to solving FWI problems using 2D geophysical models with high-frequency data.
arXiv Detail & Related papers (2024-05-27T23:03:21Z) - Preconditioners for the Stochastic Training of Implicit Neural
Representations [30.92757082348805]
Implicit neural representations have emerged as a powerful technique for encoding complex continuous multidimensional signals as neural networks.
We propose training using diagonal preconditioners, showcasing their effectiveness across various signal modalities.
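A diagonal preconditioner rescales each parameter's gradient elementwise before the update. The snippet below is a generic AdaGrad-style stand-in for illustration; the paper's construction of the diagonal is signal-specific and not reproduced here:

```python
import numpy as np

def diag_precond_step(params, grads, accum, lr=0.1, eps=1e-8):
    """One gradient step scaled by a diagonal preconditioner.
    Here the diagonal is a running sum of squared gradients
    (AdaGrad-style); this is an illustrative stand-in only."""
    accum = accum + grads**2
    params = params - lr * grads / (np.sqrt(accum) + eps)
    return params, accum
```

Because the preconditioner is diagonal, the extra cost per step is a single elementwise multiply, which is why such schemes scale to large implicit neural representations.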
arXiv Detail & Related papers (2024-02-13T20:46:37Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show we can use logical background knowledge in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent can not.
arXiv Detail & Related papers (2022-06-10T10:17:59Z) - A memory-efficient neural ODE framework based on high-level adjoint
differentiation [4.063868707697316]
We present a new neural ODE framework, PNODE, based on high-level discrete algorithmic differentiation.
We show that PNODE achieves the highest memory efficiency when compared with other reverse-accurate methods.
arXiv Detail & Related papers (2022-06-02T20:46:26Z) - Unsupervised Reservoir Computing for Solving Ordinary Differential
Equations [1.6371837018687636]
We introduce unsupervised reservoir computing (RC), an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs).
We use Bayesian optimization to efficiently discover optimal sets in a high-dimensional hyperparameter space and numerically show that one set is robust and can be used to solve an ODE for different initial conditions and time ranges.
arXiv Detail & Related papers (2021-08-25T18:16:42Z) - Seismic wave propagation and inversion with Neural Operators [7.296366040398878]
We develop a prototype framework for learning general solutions using a recently developed machine learning paradigm called Neural Operator.
A trained Neural Operator can compute a solution in negligible time for any velocity structure or source location.
We illustrate the method with the 2D acoustic wave equation and demonstrate the method's applicability to seismic tomography.
arXiv Detail & Related papers (2021-08-11T19:17:39Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
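The sign function has zero gradient almost everywhere, so training binary networks requires a differentiable surrogate. A combination of sine functions arises naturally from the Fourier series of the square wave, which equals sign(t) on (-1, 1). The sketch below illustrates this idea in general terms and is not the paper's training procedure:

```python
import numpy as np

def fourier_sign(t, n_terms=100):
    """Approximate sign(t) on (-1, 1) by the truncated Fourier sine
    series of the square wave:
        sign(t) ~ (4/pi) * sum_{i=0}^{n-1} sin((2i+1)*pi*t) / (2i+1).
    Unlike sign itself, this surrogate has a well-defined, nonzero
    gradient, so it can be differentiated during backpropagation."""
    k = 2 * np.arange(n_terms) + 1  # odd harmonics 1, 3, 5, ...
    return (4.0 / np.pi) * np.sum(np.sin(np.pi * t * k) / k)
```

Truncating the series trades approximation accuracy against smoothness: fewer terms give a smoother surrogate with a better-behaved gradient near the discontinuity at zero.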
arXiv Detail & Related papers (2021-03-01T08:25:26Z) - AutoInt: Automatic Integration for Fast Neural Volume Rendering [51.46232518888791]
We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x improvement in computational requirements, enabling fast neural volume rendering.
arXiv Detail & Related papers (2020-12-03T05:46:10Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study a distributed stochastic algorithm for large-scale AUC maximization with a deep neural network.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of the method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.