Invertible Neural Networks versus MCMC for Posterior Reconstruction in
Grazing Incidence X-Ray Fluorescence
- URL: http://arxiv.org/abs/2102.03189v1
- Date: Fri, 5 Feb 2021 14:17:59 GMT
- Title: Invertible Neural Networks versus MCMC for Posterior Reconstruction in
Grazing Incidence X-Ray Fluorescence
- Authors: Anna Andrle, Nando Farchmin, Paul Hagemann, Sebastian Heidenreich,
Victor Soltwisch, Gabriele Steidl
- Abstract summary: We propose to reconstruct the posterior parameter distribution given a noisy measurement generated by the forward model by an appropriately learned invertible neural network.
We demonstrate by numerical comparisons that our method can compete with established Markov Chain Monte Carlo approaches, while being more efficient and flexible in applications.
- Score: 0.3232625980782302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grazing incidence X-ray fluorescence is a non-destructive technique for
analyzing the geometry and compositional parameters of nanostructures appearing
e.g. in computer chips. In this paper, we propose to reconstruct the posterior
parameter distribution given a noisy measurement generated by the forward model
by an appropriately learned invertible neural network. This network resembles
the transport map from a reference distribution to the posterior. We
demonstrate by numerical comparisons that our method can compete with
established Markov Chain Monte Carlo approaches, while being more efficient and
flexible in applications.
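The core building block behind such invertible networks is the affine coupling layer (RealNVP-style), which is invertible in closed form; stacking these layers yields the transport map mentioned in the abstract. The following is a minimal illustrative sketch of one coupling layer, not the authors' implementation, with hypothetical names and an untrained random MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """RealNVP-style coupling layer (illustrative sketch).

    Splits the input into two halves, leaves the first half unchanged,
    and applies an affine transform to the second half whose scale and
    shift are predicted from the first half. Because the first half
    passes through untouched, the inverse is available in closed form.
    """
    def __init__(self, dim, hidden=16):
        self.d = dim // 2
        # Tiny random MLP producing log-scale s and shift t.
        self.W1 = rng.normal(0, 0.1, (hidden, self.d))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (2 * self.d, hidden))
        self.b2 = np.zeros(2 * self.d)

    def _scale_shift(self, x1):
        h = np.tanh(self.W1 @ x1 + self.b1)
        st = self.W2 @ h + self.b2
        return st[:self.d], st[self.d:]   # log-scale s, shift t

    def forward(self, x):
        x1, x2 = x[:self.d], x[self.d:]
        s, t = self._scale_shift(x1)
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y):
        # Exact inverse: recompute s, t from the unchanged half y1.
        y1, y2 = y[:self.d], y[self.d:]
        s, t = self._scale_shift(y1)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])

layer = AffineCoupling(dim=4)
x = rng.normal(size=4)
y = layer.forward(x)
x_rec = layer.inverse(y)
print(np.max(np.abs(x - x_rec)))  # reconstruction error near machine precision
```

In a posterior-reconstruction setting, the coupling networks would additionally be conditioned on the noisy measurement, and the stack of layers trained so that reference samples pushed through the inverse map approximate posterior samples.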
Related papers
- A Bayesian Take on Gaussian Process Networks [1.7188280334580197]
This work implements Monte Carlo and Markov Chain Monte Carlo methods to sample from the posterior distribution of network structures.
We show that our method outperforms state-of-the-art algorithms in recovering the graphical structure of the network.
arXiv Detail & Related papers (2023-06-20T08:38:31Z) - Joint Bayesian Inference of Graphical Structure and Parameters with a
Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z) - TetCNN: Convolutional Neural Networks on Tetrahedral Meshes [2.952111139469156]
Convolutional neural networks (CNN) have been broadly studied on images, videos, graphs, and triangular meshes.
We introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure.
Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami Operator (LBO) to define filters over the commonly used graph Laplacian.
arXiv Detail & Related papers (2023-02-08T01:52:48Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Spherical convolutional neural networks can improve brain microstructure
estimation from diffusion MRI data [0.35998666903987897]
Diffusion magnetic resonance imaging is sensitive to the microstructural properties of brain tissue.
Estimating clinically and scientifically relevant microstructural properties from the measured signals remains a highly challenging inverse problem that machine learning may help solve.
We trained a spherical convolutional neural network to predict the ground-truth parameter values from efficiently simulated noisy data.
arXiv Detail & Related papers (2022-11-17T20:52:00Z) - Learnable Filters for Geometric Scattering Modules [64.03877398967282]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2022-08-15T22:30:07Z) - Quiver neural networks [5.076419064097734]
We develop a uniform theoretical approach towards the analysis of various neural network connectivity architectures.
Inspired by quiver representation theory in mathematics, this approach gives a compact way to capture elaborate data flows.
arXiv Detail & Related papers (2022-07-26T09:42:45Z) - Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z) - Equivalence in Deep Neural Networks via Conjugate Matrix Ensembles [0.0]
A numerical approach is developed for detecting the equivalence of deep learning architectures.
The empirical evidence supports the phenomenon that the difference between the spectral densities of neural architectures and the corresponding conjugate circular ensembles is vanishing.
arXiv Detail & Related papers (2020-06-14T12:34:13Z) - Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric
graphs [81.12344211998635]
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z) - Understanding Graph Neural Networks with Generalized Geometric
Scattering Transforms [67.88675386638043]
The scattering transform is a multilayered wavelet-based deep learning architecture that acts as a model of convolutional neural networks.
We introduce windowed and non-windowed geometric scattering transforms for graphs based upon a very general class of asymmetric wavelets.
We show that these asymmetric graph scattering transforms have many of the same theoretical guarantees as their symmetric counterparts.
arXiv Detail & Related papers (2019-11-14T17:23:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.