Robust Simulation-Based Inference in Cosmology with Bayesian Neural
Networks
- URL: http://arxiv.org/abs/2207.08435v2
- Date: Wed, 20 Jul 2022 13:02:19 GMT
- Title: Robust Simulation-Based Inference in Cosmology with Bayesian Neural
Networks
- Authors: Pablo Lemos, Miles Cranmer, Muntazir Abidi, ChangHoon Hahn, Michael
Eickenberg, Elena Massara, David Yallup, Shirley Ho
- Abstract summary: We show how using a Bayesian neural network framework for training SBI can mitigate biases and result in more reliable inference outside the training set.
We introduce cosmoSWAG, the first application of Stochastic Weight Averaging to cosmology, and apply it to SBI trained for inference on the cosmic microwave background.
- Score: 3.497773679350512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-based inference (SBI) is rapidly establishing itself as a standard
machine learning technique for analyzing data in cosmological surveys. Despite
continual improvements to the quality of density estimation by learned models,
applications of such techniques to real data are entirely reliant on the
generalization power of neural networks far outside the training distribution,
which is mostly unconstrained. Due to the imperfections in scientist-created
simulations, and the large computational expense of generating all possible
parameter combinations, SBI methods in cosmology are vulnerable to such
generalization issues. Here, we discuss the effects of both issues, and show
how using a Bayesian neural network framework for training SBI can mitigate
biases, and result in more reliable inference outside the training set. We
introduce cosmoSWAG, the first application of Stochastic Weight Averaging to
cosmology, and apply it to SBI trained for inference on the cosmic microwave
background.
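The abstract's core idea, applying Stochastic Weight Averaging (SWAG) to an SBI model, can be illustrated in miniature: collect late-phase SGD weight iterates, fit a Gaussian over them, and sample weights at inference time to obtain predictive uncertainty. The sketch below uses a toy linear-regression problem in place of an SBI network; all names and hyperparameters are illustrative, not taken from the cosmoSWAG code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem standing in for an SBI training set.
X = rng.normal(size=(256, 2))
true_w = np.array([1.5, -0.7])
y = X @ true_w + 0.1 * rng.normal(size=256)

# Plain minibatch SGD on a linear model; SWAG only changes what
# we do with the iterates, not the training loop itself.
w = np.zeros(2)
lr = 0.05
snapshots = []
for step in range(500):
    idx = rng.integers(0, len(X), size=32)
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    w -= lr * grad
    if step >= 250 and step % 10 == 0:  # collect late-phase iterates
        snapshots.append(w.copy())

S = np.stack(snapshots)
swa_mean = S.mean(axis=0)  # SWA solution (averaged weights)
swa_var = S.var(axis=0)    # diagonal SWAG covariance estimate

# At inference, sample weight vectors from the fitted Gaussian and
# propagate each sample, so the spread of outputs reflects weight
# uncertainty rather than a single point estimate.
w_samples = swa_mean + np.sqrt(swa_var) * rng.normal(size=(100, 2))
x_query = np.array([1.0, 1.0])
preds = w_samples @ x_query
print(preds.mean(), preds.std())
```

In the paper's setting the same recipe is applied to the weights of a neural density estimator, so the sampled networks yield an ensemble of posteriors whose spread flags inputs far from the training distribution.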
Related papers
- A Comprehensive Guide to Simulation-based Inference in Computational Biology [5.333122501732079]
This paper provides comprehensive guidelines for deciding between SBI approaches for complex biological models.
We apply the guidelines to two agent-based models that describe cellular dynamics using real-world data.
Our study unveils a critical insight: while neural SBI methods demand significantly fewer simulations for inference results, they tend to yield biased estimations.
arXiv Detail & Related papers (2024-09-29T12:04:03Z)
- Fast and Reliable Probabilistic Reflectometry Inversion with Prior-Amortized Neural Posterior Estimation [73.81105275628751]
Finding all structures compatible with reflectometry data is computationally prohibitive for standard algorithms.
We address this lack of reliability with a probabilistic deep learning method that identifies all realistic structures in seconds.
Our method, Prior-Amortized Neural Posterior Estimation (PANPE), combines simulation-based inference with novel adaptive priors.
arXiv Detail & Related papers (2024-07-26T10:29:16Z)
- Knowledge-Based Convolutional Neural Network for the Simulation and Prediction of Two-Phase Darcy Flows [3.5707423185282656]
Physics-informed neural networks (PINNs) have gained significant prominence as a powerful tool in the field of scientific computing and simulations.
We propose to combine the power of neural networks with the dynamics imposed by the discretized differential equations.
By discretizing the governing equations, the PINN learns to account for the discontinuities and accurately capture the underlying relationships between inputs and outputs.
arXiv Detail & Related papers (2024-04-04T06:56:32Z)
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- An Information-Theoretic Framework for Supervised Learning [22.280001450122175]
We propose a novel information-theoretic framework with its own notions of regret and sample complexity.
We study the sample complexity of learning from data generated by deep neural networks with ReLU activation units.
We conclude by corroborating our theoretical results with experimental analysis of random single-hidden-layer neural networks.
arXiv Detail & Related papers (2022-03-01T05:58:28Z)
- Constraining cosmological parameters from N-body simulations with Bayesian Neural Networks [0.0]
We use The Quijote simulations in order to extract the cosmological parameters through Bayesian Neural Networks.
This kind of model has a remarkable ability to estimate the associated uncertainty, which is one of the ultimate goals in the precision cosmology era.
arXiv Detail & Related papers (2021-12-22T13:22:30Z)
- FEM-based Real-Time Simulations of Large Deformations with Probabilistic Deep Learning [1.2617078020344616]
We propose a highly efficient deep-learning surrogate framework that is able to predict the response of hyper-elastic bodies under load.
The framework takes the form of special convolutional neural network architecture, so-called U-Net, which is trained with force-displacement data.
arXiv Detail & Related papers (2021-11-02T20:05:22Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Multi-Sample Online Learning for Probabilistic Spiking Neural Networks [43.8805663900608]
Spiking Neural Networks (SNNs) capture some of the efficiency of biological brains for inference and learning.
This paper introduces an online learning rule based on generalized expectation-maximization (GEM)
Experimental results on structured output memorization and classification on a standard neuromorphic data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration.
arXiv Detail & Related papers (2020-07-23T10:03:58Z)
- DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths [45.947140164621096]
We propose a new approach based on differential inclusions of inverse scale spaces.
We show that DessiLBI unveils "winning tickets" in early epochs.
arXiv Detail & Related papers (2020-07-04T04:40:16Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.