Isolated pulsar population synthesis with simulation-based inference
- URL: http://arxiv.org/abs/2312.14848v3
- Date: Wed, 5 Jun 2024 14:34:42 GMT
- Title: Isolated pulsar population synthesis with simulation-based inference
- Authors: Vanessa Graber, Michele Ronchi, Celsa Pardo-Araujo, Nanda Rea,
- Abstract summary: We develop a framework to model neutron star birth properties and their dynamical and magnetorotational evolution.
We then follow an SBI approach that focuses on neural posterior estimation and train deep neural networks to infer the parameters' posterior distributions.
Our approach represents a crucial step toward robust statistical inference for complex population synthesis frameworks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We combine pulsar population synthesis with simulation-based inference (SBI) to constrain the magnetorotational properties of isolated Galactic radio pulsars. We first develop a framework to model neutron star birth properties and their dynamical and magnetorotational evolution. We specifically sample initial magnetic field strengths, $B$, and spin periods, $P$, from lognormal distributions and capture the late-time magnetic field decay with a power law. Each lognormal is described by a mean, $\mu_{\log B}, \mu_{\log P}$, and standard deviation, $\sigma_{\log B}, \sigma_{\log P}$, while the power law is characterized by the index, $a_{\rm late}$. We subsequently model the stars' radio emission and observational biases to mimic detections with three radio surveys, and we produce a large database of synthetic $P$--$\dot{P}$ diagrams by varying our five magnetorotational input parameters. We then follow an SBI approach that focuses on neural posterior estimation and train deep neural networks to infer the parameters' posterior distributions. After successfully validating these individual neural density estimators on simulated data, we use an ensemble of networks to infer the posterior distributions for the observed pulsar population. We obtain $\mu_{\log B} = 13.10^{+0.08}_{-0.10}$, $\sigma_{\log B} = 0.45^{+0.05}_{-0.05}$ and $\mu_{\log P} = -1.00^{+0.26}_{-0.21}$, $\sigma_{\log P} = 0.38^{+0.33}_{-0.18}$ for the lognormal distributions and $a_{\rm late} = -1.80^{+0.65}_{-0.61}$ for the power law at the $95\%$ credible interval. We contrast our results with previous studies and highlight uncertainties of the inferred $a_{\rm late}$ value. Our approach represents a crucial step toward robust statistical inference for complex population synthesis frameworks and forms the basis for future multiwavelength analyses of Galactic pulsars.
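The birth-property sampling described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the population parameters are the posterior medians quoted above, the units (gauss for $B$, seconds for $P$) are assumed, and the decay-onset timescale `tau_late` is a hypothetical placeholder since the abstract only fixes the power-law index $a_{\rm late}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior medians quoted in the abstract
mu_logB, sigma_logB = 13.10, 0.45   # mean/std of log10(B), B assumed in gauss
mu_logP, sigma_logP = -1.00, 0.38   # mean/std of log10(P), P assumed in seconds
a_late = -1.80                      # late-time power-law decay index

n_stars = 100_000

# Initial magnetic fields and spin periods drawn from lognormal distributions,
# i.e. log10(B) and log10(P) are normally distributed
log_B0 = rng.normal(mu_logB, sigma_logB, n_stars)
log_P0 = rng.normal(mu_logP, sigma_logP, n_stars)
B0 = 10.0 ** log_B0
P0 = 10.0 ** log_P0

def B_late(t, B0, tau_late=1.0e6):
    """Late-time field decay B(t) = B0 * (t / tau_late)**a_late for t > tau_late.

    The onset timescale tau_late (years, here) is a hypothetical placeholder;
    only the index a_late is constrained by the inference quoted above.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > tau_late, B0 * (t / tau_late) ** a_late, B0)
```

Each simulated star would then be evolved magnetorotationally and filtered through the radio-survey selection model to build the synthetic $P$--$\dot{P}$ diagrams that the neural posterior estimators are trained on.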
Related papers
- Statistical Spatially Inhomogeneous Diffusion Inference [15.167120574781153]
Inferring a diffusion equation from discretely-observed measurements is a statistical challenge.
We propose neural network-based estimators of both the drift $\boldsymbol{b}$ and the spatially-inhomogeneous diffusion tensor $D = \Sigma\Sigma^T$.
arXiv Detail & Related papers (2023-12-10T06:52:50Z) - Kernel-, mean- and noise-marginalised Gaussian processes for exoplanet
transits and $H_0$ inference [0.0]
Kernel recovery and mean function inference were explored on synthetic data from exoplanet transit light curve simulations.
The method was extended to marginalisation over mean functions and noise models.
The kernel posterior of the cosmic chronometers dataset prefers a non-stationary linear kernel.
arXiv Detail & Related papers (2023-11-07T17:31:01Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative
Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z) - Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that enables it to describe quasar spectra completely.
We also introduce a new method of inference of Gaussian process parameters, which we call \textit{Neural Inference}.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
arXiv Detail & Related papers (2022-11-17T13:01:26Z) - Structure Learning in Graphical Models from Indirect Observations [17.521712510832558]
This paper considers learning of the graphical structure of a $p$-dimensional random vector $X \in \mathbb{R}^p$ using both parametric and non-parametric methods.
Under mild conditions, we show that our graph-structure estimator can obtain the correct structure.
arXiv Detail & Related papers (2022-05-06T19:24:44Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ of robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Dynamics of position disordered Ising spins with a soft-core potential [4.243439940856083]
We study magnetization relaxation of Ising spins distributed randomly in $d$ dimensions.
In the homogeneous case, an analytic expression is derived at the thermodynamic limit.
In the opposite limit of $l_\rho/R_c \gg 1$, a similar dynamics emerges at later times.
arXiv Detail & Related papers (2021-11-01T09:16:39Z) - Predicting the near-wall region of turbulence through convolutional
neural networks [0.0]
A neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated.
The fully-convolutional network (FCN) is trained to predict the two-dimensional velocity-fluctuation fields at $y^+_{\rm target}$.
The FCN can take advantage of the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^+ = 50$.
arXiv Detail & Related papers (2021-07-15T13:58:26Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\rm mix}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.