The GCE in a New Light: Disentangling the $\gamma$-ray Sky with Bayesian
Graph Convolutional Neural Networks
- URL: http://arxiv.org/abs/2006.12504v2
- Date: Wed, 28 Oct 2020 04:30:15 GMT
- Title: The GCE in a New Light: Disentangling the $\gamma$-ray Sky with Bayesian
Graph Convolutional Neural Networks
- Authors: Florian List, Nicholas L. Rodd, Geraint F. Lewis, Ishaan Bhat
- Abstract summary: A fundamental question regarding the Galactic Center Excess (GCE) is whether the underlying structure is point-like or smooth.
In this work we weigh in on the problem using Bayesian graph convolutional neural networks.
We find that the NN estimates for the flux fractions from the background templates are consistent with the NPTF.
While suggestive, we do not claim a definitive resolution for the GCE, as the NN tends to underestimate the flux of point-sources peaked near the 1$\sigma$ detection threshold.
- Score: 5.735035463793008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental question regarding the Galactic Center Excess (GCE) is whether
the underlying structure is point-like or smooth. This debate, often framed in
terms of a millisecond pulsar or annihilating dark matter (DM) origin for the
emission, awaits a conclusive resolution. In this work we weigh in on the
problem using Bayesian graph convolutional neural networks. In simulated data,
our neural network (NN) is able to reconstruct the flux of inner Galaxy
emission components to on average $\sim$0.5%, comparable to the non-Poissonian
template fit (NPTF). When applied to the actual $\textit{Fermi}$-LAT data, we
find that the NN estimates for the flux fractions from the background templates
are consistent with the NPTF; however, the GCE is almost entirely attributed to
smooth emission. While suggestive, we do not claim a definitive resolution for
the GCE, as the NN tends to underestimate the flux of point-sources peaked near
the 1$\sigma$ detection threshold. Yet the technique displays robustness to a
number of systematics, including reconstructing injected DM, diffuse
mismodeling, and unmodeled north-south asymmetries. So while the NN is hinting
at a smooth origin for the GCE at present, with further refinements we argue
that Bayesian Deep Learning is well placed to resolve this DM mystery.
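As a rough illustration of the kind of model the abstract describes, the sketch below builds a small graph convolutional network in PyTorch and uses Monte Carlo dropout as a stand-in for the paper's Bayesian treatment. It is a minimal sketch under those assumptions, not the authors' architecture: the graph, the number of templates, the layer sizes, and the mock photon-count map are all placeholders.

```python
# Minimal sketch, assuming PyTorch: a small graph convolutional network over a
# pixelized sky map, with Monte Carlo dropout standing in for the paper's
# Bayesian treatment. Graph, template count, and layer sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    """One graph convolution: X' = A_hat @ (X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return a_hat @ self.lin(x)

class BayesianGCN(nn.Module):
    """Maps per-pixel photon counts to flux fractions for n_templates components."""
    def __init__(self, n_templates=6, hidden=64, p_drop=0.3):
        super().__init__()
        self.conv1 = GraphConv(1, hidden)
        self.conv2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_templates)
        self.p_drop = p_drop

    def forward(self, counts, a_hat):
        x = F.relu(self.conv1(counts, a_hat))
        x = F.dropout(x, self.p_drop, training=True)  # dropout stays active at test time (MC dropout)
        x = F.relu(self.conv2(x, a_hat))
        x = F.dropout(x, self.p_drop, training=True)
        x = x.mean(dim=0)                              # global pooling over pixels
        return F.softmax(self.head(x), dim=-1)         # flux fractions sum to 1

def normalized_adjacency(adj):
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# Toy usage: a random sparse graph stands in for the HEALPix pixelization.
n_pix = 512
adj = (torch.rand(n_pix, n_pix) < 0.02).float()
adj = ((adj + adj.T) > 0).float()
a_hat = normalized_adjacency(adj)
counts = torch.poisson(torch.full((n_pix, 1), 5.0))    # mock photon-count map

model = BayesianGCN()
with torch.no_grad():
    samples = torch.stack([model(counts, a_hat) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)            # predictive flux fractions and spread
```

After training on simulated maps (omitted here), averaging many stochastic forward passes in this way yields a predictive mean and an uncertainty estimate for each template's flux fraction.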
Related papers
- Graph Neural Networks Do Not Always Oversmooth [46.57665708260211]
We study oversmoothing in graph convolutional networks (GCNs) by using their Gaussian process (GP) equivalence in the limit of infinitely many hidden features.
We identify a new, non-oversmoothing phase: if the initial weights of the network have sufficiently large variance, GCNs do not oversmooth, and node features remain informative even at large depth.
arXiv Detail & Related papers (2024-06-04T12:47:13Z)
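The oversmoothing result summarized above can be illustrated with a toy numerical experiment (not the paper's Gaussian-process analysis): propagate node features through a deep, randomly initialized GCN and track how similar the node representations become with depth for a small versus a large initial weight standard deviation. All sizes and thresholds below are arbitrary choices for the sketch.

```python
# Toy illustration: average pairwise similarity of node features after many
# randomly initialized GCN layers; values near 1 indicate oversmoothing.
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(adj):
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def mean_pairwise_cosine(x):
    """Average cosine similarity over all node pairs."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = xn @ xn.T
    n = x.shape[0]
    return (sim.sum() - n) / (n * (n - 1))

n_nodes, width, depth = 50, 128, 30
adj = (rng.random((n_nodes, n_nodes)) < 0.08).astype(float)
adj = ((adj + adj.T) > 0).astype(float)
a_hat = normalized_adjacency(adj)

for sigma_w in (0.5, 4.0):                      # small vs. large initial weight std
    x = rng.standard_normal((n_nodes, width))
    for _ in range(depth):
        w = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
        x = np.tanh(a_hat @ x @ w)              # one randomly initialized GCN layer
    print(f"sigma_w = {sigma_w}: mean node similarity at depth {depth}: "
          f"{mean_pairwise_cosine(x):.3f}")
```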
- Bayesian deep learning for cosmic volumes with modified gravity [0.0]
This study aims at extracting cosmological parameters from modified gravity (MG) simulations through deep neural networks endowed with uncertainty estimations.
We train both BNNs with real-space density fields and power spectra from a suite of 2000 dark matter only particle mesh $N$-body simulations.
BNNs excel in accurately predicting the parameters $\Omega_m$ and $\sigma_8$ and their respective correlation with the MG parameter.
arXiv Detail & Related papers (2023-09-01T17:59:06Z)
- Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module [65.81781176362848]
Graph Neural Networks (GNNs) can learn from graph-structured data through neighborhood information aggregation.
As the number of layers increases, node representations become indistinguishable, which is known as over-smoothing.
We propose a Posterior-Sampling-based, Node-distinguishing Residual module (PSNR).
arXiv Detail & Related papers (2023-05-09T12:03:42Z)
- Constraining cosmological parameters from N-body simulations with Variational Bayesian Neural Networks [0.0]
Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of BNNs.
We compare MNFs with standard BNNs and with the flipout estimator.
MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation.
arXiv Detail & Related papers (2023-01-09T16:07:48Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In summary, we treat the manifold model as the limit of large graphs and construct MNNs, from which graph neural networks can be recovered by discretizing the MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- Dim but not entirely dark: Extracting the Galactic Center Excess' source-count distribution with neural nets [0.0]
We present a conceptually new approach that describes the PS and Poisson emission in a unified manner.
We find a faint GCE described by a median source-count distribution peaked at a flux of $\sim 4 \times 10^{-11}~\text{counts}~\text{cm}^{-2}~\text{s}^{-1}$.
arXiv Detail & Related papers (2021-07-19T18:00:02Z)
- Neural Contextual Bandits without Regret [47.73483756447701]
We propose algorithms for contextual bandits harnessing neural networks to approximate the unknown reward function.
We show that our approach converges to the optimal policy at a $\tilde{\mathcal{O}}(T^{-1/2d})$ rate, where $d$ is the dimension of the context.
arXiv Detail & Related papers (2021-07-07T11:11:34Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
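For the continual-learning entry above, the following sketch shows a generic quadratic weight-preservation penalty (in the spirit of elastic weight consolidation) rather than the paper's topology-aware TWP module, and an ordinary MLP stands in for the GNN; it only illustrates the overall mechanism of protecting parameters that were important for earlier tasks.

```python
# Hedged sketch: generic weight-preservation penalty for continual learning,
# NOT the paper's TWP module (which additionally exploits the graph topology).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

# After training on the old task: estimate per-parameter importance from the
# squared gradients of the old-task loss, and remember the old parameter values.
x_old, y_old = torch.randn(32, 4), torch.randint(0, 2, (32,))
old_loss = nn.functional.cross_entropy(model(x_old), y_old)
grads = torch.autograd.grad(old_loss, list(model.parameters()))
importance = [g.detach() ** 2 for g in grads]
old_params = [p.detach().clone() for p in model.parameters()]

# While training on a new task: penalize moving important parameters away from
# the values they had after the old task.
x_new, y_new = torch.randn(32, 4), torch.randint(0, 2, (32,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    penalty = sum(
        (imp * (p - p_old) ** 2).sum()
        for p, p_old, imp in zip(model.parameters(), old_params, importance)
    )
    loss = nn.functional.cross_entropy(model(x_new), y_new) + 100.0 * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```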
- Exact posterior distributions of wide Bayesian neural networks [51.20413322972014]
We show that the exact BNN posterior converges (weakly) to the one induced by the GP limit of the prior.
For empirical validation, we show how to generate exact samples from a finite BNN on a small dataset via rejection sampling.
arXiv Detail & Related papers (2020-06-18T13:57:04Z)
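To make the rejection-sampling idea in the last entry concrete, here is a minimal sketch for drawing exact posterior samples from a tiny Bayesian neural network with a Gaussian prior and Gaussian likelihood, using the prior as the proposal and accepting with probability proportional to the likelihood. It is a generic construction for illustration, not the cited paper's specific procedure; the dataset values and network sizes are made up.

```python
# Minimal sketch: exact posterior samples for a tiny BNN via rejection sampling
# with the prior as proposal. Illustrative only; data and sizes are invented.
import numpy as np

rng = np.random.default_rng(1)

# Very small regression dataset (rejection sampling is only feasible for few points).
x = np.array([-1.0, 0.0, 1.0])
y = np.array([0.2, -0.1, 0.4])
noise_sigma = 1.0   # Gaussian observation noise
hidden = 10

def sample_prior():
    """Gaussian priors on all weights; output layer scaled by 1/sqrt(width)."""
    return {
        "w1": rng.standard_normal((1, hidden)),
        "b1": rng.standard_normal(hidden),
        "w2": rng.standard_normal((hidden, 1)) / np.sqrt(hidden),
        "b2": rng.standard_normal(1),
    }

def forward(params, x):
    h = np.tanh(x[:, None] @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).ravel()

posterior_samples = []
while len(posterior_samples) < 100:
    params = sample_prior()                       # proposal = prior
    sse = np.sum((forward(params, x) - y) ** 2)
    # Accept with probability likelihood / (max possible likelihood),
    # i.e. exp(-SSE / (2 sigma^2)); accepted draws are exact posterior samples.
    if rng.random() < np.exp(-sse / (2 * noise_sigma ** 2)):
        posterior_samples.append(params)

preds = np.stack([forward(p, np.array([0.5])) for p in posterior_samples])
print("posterior predictive at x = 0.5:", preds.mean(), "+/-", preds.std())
```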
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.