Dim but not entirely dark: Extracting the Galactic Center Excess'
source-count distribution with neural nets
- URL: http://arxiv.org/abs/2107.09070v1
- Date: Mon, 19 Jul 2021 18:00:02 GMT
- Title: Dim but not entirely dark: Extracting the Galactic Center Excess'
source-count distribution with neural nets
- Authors: Florian List, Nicholas L. Rodd, Geraint F. Lewis
- Abstract summary: We present a conceptually new approach that describes the PS and Poisson emission in a unified manner.
We find a faint GCE described by a median source-count distribution peaked at a flux of $\sim 4 \times 10^{-11} \ \text{counts} \ \text{cm}^{-2} \ \text{s}^{-1}$.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The two leading hypotheses for the Galactic Center Excess (GCE) in the
$\textit{Fermi}$ data are an unresolved population of faint millisecond pulsars
(MSPs) and dark-matter (DM) annihilation. The dichotomy between these
explanations is typically reflected by modeling them as two separate emission
components. However, point-sources (PSs) such as MSPs become statistically
degenerate with smooth Poisson emission in the ultra-faint limit (formally
where each source is expected to contribute much less than one photon on
average), leading to an ambiguity that can render questions such as whether the
emission is PS-like or Poissonian in nature ill-defined. We present a
conceptually new approach that describes the PS and Poisson emission in a
unified manner and only afterwards derives constraints on the Poissonian
component from the results so obtained. For the implementation of this
approach, we leverage deep learning techniques, centered around a neural
network-based method for histogram regression that expresses uncertainties in
terms of quantiles. We demonstrate that our method is robust against a number
of systematics that have plagued previous approaches, in particular DM / PS
misattribution. In the $\textit{Fermi}$ data, we find a faint GCE described by
a median source-count distribution (SCD) peaked at a flux of $\sim4 \times
10^{-11} \ \text{counts} \ \text{cm}^{-2} \ \text{s}^{-1}$ (corresponding to
$\sim3 - 4$ expected counts per PS), which would require $N \sim
\mathcal{O}(10^4)$ sources to explain the entire excess (median value $N =
\text{29,300}$ across the sky). Although faint, this SCD allows us to derive
the constraint $\eta_P \leq 66\%$ for the Poissonian fraction of the GCE flux
$\eta_P$ at 95% confidence, suggesting that a substantial amount of the GCE
flux is due to PSs.
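To make the uncertainty-quantification idea concrete: a network trained to regress histogram quantiles typically minimizes the pinball (quantile) loss. The sketch below is a minimal, hypothetical illustration of that loss; the function name, array shapes, and toy data are assumptions for this example, not the authors' actual implementation.

```python
# Minimal sketch of the pinball (quantile) loss used for quantile
# regression; names and shapes here are illustrative assumptions.
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1).

    y_true, y_pred: arrays of the same shape, e.g. per-bin values of a
    (cumulative) source-count histogram.
    """
    residual = y_true - y_pred
    # Under-prediction is penalized with weight tau and over-prediction
    # with weight (1 - tau), so the minimizer of the expected loss is
    # the tau-quantile of y_true given the input.
    return np.mean(np.maximum(tau * residual, (tau - 1.0) * residual))

# Toy usage: tau = 0.5 trains toward the median histogram, matching the
# "median source-count distribution" quoted in the abstract.
rng = np.random.default_rng(0)
y_true = rng.random(10)                          # stand-in histogram bins
y_pred = y_true + 0.05 * rng.standard_normal(10)
print(pinball_loss(y_true, y_pred, tau=0.5))
```

As a sanity check on the quoted numbers: the flux is given in counts cm$^{-2}$ s$^{-1}$, so the expected counts per source equal the flux times the effective exposure. The quoted $\sim 3 - 4$ counts per PS at a flux of $\sim 4 \times 10^{-11} \ \text{counts} \ \text{cm}^{-2} \ \text{s}^{-1}$ therefore implies an effective exposure of order $10^{11} \ \text{cm}^2 \ \text{s}$.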
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions [11.222970035173372]
We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}} (t^{\frac{d}{2}} \vee 1)\right)$.
We also establish a $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the sample generated by the diffusion model under a mere sub-Gaussian assumption.
arXiv Detail & Related papers (2024-02-23T20:51:31Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion: the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Robust Sparse Mean Estimation via Incremental Learning [15.536082641659423]
In this paper, we study the problem of robust mean estimation, where the goal is to estimate a $k$-sparse mean from a collection of partially corrupted samples.
We present a simple mean estimator that overcomes both challenges under moderate conditions.
Our method does not need any prior knowledge of the sparsity level $k$.
arXiv Detail & Related papers (2023-05-24T16:02:28Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average-reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Causal Bandits for Linear Structural Equation Models [58.2875460517691]
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model.
It is assumed that the graph's structure is known and has $N$ nodes.
Two algorithms are proposed for the frequentist (UCB-based) and Bayesian settings.
arXiv Detail & Related papers (2022-08-26T16:21:31Z) - Overparametrized linear dimensionality reductions: From projection
pursuit to two-layer neural networks [10.368585938419619]
Given a cloud of $n$ data points in $\mathbb{R}^d$, consider all projections onto $m$-dimensional subspaces of $\mathbb{R}^d$.
What does this collection of probability distributions look like when $n,d$ grow large?
Denoting by $\mathscr{F}_{m,\alpha}$ the set of probability distributions in $\mathbb{R}^m$ that arise as low-dimensional projections in this limit, we establish new inner and outer bounds on $\mathscr{F}_{m,\alpha}$.
arXiv Detail & Related papers (2022-06-14T00:07:33Z) - Settling the Sample Complexity of Model-Based Offline Reinforcement
Learning [50.5790774201146]
Offline reinforcement learning (RL) learns from pre-collected data without further exploration.
Prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost.
arXiv Detail & Related papers (2022-04-11T17:26:19Z) - The GCE in a New Light: Disentangling the $\gamma$-ray Sky with Bayesian
Graph Convolutional Neural Networks [5.735035463793008]
A fundamental question regarding the Galactic Center Excess (GCE) is whether the underlying structure is point-like or smooth.
In this work we weigh in on the problem using Bayesian graph convolutional neural networks.
We find that the NN estimates for the flux fractions from the background templates are consistent with the NPTF.
While suggestive, we do not claim a definitive resolution for the GCE, as the NN tends to underestimate the flux of point-sources peaked near the $1\sigma$ detection threshold.
arXiv Detail & Related papers (2020-06-22T18:00:00Z) - Kernel-Based Reinforcement Learning: A Finite-Time Analysis [53.47210316424326]
We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards.
We empirically validate our approach in continuous MDPs with sparse rewards.
arXiv Detail & Related papers (2020-04-12T12:23:46Z)