Extracting the gamma-ray source-count distribution below the Fermi-LAT detection limit with deep learning
- URL: http://arxiv.org/abs/2302.01947v2
- Date: Wed, 15 May 2024 13:11:38 GMT
- Authors: Aurelio Amerio, Alessandro Cuoco, Nicolao Fornengo
- Abstract summary: We train a convolutional neural network on synthetic 2-dimensional sky-maps and incorporate the Fermi-LAT instrumental response functions.
The trained neural network is then applied to the Fermi-LAT data, from which we estimate the source count distribution down to flux levels a factor of 50 below the Fermi-LAT threshold.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We reconstruct the extra-galactic gamma-ray source-count distribution, or $dN/dS$, of resolved and unresolved sources by adopting machine learning techniques. Specifically, we train a convolutional neural network on synthetic 2-dimensional sky-maps, which are built by varying parameters of underlying source-counts models and incorporate the Fermi-LAT instrumental response functions. The trained neural network is then applied to the Fermi-LAT data, from which we estimate the source count distribution down to flux levels a factor of 50 below the Fermi-LAT threshold. We perform our analysis using 14 years of data collected in the $(1,10)$ GeV energy range. The results we obtain show a source count distribution which, in the resolved regime, is in excellent agreement with the one derived from catalogued sources, and then extends as $dN/dS \sim S^{-2}$ in the unresolved regime, down to fluxes of $5 \cdot 10^{-12}$ cm$^{-2}$ s$^{-1}$. The neural network architecture and the devised methodology have the flexibility to enable future analyses to study the energy dependence of the source-count distribution.
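As an illustration of the kind of synthetic training data described in the abstract, the sketch below (not the authors' pipeline) draws source fluxes from a power-law source-count distribution $dN/dS \sim S^{-2}$ via inverse-transform sampling and scatters them onto a toy 2-dimensional sky-map. The map size, source number, and flux limits are illustrative assumptions; the real analysis additionally folds in the Fermi-LAT instrumental response functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fluxes(n_sources, s_min, s_max, gamma=2.0):
    """Draw fluxes from dN/dS proportional to S^-gamma via inverse-transform sampling."""
    u = rng.uniform(size=n_sources)
    if gamma == 1.0:
        # log-uniform limit of the power law
        return s_min * (s_max / s_min) ** u
    a = 1.0 - gamma
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

def make_map(fluxes, npix=64):
    """Scatter point sources at uniform random pixel positions onto an npix x npix map."""
    sky = np.zeros((npix, npix))
    ix = rng.integers(0, npix, size=fluxes.size)
    iy = rng.integers(0, npix, size=fluxes.size)
    np.add.at(sky, (ix, iy), fluxes)  # unbuffered add: repeated pixels accumulate
    return sky

# Illustrative flux range: from the paper's faintest probed flux up to a bright cutoff.
fluxes = sample_fluxes(10_000, s_min=5e-12, s_max=1e-8)
sky = make_map(fluxes)
```

Maps like `sky`, generated for many different source-count model parameters, would then serve as training inputs for a convolutional neural network that regresses the underlying $dN/dS$ parameters.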
Related papers
- Score-based Source Separation with Applications to Digital Communication Signals
We propose a new method for separating superimposed sources using diffusion-based generative models.
Motivated by applications in radio-frequency (RF) systems, we are interested in sources with underlying discrete nature.
Our method can be viewed as a multi-source extension to the recently proposed score distillation sampling scheme.
arXiv Detail & Related papers (2023-06-26T04:12:40Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- FeDXL: Provable Federated Learning for Deep X-Risk Optimization
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges for designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z)
- Accelerated Bayesian SED Modeling using Amortized Neural Posterior Estimation
We present an alternative scalable approach to rigorous Bayesian inference using Amortized Neural Posterior Estimation (ANPE).
ANPE is a simulation-based inference method that employs neural networks to estimate the posterior probability distribution.
We present, and publicly release, $\rm SEDflow$, an ANPE method to produce posteriors of the recent Hahn et al. (2022) SED model from optical photometry.
arXiv Detail & Related papers (2022-03-14T18:00:03Z)
- On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game
We study the reward-free RL problem, where an agent aims to thoroughly explore the environment without any pre-specified reward function.
We tackle this problem under the context of function approximation, leveraging powerful function approximators.
We establish the first provably efficient reward-free RL algorithm with kernel and neural function approximators.
arXiv Detail & Related papers (2021-10-19T07:26:33Z)
- Dim but not entirely dark: Extracting the Galactic Center Excess' source-count distribution with neural nets
We present a conceptually new approach that describes the PS and Poisson emission in a unified manner.
We find a faint GCE described by a median source-count distribution peaked at a flux of $\sim 4 \times 10^{-11}$ counts cm$^{-2}$ s$^{-1}$.
arXiv Detail & Related papers (2021-07-19T18:00:02Z)
- Primordial non-Gaussianity from the Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey I: Catalogue Preparation and Systematic Mitigation
We investigate the large-scale clustering of the final spectroscopic sample of quasars from the recently completed extended Baryon Oscillation Spectroscopic Survey (eBOSS).
We develop a neural network-based approach to mitigate spurious fluctuations in the density field caused by spatial variations in the quality of the imaging data used to select targets for follow-up spectroscopy.
arXiv Detail & Related papers (2021-06-25T16:01:19Z)
- Machine Learning for automatic identification of new minor species
We propose a new method, based on unsupervised machine learning, to automatically detect new minor species.
We approximate the dataset non-linearity by a linear mixture of abundance and source spectra (endmembers).
Our approach is able to detect chemical compounds present in the form of 100 hidden spectra out of $10^4$, at 1.5 times the noise level.
arXiv Detail & Related papers (2020-12-15T09:51:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.