Automatic Machine Learning Framework to Study Morphological Parameters of AGN Host Galaxies within $z < 1.4$ in the Hyper Suprime-Cam Wide Survey
- URL: http://arxiv.org/abs/2501.15739v1
- Date: Mon, 27 Jan 2025 03:04:34 GMT
- Title: Automatic Machine Learning Framework to Study Morphological Parameters of AGN Host Galaxies within $z < 1.4$ in the Hyper Suprime-Cam Wide Survey
- Authors: Chuan Tian, C. Megan Urry, Aritra Ghosh, Daisuke Nagai, Tonima T. Ananna, Meredith C. Powell, Connor Auge, Aayush Mishra, David B. Sanders, Nico Cappelluti, Kevin Schawinski
- Abstract summary: We present a machine learning framework to estimate posterior distributions of bulge-to-total light ratio, half-light radius, and flux for AGN host galaxies.
We use PSFGAN to separate the AGN point-source light from its host galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN) to estimate morphological parameters.
Our framework runs at least three orders of magnitude faster than traditional light-profile fitting methods.
- Score: 4.6218496439194805
- Abstract: We present a composite machine learning framework to estimate posterior probability distributions of bulge-to-total light ratio, half-light radius, and flux for Active Galactic Nucleus (AGN) host galaxies within $z<1.4$ and $m<23$ in the Hyper Suprime-Cam Wide survey. We divide the data into five redshift bins: low ($0<z<0.25$), mid ($0.25<z<0.5$), high ($0.5<z<0.9$), extra ($0.9<z<1.1$), and extreme ($1.1<z<1.4$), and train our models independently in each bin. We use PSFGAN to separate the AGN point-source light from its host galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN) to estimate morphological parameters of the recovered host galaxy. We first trained our models on simulated data, and then fine-tuned our algorithm via transfer learning using labeled real data. To create training labels for transfer learning, we used GALFIT to fit $\sim 20,000$ real HSC galaxies in each redshift bin. We verified that the predicted values from our final models agree well with the GALFIT values for the vast majority of cases. Our PSFGAN + GaMPEN framework runs at least three orders of magnitude faster than traditional light-profile fitting methods, and can easily be retrained for other morphological parameters or on other datasets with diverse ranges of resolutions, seeing conditions, and signal-to-noise ratios, making it an ideal tool for analyzing AGN host galaxies in the large surveys soon to come from the Rubin-LSST, Euclid, and Roman telescopes.
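The workflow is a two-stage pipeline: PSFGAN first removes the unresolved AGN light, then GaMPEN estimates parameter posteriors for the recovered host. A minimal sketch of that flow follows; the function names and bodies are hypothetical placeholders, not the released PSFGAN or GaMPEN APIs.

```python
# Sketch of the two-stage workflow described in the abstract. The function
# names and bodies are hypothetical placeholders, not the released PSFGAN
# or GaMPEN APIs.
import numpy as np

def remove_point_source(cutout: np.ndarray) -> np.ndarray:
    """Stage 1 (PSFGAN): subtract the unresolved AGN light, returning the
    recovered host-galaxy image. A real implementation would run the
    trained GAN for the cutout's redshift bin here."""
    return cutout

def predict_posterior(host: np.ndarray, n_samples: int = 1000) -> dict:
    """Stage 2 (GaMPEN): return posterior samples for the three morphological
    parameters. Placeholder drawing from dummy distributions."""
    rng = np.random.default_rng(0)
    return {
        "bulge_to_total": rng.uniform(0.0, 1.0, n_samples),
        "half_light_radius": rng.lognormal(0.0, 0.5, n_samples),
        "flux": rng.lognormal(2.0, 0.3, n_samples),
    }

# Each cutout would be routed to the models trained for its redshift bin
# (low/mid/high/extra/extreme, as defined above).
cutout = np.zeros((64, 64), dtype=np.float32)   # toy HSC-like cutout
posteriors = predict_posterior(remove_point_source(cutout))
for name, samples in posteriors.items():
    print(f"{name}: {samples.mean():.2f} +/- {samples.std():.2f}")
```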
Related papers
- The Optimization Landscape of SGD Across the Feature Learning Strength [102.1353410293931]
We study the effect of scaling $\gamma$ across a variety of models and datasets in the online training setting.
We find that optimal online performance is often found at large $\gamma$.
Our findings indicate that analytical study of the large-$\gamma$ limit may yield useful insights into the dynamics of representation learning in performant models.
arXiv Detail & Related papers (2024-10-06T22:30:14Z)
- SimBIG: Field-level Simulation-Based Inference of Galaxy Clustering [2.3988372195566443]
We present the first simulation-based inference (SBI) of cosmological parameters from field-level analysis of galaxy clustering.
We apply SimBIG to a subset of the BOSS CMASS galaxy sample using a convolutional neural network with weight averaging to perform massive data compression of the galaxy field.
This work not only presents competitive cosmological constraints but also introduces novel methods for leveraging additional cosmological information in upcoming galaxy surveys like DESI, PFS, and Euclid.
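As a rough illustration of the SBI step, here is a toy neural posterior estimation run using the public `sbi` package; the simulator, parameters, and "observation" below are stand-ins, not the SimBIG CMASS pipeline.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def toy_simulator(theta: torch.Tensor) -> torch.Tensor:
    """Stand-in for 'simulate a galaxy field, compress it with a CNN':
    a noisy, nonlinear summary of the parameters."""
    noise = 0.05 * torch.randn(theta.shape[0], 2 * theta.shape[1])
    return torch.cat([theta, torch.sin(3 * theta)], dim=-1) + noise

# Toy prior over two parameters, e.g. (Omega_m, sigma_8)
prior = BoxUniform(low=torch.tensor([0.1, 0.6]), high=torch.tensor([0.5, 1.0]))
theta = prior.sample((5000,))
x = toy_simulator(theta)

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = toy_simulator(torch.tensor([[0.3, 0.8]]))[0]  # mock observation
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))  # should recover roughly (0.3, 0.8)
```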
arXiv Detail & Related papers (2023-10-23T18:05:32Z)
- Robust Field-level Likelihood-free Inference with Galaxies [0.0]
We train graph neural networks to perform field-level likelihood-free inference using galaxy catalogs from state-of-the-art hydrodynamic simulations of the CAMELS project.
Our models are rotational, translational, and permutation invariant and do not impose any cut on scale.
We find that our models are robust to changes in astrophysics, subgrid physics, and subhalo/galaxy finder.
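A minimal sketch of how such invariances can be enforced, assuming edges carry only pairwise distances (rotation/translation invariance) and aggregation is a sum (permutation invariance); this is illustrative PyTorch, not the CAMELS models.

```python
import torch
import torch.nn as nn

class InvariantGraphRegressor(nn.Module):
    """Toy field-level regressor over a galaxy catalog. Edge features are
    pairwise distances only (invariant to rotations and translations) and
    all aggregations are sums (invariant to galaxy ordering)."""
    def __init__(self, hidden: int = 64, n_params: int = 2):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.readout = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_params))

    def forward(self, pos: torch.Tensor) -> torch.Tensor:
        # pos: (N, 3) galaxy positions for one catalog
        d = torch.cdist(pos, pos).unsqueeze(-1)   # (N, N, 1) pairwise distances
        messages = self.edge_mlp(d).sum(dim=1)    # (N, hidden) sum over neighbors
        graph_feat = messages.sum(dim=0)          # (hidden,) permutation-invariant pooling
        return self.readout(graph_feat)           # e.g. (Omega_m, sigma_8) estimates

model = InvariantGraphRegressor()
catalog = torch.rand(500, 3)                      # toy positions in a unit box
print(model(catalog))
```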
arXiv Detail & Related papers (2023-02-27T19:26:13Z)
- Using Machine Learning to Determine Morphologies of $z<1$ AGN Host Galaxies in the Hyper Suprime-Cam Wide Survey [4.747578120823036]
We present a machine-learning framework to accurately characterize morphologies of AGN host galaxies within $z<1$.
By first training on a large number of simulated galaxies, then fine-tuning using far fewer classified real galaxies, our framework predicts the actual morphology for $\sim 60\%$-$70\%$ of host galaxies in the test sets.
Our models achieve disk precision of $96\%/82\%/79\%$ and bulge precision of $90\%/90\%/80\%$, at thresholds corresponding to indeterminate fractions of $30\%/43\%/42\%$.
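The interplay between precision and indeterminate fraction comes from thresholding the network's output probabilities; the toy illustration below uses an invented threshold rule and random probabilities, not the paper's actual outputs.

```python
import numpy as np

def classify_with_threshold(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Turn per-galaxy class probabilities into labels, marking a source
    indeterminate (-1) when no class is confident enough. Illustrative
    scheme only, not the exact rule used in the paper."""
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = -1
    return labels

# Raising the threshold trades completeness for precision: more galaxies
# become indeterminate, but the remaining classifications are purer.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=10_000)   # fake (disk, bulge, other) probabilities
for t in (0.5, 0.7, 0.9):
    labels = classify_with_threshold(probs, t)
    print(f"threshold={t:.1f}  indeterminate fraction={np.mean(labels == -1):.2f}")
```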
arXiv Detail & Related papers (2022-12-20T04:00:58Z)
- Cosmology from Galaxy Redshift Surveys with PointNet [65.89809800010927]
In cosmology, galaxy redshift surveys resemble a permutation-invariant collection of positions in space.
We employ a PointNet-like neural network to regress the values of the cosmological parameters directly from point cloud data.
Our implementation of PointNets can analyse inputs of $\mathcal{O}(10^4)$-$\mathcal{O}(10^5)$ galaxies at a time, which improves upon earlier work for this application by roughly two orders of magnitude.
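The key ingredient is a symmetric pooling over per-point features, which makes the output independent of galaxy ordering; here is a minimal PointNet-style regressor in PyTorch, with illustrative layer sizes rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    """Minimal PointNet-style network: a shared per-point MLP followed by a
    symmetric (max) pooling, so the output is invariant to the ordering of
    galaxies in the input catalog."""
    def __init__(self, hidden: int = 128, n_params: int = 2):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_params))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, N, 3) galaxy positions; N can be 10^4-10^5
        feats = self.point_mlp(points)       # (batch, N, hidden), shared weights per point
        pooled = feats.max(dim=1).values     # (batch, hidden), order-independent summary
        return self.head(pooled)             # (batch, n_params) cosmological parameters

model = PointNetRegressor()
batch = torch.rand(4, 10_000, 3)             # four toy catalogs of 10^4 points each
print(model(batch).shape)                    # torch.Size([4, 2])
```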
arXiv Detail & Related papers (2022-11-22T15:35:05Z)
- Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model, the Convolutional Damped Random Walk (CDRW), that enables a complete description of quasar spectra.
We also introduce a new method for inferring Gaussian process parameters, which we call Neural Inference.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
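For context, the baseline DRW is a Gaussian process with covariance $k(\Delta t)=\sigma^{2}e^{-|\Delta t|/\tau}$; the sketch below evaluates its likelihood on an irregularly sampled toy light curve (the CDRW kernel and the Neural Inference network themselves are not reproduced here).

```python
import numpy as np

def drw_covariance(t: np.ndarray, sigma: float, tau: float) -> np.ndarray:
    """Covariance of the baseline Damped Random Walk (DRW):
    k(dt) = sigma^2 * exp(-|dt| / tau)."""
    dt = np.abs(t[:, None] - t[None, :])
    return sigma**2 * np.exp(-dt / tau)

def gp_log_likelihood(y, t, sigma, tau):
    """Gaussian-process log-likelihood of an irregularly sampled light curve."""
    K = drw_covariance(t, sigma, tau) + 1e-8 * np.eye(len(t))  # jitter for stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha - np.log(np.diag(L)).sum()
                 - 0.5 * len(t) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1000, 200))   # irregular observation epochs (days)
y = rng.normal(0, 0.2, 200)              # toy mean-subtracted magnitudes
# Classical MLE maximizes this over (sigma, tau); "Neural Inference" replaces
# that optimization with a learned network.
print(gp_log_likelihood(y, t, sigma=0.2, tau=100.0))
```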
arXiv Detail & Related papers (2022-11-17T13:01:26Z)
- Hierarchical Inference of the Lensing Convergence from Photometric Catalogs with Bayesian Graph Neural Networks [0.0]
We introduce fluctuations on galaxy-galaxy lensing scales of $\sim 1''$ and extract random sightlines to train our BGNN.
For each test set of 1,000 sightlines, the BGNN infers the individual $\kappa$ posteriors, which we combine in a hierarchical Bayesian model.
For a test field well sampled by the training set, the BGNN recovers the population mean of $\kappa$ precisely and without bias.
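A toy version of the hierarchical combination step, under the simplifying assumption (ours, not necessarily the paper's) that each sightline's $\kappa$ posterior can be summarized as a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.normal(0.01, 0.02, size=1000)   # toy per-sightline posterior means
s = np.full(1000, 0.03)                  # toy per-sightline posterior widths

sigma_pop = 0.02                          # population scatter, fixed for brevity
grid = np.linspace(-0.05, 0.05, 501)      # grid over the population mean of kappa

# Marginal model for each summary: mu_i ~ N(mu_pop, sigma_pop^2 + s_i^2)
var = sigma_pop**2 + s**2
logL = -0.5 * ((mu[None, :] - grid[:, None])**2 / var[None, :]).sum(axis=1)
post = np.exp(logL - logL.max())
post /= post.sum()
print("population mean of kappa:", (post * grid).sum())
```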
arXiv Detail & Related papers (2022-11-15T00:29:20Z)
- Learning cosmology and clustering with cosmic graphs [0.0]
We train deep learning models on thousands of galaxy catalogues from the state-of-the-art hydrodynamic simulations of the CAMELS project.
We first show that GNNs can learn to compute the power spectrum of galaxy catalogues with a few percent accuracy.
We then train GNNs to perform likelihood-free inference at the galaxy-field level.
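For reference, the statistic in the first result can also be estimated directly from a catalogue; below is a rough gridding-plus-FFT power spectrum, with no shot-noise or window corrections, as a pedagogical stand-in.

```python
import numpy as np

def power_spectrum(pos: np.ndarray, box: float, n_grid: int = 64, n_bins: int = 20):
    """Rough P(k) of a point catalog: nearest-grid-point assignment, FFT,
    then spherical binning of |delta_k|^2. Pedagogical only (no shot-noise
    or window-function corrections)."""
    idx = np.floor(pos / box * n_grid).astype(int) % n_grid
    delta = np.zeros((n_grid,) * 3)
    np.add.at(delta, tuple(idx.T), 1.0)
    delta = delta / delta.mean() - 1.0                  # overdensity field
    dk = np.fft.rfftn(delta) * (box / n_grid) ** 3      # continuum FT convention
    pk3d = np.abs(dk) ** 2 / box ** 3
    kx = 2 * np.pi * np.fft.fftfreq(n_grid, d=box / n_grid)
    kz = 2 * np.pi * np.fft.rfftfreq(n_grid, d=box / n_grid)
    KX, KY, KZ = np.meshgrid(kx, kx, kz, indexing="ij")
    k = np.sqrt(KX**2 + KY**2 + KZ**2)
    bins = np.linspace(2 * np.pi / box, k.max(), n_bins + 1)
    which = np.digitize(k.ravel(), bins)
    pk = np.array([pk3d.ravel()[which == i].mean() for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

rng = np.random.default_rng(2)
pos = rng.uniform(0, 100.0, size=(10_000, 3))           # toy catalog in a 100 Mpc/h box
k_mid, pk = power_spectrum(pos, box=100.0)
print(pk[:5])                                           # roughly flat (shot noise) for random points
```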
arXiv Detail & Related papers (2022-04-28T18:00:02Z)
- Satellite galaxy abundance dependency on cosmology in Magneticum simulations [101.18253437732933]
We build an emulator of satellite abundance based on cosmological parameters.
We find that $A$ and $\beta$ depend on cosmological parameters, albeit weakly.
We also show that the cosmology dependence of satellite abundance differs between full-physics (FP), dark-matter-only (DMO), and non-radiative simulations.
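A sketch of what such an emulator can look like: Gaussian-process regression with scikit-learn on a synthetic table of cosmologies and power-law amplitudes (invented numbers, not Magneticum measurements).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy emulator in the spirit of the summary: learn how a satellite-abundance
# parameter (say the amplitude A) responds to cosmology.
rng = np.random.default_rng(3)
X = rng.uniform([0.2, 0.7], [0.4, 0.9], size=(40, 2))   # (Omega_m, sigma_8) samples
A = 0.1 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.005, 40)  # fake amplitudes

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.1, 0.1]),
    normalize_y=True,
)
gp.fit(X, A)

A_pred, A_std = gp.predict([[0.3, 0.8]], return_std=True)
print(f"A = {A_pred[0]:.3f} +/- {A_std[0]:.3f}")
```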
arXiv Detail & Related papers (2021-10-11T18:00:02Z)
- DeepShadows: Separating Low Surface Brightness Galaxies from Artifacts using Deep Learning [70.80563014913676]
We investigate the use of convolutional neural networks (CNNs) for the problem of separating low-surface-brightness galaxies from artifacts in survey images.
We show that CNNs offer a very promising path in the quest to study the low-surface-brightness universe.
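A minimal CNN classifier of this kind in PyTorch; the architecture and input size are illustrative, not the DeepShadows configuration.

```python
import torch
import torch.nn as nn

# Binary classifier scoring a survey cutout as "LSB galaxy" vs "artifact".
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),                      # logit: > 0 means "galaxy"
)

cutouts = torch.rand(8, 3, 64, 64)         # toy batch of 3-band cutouts
probs = torch.sigmoid(model(cutouts))
print(probs.squeeze(1))
```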
arXiv Detail & Related papers (2020-11-24T22:51:08Z)
- Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||A||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.