LADDER: Revisiting the Cosmic Distance Ladder with Deep Learning Approaches and Exploring its Applications
- URL: http://arxiv.org/abs/2401.17029v2
- Date: Thu, 18 Jul 2024 15:31:24 GMT
- Title: LADDER: Revisiting the Cosmic Distance Ladder with Deep Learning Approaches and Exploring its Applications
- Authors: Rahul Shah, Soumadeep Saha, Purba Mukherjee, Utpal Garain, Supratik Pal
- Abstract summary: LADDER is trained on the apparent magnitude data from the Pantheon Type Ia supernovae compilation.
We demonstrate applications of our method in the cosmological context, including serving as a model-independent tool for consistency checks.
- Score: 1.4330510916280879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the prospect of reconstructing the "cosmic distance ladder" of the Universe using a novel deep learning framework called LADDER - Learning Algorithm for Deep Distance Estimation and Reconstruction. LADDER is trained on the apparent magnitude data from the Pantheon Type Ia supernovae compilation, incorporating the full covariance information among data points, to produce predictions along with corresponding errors. After employing several validation tests with a number of deep learning models, we pick LADDER as the best performing one. We then demonstrate applications of our method in the cosmological context, including serving as a model-independent tool for consistency checks for other datasets like baryon acoustic oscillations, calibration of high-redshift datasets such as gamma-ray bursts, and use as a model-independent mock catalog generator for future probes. Our analysis advocates for careful consideration of machine learning techniques applied to cosmological contexts.
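The abstract gives no implementation detail, but its central ingredient - regressing apparent magnitude against redshift while respecting the full covariance among data points - can be illustrated with a short sketch. Everything below (network shape, toy data, covariance) is a hypothetical stand-in rather than the authors' code; the loss is the standard correlated-Gaussian chi-square.

```python
# Minimal sketch (not the authors' code): regress apparent magnitude m(z)
# with a small MLP under the full data covariance C, training on the
# correlated-Gaussian chi-square. LADDER itself also returns prediction
# errors; only the mean prediction is sketched here.
import torch

torch.manual_seed(0)

# Hypothetical stand-ins for Pantheon redshifts, magnitudes, and covariance.
n = 128
z = torch.rand(n, 1) * 2.3                          # redshifts in (0, 2.3)
m_obs = 24.0 + 5.0 * torch.log10(1.0 + 10.0 * z)    # toy magnitudes, shape (n, 1)
C = 0.02 * torch.eye(n) + 0.005 * torch.ones(n, n)  # toy covariance (pos. def.)
L = torch.linalg.cholesky(C)                        # factor once; C is fixed

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    r = model(z) - m_obs                            # residuals, shape (n, 1)
    u = torch.cholesky_solve(r, L)                  # u = C^{-1} r
    loss = 0.5 * (r * u).sum()                      # chi^2 / 2; log|C| is constant
    opt.zero_grad()
    loss.backward()
    opt.step()
```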
Related papers
- Spherinator and HiPSter: Representation Learning for Unbiased Knowledge Discovery from Simulations [0.0]
We describe a new, unbiased, machine-learning-based approach to obtain useful scientific insights from a broad range of simulations.
Our concept is based on applying nonlinear dimensionality reduction to learn compact representations of the data in a low-dimensional space.
We present a prototype using a rotational invariant hyperspherical variational convolutional autoencoder, utilizing a power distribution in the latent space, and trained on galaxies from IllustrisTNG simulation.
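A minimal sketch of the hyperspherical-latent idea described above, assuming a plain fully connected encoder and simple L2 normalization onto the sphere in place of the paper's power spherical posterior; all names and sizes are illustrative.

```python
# Illustrative sketch, not the Spherinator code: an encoder whose latent
# codes live on the unit hypersphere, so similarity between embeddings is
# an angle rather than an unconstrained Euclidean offset.
import torch

class SphericalEncoder(torch.nn.Module):
    def __init__(self, in_dim: int = 784, latent_dim: int = 3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.net(x)
        # Project onto the sphere; the paper's power spherical posterior
        # would additionally carry a learned concentration parameter.
        return torch.nn.functional.normalize(h, dim=-1)

x = torch.randn(8, 784)        # stand-in for flattened galaxy images
codes = SphericalEncoder()(x)
print(codes.norm(dim=-1))      # all ones: codes lie on the unit sphere
```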
arXiv Detail & Related papers (2024-06-06T07:34:58Z) - deep-REMAP: Parameterization of Stellar Spectra Using Regularized Multi-Task Learning [0.0]
We develop deep-REMAP (Deep-Regularized Ensemble-based Multi-task Learning with Asymmetric Loss for Probabilistic Inference), a novel framework that utilizes the rich synthetic spectra from the PHOENIX library and observational data from the MARVELS survey to accurately predict stellar atmospheric parameters.
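A hedged sketch of the multi-task setup this summary describes: one network regressing several stellar parameters from a spectrum, with a toy asymmetric L1 loss standing in for the paper's asymmetric loss. The dimensions and data are placeholders, not MARVELS or PHOENIX specifics.

```python
# Illustrative sketch, not deep-REMAP itself: one network predicting several
# stellar parameters (e.g. Teff, log g, [Fe/H]) from a spectrum, trained with
# a toy asymmetric L1 loss standing in for the paper's asymmetric loss.
import torch

def asymmetric_l1(pred, target, under_w=1.5, over_w=1.0):
    # Penalize under-prediction more heavily than over-prediction (toy choice).
    diff = pred - target
    return torch.where(diff < 0, under_w * (-diff), over_w * diff).mean()

model = torch.nn.Sequential(     # 4000-pixel spectrum -> 3 parameters
    torch.nn.Linear(4000, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

spectra = torch.randn(32, 4000)  # stand-in for PHOENIX synthetic spectra
labels = torch.randn(32, 3)      # normalized (Teff, log g, [Fe/H]) labels
for _ in range(100):
    loss = asymmetric_l1(model(spectra), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```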
arXiv Detail & Related papers (2023-11-07T05:41:48Z) - Domain Adaptive Graph Neural Networks for Constraining Cosmological Parameters Across Multiple Data Sets [40.19690479537335]
We show that DA-GNN achieves higher accuracy and robustness on cross-dataset tasks.
This shows that DA-GNNs are a promising method for extracting domain-independent cosmological information.
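The summary does not spell out the domain-adaptation penalty; the sketch below shows one common choice, an RBF-kernel maximum mean discrepancy (MMD) between embeddings of two simulation suites, purely as an illustration of the idea. Shapes and the single bandwidth are toy choices.

```python
# Illustrative sketch of one common domain-adaptation penalty: an RBF-kernel
# maximum mean discrepancy (MMD) between feature embeddings of two simulation
# suites, so the downstream regressor sees domain-independent features.
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Biased MMD^2 estimate with a single RBF bandwidth.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

feat_source = torch.randn(64, 16)  # e.g. GNN embeddings of suite A halos
feat_target = torch.randn(64, 16)  # embeddings of suite B halos
penalty = rbf_mmd2(feat_source, feat_target)
# total_loss = regression_loss + lambda_mmd * penalty  (lambda_mmd: hyperparameter)
```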
arXiv Detail & Related papers (2023-11-02T20:40:21Z) - Asteroids co-orbital motion classification based on Machine Learning [0.0]
We consider four different kinds of motion in mean motion resonance with the planet, taking the ephemerides of real asteroids from the JPL Horizons system.
The time series of the variable θ are analyzed with a data-analysis pipeline designed ad hoc for the problem.
We show that the algorithms are able to identify and classify the time series correctly, with a high degree of accuracy.
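As an illustration of the classification task only: a small 1D CNN mapping a time series of θ to one of the four co-orbital motion classes. Architecture, series length, and data are placeholders, not the paper's pipeline.

```python
# Illustrative sketch: a small 1D CNN classifying time series of the variable
# theta into the four co-orbital motion classes; everything is a placeholder.
import torch

model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=7, padding=3), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 4),            # four co-orbital motion classes
)
theta_series = torch.randn(8, 1, 512)  # stand-in for ephemeris-derived theta(t)
logits = model(theta_series)           # shape (8, 4); argmax gives the class
```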
arXiv Detail & Related papers (2023-09-19T13:19:31Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
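A minimal sketch of per-dimension latent quantization as the summary describes it, assuming a learned codebook per latent coordinate and a straight-through gradient estimator; this is an illustration, not the paper's implementation.

```python
# Illustrative sketch of per-dimension latent quantization: each latent
# coordinate is snapped to its nearest value in a small learned codebook,
# with a straight-through estimator so gradients flow to the encoder.
import torch

class LatentQuantizer(torch.nn.Module):
    def __init__(self, latent_dim: int = 8, values_per_dim: int = 10):
        super().__init__()
        self.codebook = torch.nn.Parameter(torch.randn(latent_dim, values_per_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (batch, latent_dim)
        dist = (z.unsqueeze(-1) - self.codebook.unsqueeze(0)).abs()
        idx = dist.argmin(dim=-1)                         # nearest value per dim
        zq = torch.gather(
            self.codebook.expand(z.size(0), -1, -1), 2, idx.unsqueeze(-1)
        ).squeeze(-1)
        return z + (zq - z).detach()                      # straight-through

zq = LatentQuantizer()(torch.randn(4, 8))
```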
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that incorporating geodesics and their accurate computation in the latent space can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Improving Astronomical Time-series Classification via Data Augmentation with Generative Adversarial Networks [1.2891210250935146]
We propose a data augmentation methodology based on Generative Adversarial Networks (GANs) to generate a variety of synthetic light curves from variable stars.
The classification accuracy of variable stars is improved significantly when training with synthetic data and testing with real data.
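The recipe in this summary - augment the training set with GAN-generated light curves, then test on real ones - can be sketched as follows. The `generator` here is a hypothetical stand-in for a trained class-conditional GAN, not the paper's model.

```python
# Illustrative sketch of the augmentation recipe: mix real light curves with
# GAN-generated synthetic ones for training, then evaluate on real data only.
import torch

generator = torch.nn.Sequential(       # placeholder "trained" generator
    torch.nn.Linear(32, 200), torch.nn.Tanh(),
)
real_curves = torch.randn(64, 200)     # stand-in for real light curves
real_labels = torch.randint(0, 5, (64,))

noise = torch.randn(64, 32)
synth_curves = generator(noise).detach()
synth_labels = torch.randint(0, 5, (64,))  # would come from the GAN's conditioning

train_x = torch.cat([real_curves, synth_curves])
train_y = torch.cat([real_labels, synth_labels])
# ... train any classifier on (train_x, train_y); test on held-out real curves.
```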
arXiv Detail & Related papers (2022-05-13T16:39:54Z) - Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z) - Contrastive Neighborhood Alignment [81.65103777329874]
We present Contrastive Neighborhood Alignment (CNA), a manifold learning approach to maintain the topology of learned features.
The target model aims to mimic the local structure of the source representation space using a contrastive loss.
CNA is illustrated in three scenarios: manifold learning, where the model maintains the local topology of the original data in a dimension-reduced space; model distillation, where a small student model is trained to mimic a larger teacher; and legacy model update, where an older model is replaced by a more powerful one.
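A minimal sketch of the alignment loss this summary describes: each sample's nearest neighbor in the source feature space serves as the positive in an InfoNCE-style loss over the target features. Feature dimensions and the single-neighbor positive are illustrative simplifications, not the paper's exact formulation.

```python
# Illustrative sketch of neighborhood alignment: nearest neighbors under the
# *source* representation act as positives in an InfoNCE-style loss computed
# on the *target* representation.
import torch

def cna_loss(src_feat, tgt_feat, temperature=0.1):
    src = torch.nn.functional.normalize(src_feat, dim=-1)
    tgt = torch.nn.functional.normalize(tgt_feat, dim=-1)
    eye = torch.eye(src.size(0), dtype=torch.bool)
    # Positives: nearest neighbors in the source feature space.
    pos = (src @ src.t()).masked_fill(eye, float("-inf")).argmax(dim=-1)
    # Logits: similarities in the target feature space (self excluded).
    logits = (tgt @ tgt.t() / temperature).masked_fill(eye, float("-inf"))
    return torch.nn.functional.cross_entropy(logits, pos)

loss = cna_loss(torch.randn(32, 64), torch.randn(32, 64))
```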
arXiv Detail & Related papers (2022-01-06T04:58:31Z) - CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923]
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
Using self-supervised learning of the search strategy, we evaluate the proposed algorithm on multi-homography estimation and demonstrate accuracy superior to state-of-the-art methods.
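CONSAC's contribution is learning the sampling strategy; the loop it plugs into is sequential sample consensus. Below is a toy version of that loop with uniform sampling and 2D lines in place of homographies, to make the multi-model structure concrete; it is a sketch, not the paper's method.

```python
# Toy sequential sample consensus: fit several models in turn, removing each
# model's inliers before fitting the next. CONSAC replaces the uniform
# sampling here with a learned, conditional sampling network.
import numpy as np

rng = np.random.default_rng(0)

def fit_line(p, q):
    # Normalized line a*x + b*y + c = 0 through two points.
    a, b = q[1] - p[1], p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    return np.array([a, b, c]) / np.hypot(a, b)

def sequential_ransac(pts, n_models=2, iters=200, thresh=0.05):
    models, remaining = [], pts.copy()
    for _ in range(n_models):
        if len(remaining) < 2:
            break
        best_line, best_inliers = None, None
        for _ in range(iters):
            i, j = rng.choice(len(remaining), size=2, replace=False)
            line = fit_line(remaining[i], remaining[j])
            inliers = np.abs(remaining @ line[:2] + line[2]) < thresh
            if best_line is None or inliers.sum() > best_inliers.sum():
                best_line, best_inliers = line, inliers
        models.append(best_line)
        remaining = remaining[~best_inliers]   # drop points already explained
    return models

t = rng.uniform(-1, 1, 50)
line_pts = np.stack([t, 0.5 * t + 0.01 * rng.normal(size=50)], axis=1)
pts = np.concatenate([line_pts, rng.uniform(-1, 1, (50, 2))])
models = sequential_ransac(pts)
```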
arXiv Detail & Related papers (2020-01-08T17:37:01Z)