Mapping Galaxy Images Across Ultraviolet, Visible and Infrared Bands Using Generative Deep Learning
- URL: http://arxiv.org/abs/2501.15149v1
- Date: Sat, 25 Jan 2025 09:13:21 GMT
- Title: Mapping Galaxy Images Across Ultraviolet, Visible and Infrared Bands Using Generative Deep Learning
- Authors: Youssef Zaazou, Alex Bihlo, Terrence S. Tricco
- Abstract summary: Generative deep learning can translate galaxy observations across ultraviolet, visible, and infrared photometric bands.
We develop and validate a supervised image-to-image model capable of performing both band interpolation and extrapolation.
Our model can be used to predict real-world observations, using data from the DECaLS survey as a case study.
- Abstract: We demonstrate that generative deep learning can translate galaxy observations across ultraviolet, visible, and infrared photometric bands. Leveraging mock observations from the Illustris simulations, we develop and validate a supervised image-to-image model capable of performing both band interpolation and extrapolation. The resulting trained models exhibit high fidelity in generating outputs, as verified by both general image comparison metrics (MAE, SSIM, PSNR) and specialized astronomical metrics (GINI coefficient, M20). Moreover, we show that our model can be used to predict real-world observations, using data from the DECaLS survey as a case study. These findings highlight the potential of generative learning to augment astronomical datasets, enabling efficient exploration of multi-band information in regions where observations are incomplete. This work opens new pathways for optimizing mission planning, guiding high-resolution follow-ups, and enhancing our understanding of galaxy morphology and evolution.
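The abstract's general image-comparison metrics (MAE, PSNR) and the astronomical Gini coefficient can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the function names are invented here, and the Gini form follows the standard definition over sorted absolute pixel fluxes (as commonly used in galaxy-morphology work); SSIM and M20 require segmentation-aware implementations and are omitted.

```python
import numpy as np

def mae(a, b):
    # Mean absolute error between two images of equal shape
    return np.mean(np.abs(a - b))

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, data_range]
    mse = np.mean((a - b) ** 2)
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)

def gini(image):
    # Gini coefficient of the pixel-flux distribution:
    # 0 for perfectly uniform flux, 1 when all flux is in one pixel
    x = np.sort(np.abs(image.ravel()))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))
```

For production use, `skimage.metrics.structural_similarity` and `peak_signal_noise_ratio` cover SSIM/PSNR, and the `statmorph` package computes Gini and M20 on segmented galaxy images.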
Related papers
- Spherinator and HiPSter: Representation Learning for Unbiased Knowledge Discovery from Simulations [0.0]
We describe a new, unbiased, and machine learning based approach to obtain useful scientific insights from a broad range of simulations.
Our concept is based on applying nonlinear dimensionality reduction to learn compact representations of the data in a low-dimensional space.
We present a prototype using a rotationally invariant hyperspherical variational convolutional autoencoder, utilizing a power distribution in the latent space, and trained on galaxies from the IllustrisTNG simulation.
arXiv Detail & Related papers (2024-06-06T07:34:58Z) - Preliminary Report on Mantis Shrimp: a Multi-Survey Computer Vision Photometric Redshift Model [0.431625343223275]
Photometric redshift estimation is a well-established subfield of astronomy.
Mantis Shrimp is a computer vision model for photometric redshift estimation that fuses ultraviolet (GALEX), optical (PanSTARRS), and infrared (UnWISE) imagery.
arXiv Detail & Related papers (2024-02-05T21:44:19Z) - Domain Adaptive Graph Neural Networks for Constraining Cosmological Parameters Across Multiple Data Sets [40.19690479537335]
We show that DA-GNN achieves higher accuracy and robustness on cross-dataset tasks.
This shows that DA-GNNs are a promising method for extracting domain-independent cosmological information.
arXiv Detail & Related papers (2023-11-02T20:40:21Z) - StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data [129.92449761766025]
We propose a novel data collection methodology that synchronously synthesizes images and dialogues for visual instruction tuning.
This approach harnesses the power of generative models, marrying the abilities of ChatGPT and text-to-image generative models.
Our research includes comprehensive experiments conducted on various datasets.
arXiv Detail & Related papers (2023-08-20T12:43:52Z) - A Comparative Study on Generative Models for High Resolution Solar Observation Imaging [59.372588316558826]
This work investigates capabilities of current state-of-the-art generative models to accurately capture the data distribution behind observed solar activity states.
Using distributed training on supercomputers, we are able to train generative models at up to 1024x1024 resolution that produce high-quality samples indistinguishable to human experts.
arXiv Detail & Related papers (2023-04-14T14:40:32Z) - GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z) - Supernova Light Curves Approximation based on Neural Network Models [53.180678723280145]
Photometric data-driven classification of supernovae has become challenging with the advent of real-time processing of big data in astronomy.
Recent studies have demonstrated the superior quality of solutions based on various machine learning models.
We study the application of multilayer perceptrons (MLP), Bayesian neural networks (BNN), and normalizing flows (NF) to approximate observations for a single light curve.
arXiv Detail & Related papers (2022-06-27T13:46:51Z) - Realistic galaxy image simulation via score-based generative models [0.0]
We show that a score-based generative model can be used to produce realistic yet fake images that mimic observations of galaxies.
Subjectively, the generated galaxies are highly realistic when compared with samples from the real dataset.
arXiv Detail & Related papers (2021-11-02T16:27:08Z) - Visual Distant Supervision for Scene Graph Generation [66.10579690929623]
Scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation.
We propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines.
arXiv Detail & Related papers (2021-03-29T06:35:24Z) - Self-Supervised Representation Learning for Astronomical Images [1.0499611180329804]
Self-supervised learning recovers representations of sky survey images that are semantically useful.
We show that our approach can achieve the accuracy of supervised models while using 2-4 times fewer labels for training.
arXiv Detail & Related papers (2020-12-24T03:25:36Z) - Interpreting Galaxy Deblender GAN from the Discriminator's Perspective [50.12901802952574]
This research focuses on behaviors of one of the network's major components, the Discriminator, which plays a vital role but is often overlooked.
We demonstrate that our method clearly reveals attention areas of the Discriminator when differentiating generated galaxy images from ground truth images.
arXiv Detail & Related papers (2020-01-17T04:05:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.