Investigation on deep learning-based galaxy image translation models
- URL: http://arxiv.org/abs/2508.03291v1
- Date: Tue, 05 Aug 2025 10:08:26 GMT
- Title: Investigation on deep learning-based galaxy image translation models
- Authors: Hengxin Ruan, Qiufan Lin, Shupei Chen, Yang Wang, Wei Zhang, et al.
- Abstract summary: Galaxy image translation is an important application in galaxy physics and cosmology. Most endeavors on image translation focus on the pixel-level and morphology-level statistics of galaxy images. We investigated the effectiveness of generative models in preserving high-order physical information along with pixel-level and morphology-level information.
- Score: 6.18270362513197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Galaxy image translation is an important application in galaxy physics and cosmology. With deep learning-based generative models, image translation has been performed for image generation, data quality enhancement, and information extraction, and has been generalized to other tasks such as deblending and anomaly detection. However, most endeavors on image translation focus primarily on the pixel-level and morphology-level statistics of galaxy images. There is a lack of discussion on the preservation of complex high-order galaxy physical information, which would be more challenging but crucial for studies that rely on high-fidelity image translation. We therefore investigated the effectiveness of generative models in preserving high-order physical information (represented by spectroscopic redshift) along with pixel-level and morphology-level information. We tested four representative models, i.e. a Swin Transformer, an SRGAN, a capsule network, and a diffusion model, using SDSS and CFHTLS galaxy images. We found that these models fail to varying degrees to retain redshift information, even when the global structures of galaxies and the morphology-level statistics are roughly reproduced. In particular, the cross-band peak fluxes of galaxies were found to contain meaningful redshift information, yet they are subject to noticeable uncertainties during image translation, which may largely stem from the many-to-many nature of the mapping. Nonetheless, imperfectly translated images may still contain a considerable amount of information and thus hold promise for downstream applications that do not strongly require high image fidelity. Our work can facilitate further research on how complex physical information is manifested in galaxy images, and it has implications for the development of image translation models for scientific use.
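The paper's key diagnostic is whether the cross-band peak fluxes that carry redshift information survive translation. The sketch below shows one minimal way such a check could be set up; it is not the authors' pipeline, the arrays are random stand-ins for real SDSS/CFHTLS multi-band cutouts, and the function name `peak_fluxes` is hypothetical.

```python
# A minimal sketch (not the authors' code) of checking whether cross-band
# peak fluxes survive image translation. Synthetic arrays stand in for
# real multi-band galaxy cutouts.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: (n_galaxies, n_bands, H, W) flux cutouts before/after translation.
n_gal, n_band, size = 100, 5, 32
original = rng.lognormal(mean=0.0, sigma=0.5, size=(n_gal, n_band, size, size))
# Mock "translated" output: structure is preserved, but band-wise flux scales
# jitter, mimicking the many-to-many ambiguity discussed in the abstract.
translated = original * rng.normal(1.0, 0.1, size=(n_gal, n_band, 1, 1))

def peak_fluxes(images):
    """Per-galaxy, per-band peak flux: the maximum pixel value in each band."""
    return images.reshape(images.shape[0], images.shape[1], -1).max(axis=2)

p_orig, p_trans = peak_fluxes(original), peak_fluxes(translated)

# Ratios of peak fluxes in adjacent bands act as crude "colors" carrying
# redshift information; compare them before and after translation.
ratio_orig = p_orig[:, 1:] / p_orig[:, :-1]
ratio_trans = p_trans[:, 1:] / p_trans[:, :-1]

residual = (ratio_trans - ratio_orig) / ratio_orig
print("median fractional shift per band pair:", np.median(residual, axis=0))
print("scatter (std) per band pair:", residual.std(axis=0))
```

A large post-translation scatter in these ratios would signal exactly the kind of redshift-information loss the paper reports.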
Related papers
- Can AI Dream of Unseen Galaxies? Conditional Diffusion Model for Galaxy Morphology Augmentation [4.3933321767775135]
We propose a conditional diffusion model to synthesize realistic galaxy images for augmenting machine learning data.
We show that our model generates diverse, high-fidelity galaxy images that closely adhere to the specified morphological feature conditions.
This model enables generative extrapolation to project well-annotated data into unseen domains, advancing rare object detection.
arXiv Detail & Related papers (2025-06-19T11:44:09Z)
- PixCell: A generative foundation model for digital histopathology images [49.00921097924924]
We introduce PixCell, the first diffusion-based generative foundation model for histopathology.
We train PixCell on PanCan-30M, a vast, diverse dataset derived from 69,184 H&E-stained whole slide images covering various cancer types.
arXiv Detail & Related papers (2025-06-05T15:14:32Z)
- A Versatile Framework for Analyzing Galaxy Image Data by Implanting Human-in-the-loop on a Large Vision Model [14.609681101463334]
We present a framework for the general analysis of galaxy images based on a large vision model (LVM) plus downstream tasks (DST).
Considering the low signal-to-noise ratio of galaxy images, we have incorporated a Human-in-the-loop (HITL) module into our large vision model.
For object detection, trained on 1,000 data points, our DST on top of the LVM achieves an accuracy of 96.7%, while ResNet50 plus Mask R-CNN gives an accuracy of 93.1%.
arXiv Detail & Related papers (2024-05-17T16:29:27Z)
- xT: Nested Tokenization for Larger Context in Large Images [79.37673340393475]
xT is a framework for vision transformers which aggregates global context with local details.
We are able to increase accuracy by up to 8.6% on challenging classification tasks.
arXiv Detail & Related papers (2024-03-04T10:29:58Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
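The conditioning mechanism summarized above admits a compact illustration. The following is a hedged sketch, not the paper's architecture: a toy denoiser whose timestep embedding is fused with a projected SSL embedding before predicting the noise. All module names, dimensions, and the simplified noise schedule are assumptions made for brevity.

```python
# Hedged sketch of conditioning a diffusion denoiser on a frozen SSL
# embedding: project the embedding and fuse it with the timestep embedding.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, img_dim=256, ssl_dim=384, hidden=512):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.cond_proj = nn.Linear(ssl_dim, hidden)  # project SSL embedding
        self.net = nn.Sequential(
            nn.Linear(img_dim + hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, img_dim),  # predicts the added noise
        )

    def forward(self, x_noisy, t, ssl_emb):
        cond = self.time_embed(t) + self.cond_proj(ssl_emb)  # fused condition
        return self.net(torch.cat([x_noisy, cond], dim=-1))

# One step of the standard denoising objective, conditioned on ssl_emb.
model = ConditionedDenoiser()
x0 = torch.randn(8, 256)       # stand-in for image latents
ssl_emb = torch.randn(8, 384)  # stand-in for a frozen SSL encoder's output
t = torch.rand(8, 1)
noise = torch.randn_like(x0)
x_noisy = torch.sqrt(1 - t) * x0 + torch.sqrt(t) * noise  # simplified schedule
loss = ((model(x_noisy, t, ssl_emb) - noise) ** 2).mean()
loss.backward()
```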
- Spiral-Elliptical automated galaxy morphology classification from telescope images [0.40792653193642503]
We develop two novel galaxy morphology statistics, descent average and descent variance, which can be efficiently extracted from telescope galaxy images.
We use galaxy image data from the Sloan Digital Sky Survey to demonstrate the effectiveness of our proposed image statistics.
arXiv Detail & Related papers (2023-10-10T22:36:52Z)
- BioGAN: An unpaired GAN-based image to image translation model for microbiological images [1.6427658855248812]
We develop an unpaired GAN-based (generative adversarial network) image-to-image translation model for microbiological images.
We propose a novel design for a GAN model, BioGAN, utilizing adversarial and perceptual losses to transform the high-level features of laboratory-taken images into field images.
arXiv Detail & Related papers (2023-06-09T19:30:49Z)
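Combining an adversarial loss with a perceptual loss, as the BioGAN summary describes, typically looks like the following. This is a generic sketch under my own assumptions (VGG-16 features as the perceptual backbone, an L1 feature distance, a weighting factor `lambda_perc`), not BioGAN's actual implementation; real inputs would also need ImageNet normalization before the VGG pass.

```python
# Generic sketch: adversarial loss (fool the discriminator) plus a VGG-based
# perceptual loss that preserves high-level content of the source image.
import torch
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # frozen feature extractor

bce = nn.BCEWithLogitsLoss()

def generator_loss(fake, source, disc_logits_on_fake, lambda_perc=10.0):
    """Adversarial term pushes outputs toward the target (field) domain;
    the perceptual term keeps the VGG features of the source image."""
    adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    perc = nn.functional.l1_loss(vgg(fake), vgg(source))
    return adv + lambda_perc * perc

# Toy usage with random tensors standing in for model outputs.
fake = torch.rand(2, 3, 224, 224, requires_grad=True)
source = torch.rand(2, 3, 224, 224)
disc_logits = torch.randn(2, 1)  # would come from the discriminator on `fake`
print(float(generator_loss(fake, source, disc_logits)))
```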
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
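Embedding real images into a generator's latent space is commonly done by fitting an encoder against a frozen generator. The toy sketch below illustrates that general recipe only; InvGAN's actual training (jointly with the discriminator) differs, and the tiny MLPs and dimensions here are placeholders.

```python
# Toy GAN-inversion sketch: train an encoder E so that G(E(x)) reconstructs x
# while the pretrained generator G stays frozen.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 256
G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))
E = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))

for p in G.parameters():
    p.requires_grad_(False)  # pretrained generator stays fixed

opt = torch.optim.Adam(E.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(32, img_dim)  # stand-in for real images
    x_rec = G(E(x))               # embed, then regenerate
    loss = nn.functional.mse_loss(x_rec, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The recovered latent E(x) can then be edited or merged before decoding
# with G, enabling inpainting-style applications.
```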
- Multi-Texture GAN: Exploring the Multi-Scale Texture Translation for Brain MR Images [1.9163481966968943]
A significant percentage of existing algorithms cannot explicitly exploit and preserve texture details from target scanners.
In this paper, we design a multi-scale texture transfer to enrich the reconstructed images with more details.
Our method achieves superior results over state-of-the-art methods in both inter-protocol and inter-scanner translation.
arXiv Detail & Related papers (2021-02-14T19:14:06Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)