StrCGAN: A Generative Framework for Stellar Image Restoration
- URL: http://arxiv.org/abs/2509.19805v2
- Date: Thu, 25 Sep 2025 07:22:50 GMT
- Title: StrCGAN: A Generative Framework for Stellar Image Restoration
- Authors: Shantanusinh Parmar
- Abstract summary: We introduce StrCGAN, a generative model designed to enhance low-resolution astrophotography images. Our goal is to reconstruct high-fidelity, ground-truth-like representations of celestial objects, a task that is challenging due to the limited resolution and quality of small-telescope observations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce StrCGAN (Stellar Cyclic GAN), a generative model designed to enhance low-resolution astrophotography images. Our goal is to reconstruct high-fidelity ground truth-like representations of celestial objects, a task that is challenging due to the limited resolution and quality of small-telescope observations such as the MobilTelesco dataset. Traditional models such as CycleGAN provide a foundation for image-to-image translation but are restricted to 2D mappings and often distort the morphology of stars and galaxies. To overcome these limitations, we extend the CycleGAN framework with three key innovations: 3D convolutional layers to capture volumetric spatial correlations, multi-spectral fusion to align optical and near-infrared (NIR) domains, and astrophysical regularization modules to preserve stellar morphology. Ground-truth references from multi-mission all-sky surveys spanning optical to NIR guide the training process, ensuring that reconstructions remain consistent across spectral bands. Together, these components allow StrCGAN to generate reconstructions that are not only visually sharper but also physically consistent, outperforming standard GAN models in the task of astrophysical image enhancement.
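The abstract's first key innovation, replacing 2D convolutions with 3D ones so the network can correlate structure across spectral bands as well as space, can be sketched as a minimal residual block. This is an illustrative assumption of how such a layer might look, not the paper's actual architecture; the class name, channel counts, and band dimension are all hypothetical.

```python
import torch
import torch.nn as nn

class Residual3DBlock(nn.Module):
    """Hypothetical residual block using 3D convolutions, sketching how
    volumetric correlations across spectral bands might be captured in a
    StrCGAN-style generator (layer sizes are illustrative, not from the paper)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the identity path, as in CycleGAN's ResNet blocks
        return x + self.body(x)

# Input treated as a volume: (batch, features, spectral_bands, height, width),
# e.g. 4 bands spanning optical to NIR
x = torch.randn(1, 16, 4, 32, 32)
y = Residual3DBlock(16)(x)   # output shape matches the input
```

The band axis here is the "depth" dimension of the 3D kernel, which is one plausible reading of how multi-spectral fusion and volumetric convolution could be combined.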
Related papers
- HS-3D-NeRF: 3D Surface and Hyperspectral Reconstruction From Stationary Hyperspectral Images Using Multi-Channel NeRFs [6.074219505064872]
We introduce HSI-SC-NeRF, a stationary-camera multi-channel NeRF framework for high-throughput hyperspectral 3D reconstruction.
Experiments on three agricultural produce samples demonstrate high spatial reconstruction accuracy and strong spectral fidelity.
arXiv Detail & Related papers (2026-02-18T23:29:04Z) - From Pixels to Views: Learning Angular-Aware and Physics-Consistent Representations for Light Field Microscopy [12.900713570104749]
We introduce three key contributions to learning-based 3D reconstruction in light field microscopy.
First, we construct the XLFM-Zebrafish benchmark, a large-scale dataset and evaluation suite for XLFM reconstruction.
Second, we propose Masked View Modeling for Light Fields (MVN-LF), a self-supervised task that learns angular priors by predicting occluded views.
Third, we formulate the Optical Rendering Consistency Loss (ORC Loss), a differentiable rendering constraint that enforces alignment between predicted volumes and their PSF-based forward projections.
arXiv Detail & Related papers (2025-10-26T08:28:05Z) - GOGS: High-Fidelity Geometry and Relighting for Glossy Objects via Gaussian Surfels [0.9392167468538465]
Inverse rendering of glossy objects from RGB imagery remains fundamentally limited by inherent ambiguity.
We propose GOGS, a novel two-stage framework based on 2D Gaussian surfels.
We demonstrate state-of-the-art performance in geometry reconstruction, material separation, and photorealistic relighting under novel illuminations.
arXiv Detail & Related papers (2025-08-20T09:35:40Z) - Rotation Equivariant Arbitrary-scale Image Super-Resolution [62.41329042683779]
Arbitrary-scale image super-resolution (ASISR) aims to recover high-resolution images at arbitrary scales from a low-resolution input image.
In this study, we construct a rotation-equivariant ASISR method.
arXiv Detail & Related papers (2025-08-07T08:51:03Z) - STAR: A Benchmark for Astronomical Star Fields Super-Resolution [51.79340280382437]
We propose STAR, a large-scale astronomical SR dataset containing 54,738 flux-consistent star field image pairs.
We also propose a Flux-Invariant Super-Resolution (FISR) model that accurately infers flux-consistent high-resolution images from input photometry.
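Flux consistency, as invoked in this blurb, means the super-resolved star field should preserve the total photometric flux of the low-resolution input. A minimal illustrative post-processing step (not the paper's FISR model; the function name is hypothetical) could look like:

```python
import numpy as np

def match_total_flux(sr: np.ndarray, lr: np.ndarray) -> np.ndarray:
    """Rescale a super-resolved star field so its total flux equals
    that of the low-resolution input, preserving photometry.
    Illustrative sketch only, not the paper's method."""
    return sr * (lr.sum() / sr.sum())

lr = np.array([[4.0]])                    # 1x1 low-resolution patch, flux = 4
sr = np.full((2, 2), 2.0)                 # naive 2x upsample, flux = 8
sr_matched = match_total_flux(sr, lr)     # total flux now matches lr
```

A learned model would enforce this constraint during training rather than as a hard rescale, but the invariant being preserved is the same.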
arXiv Detail & Related papers (2025-07-22T09:28:28Z) - Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering [51.223347330075576]
Ref-Unlock is a novel geometry-aware reflection modeling framework based on 3D Gaussian Splatting.
Our approach employs a dual-branch representation with high-order spherical harmonics to capture high-frequency reflective details.
Our method thus offers an efficient and generalizable solution for realistic rendering of reflective scenes.
arXiv Detail & Related papers (2025-07-08T15:45:08Z) - Bayesian Deconvolution of Astronomical Images with Diffusion Models: Quantifying Prior-Driven Features in Reconstructions [40.13294159814764]
Deconvolution of astronomical images is a key aspect of recovering the intrinsic properties of celestial objects.
This paper explores the use of diffusion models (DMs) and the Diffusion Posterior Sampling (DPS) algorithm to solve this inverse problem task.
arXiv Detail & Related papers (2024-11-28T14:00:00Z) - Towards Degradation-Robust Reconstruction in Generalizable NeRF [58.33351079982745]
Generalizable Radiance Field (GNeRF) across scenes has been proven to be an effective way to avoid per-scene optimization.
There has been limited research on the robustness of GNeRFs to different types of degradation present in the source images.
arXiv Detail & Related papers (2024-11-18T16:13:47Z) - Reconstructing Satellites in 3D from Amateur Telescope Images [44.20773507571372]
We propose a novel computational imaging framework that overcomes obstacles by integrating a hybrid image pre-processing pipeline.
We validate our approach on both synthetic satellite datasets and on-sky observations of China's Tiangong Space Station and the International Space Station.
Our framework enables high-fidelity 3D satellite monitoring from Earth, offering a cost-effective alternative for space situational awareness.
arXiv Detail & Related papers (2024-04-29T03:13:09Z) - SphereDiffusion: Spherical Geometry-Aware Distortion Resilient Diffusion Model [63.685132323224124]
Controllable spherical panoramic image generation holds substantial applicative potential across a variety of domains.
In this paper, we introduce a novel framework of SphereDiffusion to address these unique challenges.
Experiments on Structured3D dataset show that SphereDiffusion significantly improves the quality of controllable spherical image generation and relatively reduces around 35% FID on average.
arXiv Detail & Related papers (2024-03-15T06:26:46Z) - Light Field Diffusion for Single-View Novel View Synthesis [32.59286750410843]
Single-view novel view synthesis (NVS) is important but challenging in computer vision.
Recent advancements in NVS have leveraged Denoising Diffusion Probabilistic Models (DDPMs) for their exceptional ability to produce high-fidelity images.
We present Light Field Diffusion (LFD), a novel conditional diffusion-based approach that transcends the conventional reliance on camera pose matrices.
arXiv Detail & Related papers (2023-09-20T03:27:06Z) - Variable Radiance Field for Real-World Category-Specific Reconstruction from Single Image [25.44715538841181]
Reconstructing category-specific objects using Neural Radiance Field (NeRF) from a single image is a promising yet challenging task.
We propose Variable Radiance Field (VRF), a novel framework capable of efficiently reconstructing category-specific objects.
VRF achieves state-of-the-art performance in both reconstruction quality and computational efficiency.
arXiv Detail & Related papers (2023-06-08T12:12:02Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
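The three-stage formation model described above can be sketched as a forward simulation. The gamma curve below is a stand-in assumption for the learned camera response function, and the function name and defaults are illustrative, not from the paper:

```python
import numpy as np

def hdr_to_ldr(hdr: np.ndarray, gamma: float = 2.2, bits: int = 8) -> np.ndarray:
    """Sketch of the three-stage HDR-to-LDR formation pipeline:
    (1) dynamic range clipping, (2) a non-linear camera response function
    (a simple gamma curve stands in for the learned CRF), (3) quantization."""
    clipped = np.clip(hdr, 0.0, 1.0)                # (1) clip to displayable range
    mapped = clipped ** (1.0 / gamma)               # (2) non-linear CRF (assumed gamma)
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels       # (3) quantize to 2^bits levels

ldr = hdr_to_ldr(np.array([0.0, 0.25, 1.5]))        # values above 1.0 are clipped
```

Reversing the pipeline, the task the paper addresses, then means inverting each stage in turn: dequantization, inverse CRF mapping, and hallucinating the clipped dynamic range.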
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.