Determination of galaxy photometric redshifts using Conditional Generative Adversarial Networks (CGANs)
- URL: http://arxiv.org/abs/2501.06532v1
- Date: Sat, 11 Jan 2025 12:42:07 GMT
- Title: Determination of galaxy photometric redshifts using Conditional Generative Adversarial Networks (CGANs)
- Authors: M. Garcia-Fernandez
- Abstract summary: We present a new algorithmic approach for determining photometric redshifts of galaxies using Conditional Generative Adversarial Networks (CGANs).
The proposed CGAN implementation approaches photometric redshift determination as probabilistic regression: instead of determining a single value for the estimated redshift of a galaxy, a full probability density is computed.
- Score: 0.0
- Abstract: Accurate and reliable photometric redshift determination is one of the key aspects of wide-field photometric surveys. Determination of photometric redshifts for galaxies has traditionally been solved with machine-learning and artificial-intelligence techniques trained on a calibration sample of galaxies for which both photometry and spectrometry are available. In this paper, we present a new algorithmic approach for determining photometric redshifts of galaxies using Conditional Generative Adversarial Networks (CGANs). The proposed CGAN implementation approaches photometric redshift determination as probabilistic regression: instead of determining a single value for the estimated redshift of a galaxy, a full probability density is computed. The proposed methodology is tested with Dark Energy Survey (DES) Y1 data and compared with other existing algorithms, such as a Random Forest regressor.
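As a rough, non-authoritative illustration of the probabilistic-regression idea described in the abstract, the sketch below conditions a generator on galaxy photometry and draws many samples per galaxy to build an empirical redshift probability density. This is a minimal sketch, not the paper's actual architecture or training setup: the number of bands, layer sizes, noise dimension, and redshift range are all assumptions made for the example.

```python
# Minimal sketch of a conditional GAN for probabilistic photometric-redshift
# regression (illustrative only; architecture and hyper-parameters are assumptions,
# not the paper's actual configuration).
import torch
import torch.nn as nn

N_BANDS = 4      # e.g. griz magnitudes (assumed)
NOISE_DIM = 8    # latent noise dimension (assumed)

class Generator(nn.Module):
    """Maps (noise, photometry) -> one redshift sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_BANDS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Softplus(),  # redshift is non-negative
        )
    def forward(self, z, phot):
        return self.net(torch.cat([z, phot], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (photometry, redshift) pair looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BANDS + 1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, phot, z):
        return self.net(torch.cat([phot, z], dim=1))

def train_step(gen, disc, opt_g, opt_d, phot, z_spec):
    """One adversarial update on a calibration batch; z_spec has shape [batch, 1]."""
    bce = nn.BCEWithLogitsLoss()
    batch = phot.size(0)
    noise = torch.randn(batch, NOISE_DIM)

    # Discriminator update: real (photometry, spectroscopic z) vs. generated pairs.
    z_fake = gen(noise, phot).detach()
    d_loss = bce(disc(phot, z_spec), torch.ones(batch, 1)) + \
             bce(disc(phot, z_fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator.
    z_fake = gen(noise, phot)
    g_loss = bce(disc(phot, z_fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

def redshift_pdf(gen, phot_single, n_samples=2000, bins=100, z_max=2.0):
    """Empirical p(z) for one galaxy: sample the generator many times and histogram
    the draws (normalised to sum to one over the bins)."""
    with torch.no_grad():
        phot = phot_single.repeat(n_samples, 1)
        z = gen(torch.randn(n_samples, NOISE_DIM), phot).squeeze(1)
    return torch.histc(z, bins=bins, min=0.0, max=z_max) / n_samples
```

A conventional point-estimate baseline such as scikit-learn's RandomForestRegressor, which the abstract mentions for comparison, would instead map the same photometry to a single redshift value per galaxy rather than a full probability density.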
Related papers
- Mantis Shrimp: Exploring Photometric Band Utilization in Computer Vision Networks for Photometric Redshift Estimation [0.30924355683504173]
We present a model for photometric redshift estimation that fuses ultra-violet (GALEX), optical (PanSTARRS), and infrared (UnWISE) imagery.
Mantis Shrimp estimates the conditional density estimate of redshift using cutout images.
We study how the models learn to use information across bands, finding evidence that our models successfully incorporate information from all surveys.
arXiv Detail & Related papers (2025-01-15T19:46:23Z) - CLAP. I. Resolving miscalibration for deep learning-based galaxy photometric redshift estimation [3.611102630303458]
We develop a novel method called the Contrastive Learning and Adaptive KNN for Photometric Redshift (CLAP).
It leverages supervised contrastive learning (SCL) and k-nearest neighbours (KNN) to construct and calibrate raw probability density estimates.
The harmonic mean is adopted to combine an ensemble of estimates from multiple realisations to improve accuracy (a minimal sketch of this combination step appears after this list).
arXiv Detail & Related papers (2024-10-25T08:46:55Z) - Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Photometric Redshifts with Copula Entropy [1.7125489646780319]
Copula entropy (CE) is used to measure the correlations between photometric measurements and redshifts.
The accuracy of photometric redshifts is improved with the selected measurements.
arXiv Detail & Related papers (2023-10-25T13:33:40Z) - Photo-zSNthesis: Converting Type Ia Supernova Lightcurves to Redshift
Estimates via Deep Learning [0.0]
Photo-zSNthesis is a convolutional neural network-based method for predicting full redshift probability distributions.
We show a 61x improvement in prediction bias <Delta z> on PLAsTiCC simulations and a 5x improvement on real SDSS data.
arXiv Detail & Related papers (2023-05-19T17:59:00Z) - Inferring Structural Parameters of Low-Surface-Brightness-Galaxies with
Uncertainty Quantification using Bayesian Neural Networks [70.80563014913676]
We show that a Bayesian Neural Network (BNN) can be used for the inference, with uncertainty, of such parameters from simulated low-surface-brightness galaxy images.
Compared to traditional profile-fitting methods, we show that the uncertainties obtained using BNNs are comparable in magnitude, well-calibrated, and the point estimates of the parameters are closer to the true values.
arXiv Detail & Related papers (2022-07-07T17:55:26Z) - Photometric Redshift Estimation with Convolutional Neural Networks and
Galaxy Images: A Case Study of Resolving Biases in Data-Driven Methods [0.0]
We investigate two major forms of biases, i.e., class-dependent residuals and mode collapse, in a case study of estimating photometric redshifts.
We propose a set of consecutive steps for resolving the two biases based on CNN models.
Experiments show that our methods possess a better capability in controlling biases compared to benchmark methods.
arXiv Detail & Related papers (2022-02-21T02:59:33Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the network size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid
Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z) - UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional
Variational Autoencoders [81.5490760424213]
We propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a probabilistic RGB-D saliency detection network.
arXiv Detail & Related papers (2020-04-13T04:12:59Z) - Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
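The CLAP entry above mentions combining an ensemble of probability-density estimates with a harmonic mean. The sketch below is only one plausible reading of that sentence, not the CLAP implementation: it assumes the estimates are evaluated on a common redshift grid, guards against zeros with a small epsilon, and renormalises the combined curve to integrate to one.

```python
# Sketch: harmonic-mean combination of an ensemble of redshift PDFs evaluated on a
# common grid (an illustrative reading of the CLAP summary above, not its actual code).
import numpy as np

def harmonic_mean_pdf(pdf_ensemble, z_grid, eps=1e-12):
    """Combine M density estimates (shape [M, n_grid]) into one normalised PDF."""
    pdfs = np.asarray(pdf_ensemble, dtype=float)
    combined = pdfs.shape[0] / np.sum(1.0 / (pdfs + eps), axis=0)  # element-wise harmonic mean
    norm = np.trapz(combined, z_grid)                              # renormalise to integrate to 1
    return combined / norm

# Toy usage: three slightly different Gaussian estimates of the same p(z).
z = np.linspace(0.0, 2.0, 201)
ensemble = [np.exp(-0.5 * ((z - mu) / 0.05) ** 2) for mu in (0.48, 0.50, 0.52)]
ensemble = [p / np.trapz(p, z) for p in ensemble]
p_combined = harmonic_mean_pdf(ensemble, z)
```

Because the harmonic mean is dominated by the smallest values in the ensemble, redshifts that any single realisation deems unlikely are suppressed in the combined estimate, which makes it a conservative way to merge density estimates.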