Astronomical Image Colorization and upscaling with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2112.13865v1
- Date: Mon, 27 Dec 2021 19:01:20 GMT
- Title: Astronomical Image Colorization and upscaling with Generative
Adversarial Networks
- Authors: Shreyas Kalvankar, Hrushikesh Pandit, Pranav Parwate, Atharva Patil
and Snehal Kamalapur
- Abstract summary: This research aims to provide an automated approach for the problem by focusing on a very specific domain of images, namely astronomical images.
We explore the usage of various models in two different color spaces, RGB and L*a*b.
The model produces visually appealing images that hallucinate high-resolution, colorized data that does not exist in the original image.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic colorization of images without human intervention has been a
subject of interest in the machine learning community for a brief period of
time. Assigning color to an image is a highly ill-posed problem because of its
innately high degrees of freedom; given an image, there is often no single
correct color combination. Besides colorization,
another problem in reconstruction of images is Single Image Super Resolution,
which aims at transforming low resolution images to a higher resolution. This
research aims to provide an automated approach for the problem by focusing on a
very specific domain of images, namely astronomical images, and process them
using Generative Adversarial Networks (GANs). We explore the usage of various
models in two different color spaces, RGB and L*a*b. Owing to the small data
set, we use transfer learning: a pre-trained ResNet-18 serves as the backbone,
i.e. the encoder of the U-Net, which we fine-tune further. The model produces
visually appealing images that hallucinate high-resolution, colorized data
that does not exist in the original image. We present our
results by evaluating the GANs quantitatively using distance metrics such as L1
distance and L2 distance in each of the color spaces across all channels to
provide a comparative analysis. We use the Fréchet Inception Distance (FID) to
compare the distribution of the generated images with that of the real images
to assess the model's performance.
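
The L*a*b pipeline described in the abstract implies splitting each training image into a grayscale L channel (the generator's input) and the two a*b* chrominance channels (its target). Below is a minimal preprocessing sketch using scikit-image; the normalization ranges are assumptions, not values reported by the authors.

```python
# Hypothetical L*a*b preprocessing sketch; scaling factors are assumptions.
import numpy as np
from skimage import color

def rgb_to_lab_pair(rgb_uint8: np.ndarray):
    """rgb_uint8: HxWx3 array in [0, 255]. Returns (L, ab) arrays scaled to ~[-1, 1]."""
    lab = color.rgb2lab(rgb_uint8 / 255.0)   # L in [0, 100], a/b roughly in [-128, 127]
    L = lab[..., :1] / 50.0 - 1.0            # grayscale input channel
    ab = lab[..., 1:] / 110.0                # two chrominance target channels
    return L.astype(np.float32), ab.astype(np.float32)

def lab_pair_to_rgb(L: np.ndarray, ab: np.ndarray) -> np.ndarray:
    """Invert the scaling and convert a predicted (L, ab) pair back to RGB."""
    lab = np.concatenate([(L + 1.0) * 50.0, ab * 110.0], axis=-1)
    return (color.lab2rgb(lab) * 255).astype(np.uint8)
```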
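The paper describes a U-Net generator whose encoder is a pre-trained ResNet-18, fine-tuned on the astronomical data set. The PyTorch sketch below illustrates that idea; the decoder layout, channel widths, and single-channel input stem are assumptions rather than the authors' exact architecture.

```python
# Sketch of a U-Net generator with a pre-trained ResNet-18 encoder (assumed layout).
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class UpBlock(nn.Module):
    """Upsample, concatenate the encoder skip feature, then convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class ResNet18UNet(nn.Module):
    def __init__(self):
        super().__init__()
        enc = resnet18(weights=ResNet18_Weights.DEFAULT)
        # Accept a single-channel (L) input instead of RGB; this layer is trained from scratch.
        enc.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu)   # 1/2 resolution, 64 ch
        self.pool = enc.maxpool                                   # 1/4 resolution
        self.enc1, self.enc2 = enc.layer1, enc.layer2             # 64, 128 ch
        self.enc3, self.enc4 = enc.layer3, enc.layer4             # 256, 512 ch
        self.up1 = UpBlock(512, 256, 256)
        self.up2 = UpBlock(256, 128, 128)
        self.up3 = UpBlock(128, 64, 64)
        self.up4 = UpBlock(64, 64, 64)
        self.head = nn.Sequential(nn.Upsample(scale_factor=2),
                                  nn.Conv2d(64, 2, 3, padding=1), nn.Tanh())

    def forward(self, L):
        s0 = self.stem(L)              # 1/2
        s1 = self.enc1(self.pool(s0))  # 1/4
        s2 = self.enc2(s1)             # 1/8
        s3 = self.enc3(s2)             # 1/16
        s4 = self.enc4(s3)             # 1/32
        x = self.up1(s4, s3)
        x = self.up2(x, s2)
        x = self.up3(x, s1)
        x = self.up4(x, s0)
        return self.head(x)            # predicted a*b* channels in [-1, 1]
```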
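For evaluation, the abstract cites per-channel L1 and L2 distances plus FID between the distributions of generated and real images. The following sketch shows one way to compute these quantities; torchmetrics' FrechetInceptionDistance is an assumed choice and is not necessarily the implementation the authors used.

```python
# Sketch of per-channel L1/L2 distances and FID; library choices are assumptions.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def per_channel_distances(fake: torch.Tensor, real: torch.Tensor):
    """fake, real: (N, C, H, W) tensors in the same color space and value range."""
    diff = fake - real
    l1 = diff.abs().mean(dim=(0, 2, 3))          # mean absolute error per channel
    l2 = diff.pow(2).mean(dim=(0, 2, 3)).sqrt()  # root-mean-square error per channel
    return l1, l2

def compute_fid(fake_uint8: torch.Tensor, real_uint8: torch.Tensor) -> float:
    """fake_uint8, real_uint8: (N, 3, H, W) uint8 RGB images in [0, 255]."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_uint8, real=True)
    fid.update(fake_uint8, real=False)
    return float(fid.compute())
```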
Related papers
- SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning
for Automatic Image Colorization [1.220743263007369]
We propose a fully automatic colorization approach based on Symmetric Positive Definite (SPD) Manifold Learning with a generative adversarial network (SPDGAN).
Our model establishes an adversarial game between two discriminators and a generator; the generator's goal is to produce fake colorized images without losing color information across layers, thanks to residual connections.
arXiv Detail & Related papers (2023-12-21T00:52:01Z) - Incorporating Ensemble and Transfer Learning For An End-To-End
Auto-Colorized Image Detection Model [0.0]
This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
arXiv Detail & Related papers (2023-09-25T19:22:57Z) - ParaColorizer: Realistic Image Colorization using Parallel Generative
Networks [1.7778609937758327]
Grayscale image colorization is a fascinating application of AI for information restoration.
We present a parallel GAN-based colorization framework.
We show the shortcomings of the non-perceptual evaluation metrics commonly used to assess multi-modal problems.
arXiv Detail & Related papers (2022-08-17T13:49:44Z) - Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - The color out of space: learning self-supervised representations for
Earth Observation imagery [10.019106184219515]
We propose to learn meaningful representations from satellite imagery, leveraging its high-dimensional spectral bands to reconstruct visible colors.
We conduct experiments on land cover classification (BigEarthNet) and West Nile Virus detection, showing that colorization is a solid pretext task for training a feature extractor.
arXiv Detail & Related papers (2020-06-22T10:21:36Z) - Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z) - Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goal of maintaining spatially precise, high-resolution representations throughout the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - Supervised and Unsupervised Learning of Parameterized Color Enhancement [112.88623543850224]
We tackle the problem of color enhancement as an image translation task using both supervised and unsupervised learning.
We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark.
We show the generalization capability of our method, by applying it on photos from the early 20th century and to dark video frames.
arXiv Detail & Related papers (2019-12-30T13:57:06Z)