SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning
for Automatic Image Colorization
- URL: http://arxiv.org/abs/2312.13506v1
- Date: Thu, 21 Dec 2023 00:52:01 GMT
- Title: SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning
for Automatic Image Colorization
- Authors: Youssef Mourchid, Marc Donias, Yannick Berthoumieu and Mohamed Najim
- Abstract summary: We propose a fully automatic colorization approach based on Symmetric Positive Definite (SPD) Manifold Learning with a generative adversarial network (SPDGAN)
Our model establishes an adversarial game between two discriminators and a generator. Its goal is to generate fake colorized images without losing color information across layers through residual connections.
- Score: 1.220743263007369
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper addresses the automatic colorization problem, which converts a
gray-scale image to a colorized one. Recent deep-learning approaches can
automatically colorize grayscale images. However, for scenes with distinct
color styles, it is difficult to accurately capture the color
characteristics. In this work, we propose a fully automatic
colorization approach based on Symmetric Positive Definite (SPD) Manifold
Learning with a generative adversarial network (SPDGAN) that improves the
quality of the colorization results. Our SPDGAN model establishes an
adversarial game between two discriminators and a generator. The latter is
based on the ResNet architecture with a few alterations. Its goal is to generate fake
colorized images without losing color information across layers through
residual connections. Then, we employ two discriminators from different
domains. The first one is devoted to the image pixel domain, while the second
one operates in the Riemannian manifold domain, which helps to avoid color
misalignment.
Extensive experiments are conducted on the Places365 and COCO-stuff databases
to test the effect of each component of our SPDGAN. In addition, quantitative
and qualitative comparisons with state-of-the-art methods demonstrate the
effectiveness of our model, which achieves more realistic colorized images
with fewer visual artifacts and strong PSNR, SSIM, and FID scores.
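The manifold-domain discriminator described above judges images through Symmetric Positive Definite (SPD) descriptors of their color statistics rather than raw pixels. The sketch below is a simplified NumPy illustration of that idea, not code from the paper: it builds an SPD covariance descriptor from an image's color features and compares two descriptors with the log-Euclidean metric commonly used on the SPD manifold. The function names and the choice of raw RGB features are assumptions for illustration.

```python
import numpy as np

def spd_descriptor(img, eps=1e-5):
    """Covariance of per-pixel color features, regularized so it is
    strictly SPD (symmetric with positive eigenvalues)."""
    feats = img.reshape(-1, img.shape[-1]).astype(float)  # (H*W, C)
    cov = np.cov(feats, rowvar=False)                     # (C, C)
    return cov + eps * np.eye(cov.shape[0])

def log_euclidean_dist(A, B):
    """Log-Euclidean distance between two SPD matrices:
    Frobenius norm of the difference of their matrix logarithms."""
    def logm(M):
        w, V = np.linalg.eigh(M)        # eigendecomposition of SPD matrix
        return (V * np.log(w)) @ V.T    # V diag(log w) V^T
    return np.linalg.norm(logm(A) - logm(B), 'fro')

rng = np.random.default_rng(0)
real = rng.random((32, 32, 3))
fake = rng.random((32, 32, 3)) * 0.5  # compressed color range
d = log_euclidean_dist(spd_descriptor(real), spd_descriptor(fake))
```

A discriminator built on such descriptors penalizes mismatched color statistics (as in the `fake` image's compressed color range) even when individual pixels look plausible, which is one way a manifold-domain critic can counter color misalignment.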
Related papers
- Incorporating Ensemble and Transfer Learning For An End-To-End
Auto-Colorized Image Detection Model [0.0]
This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
arXiv Detail & Related papers (2023-09-25T19:22:57Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus
Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- Improved Diffusion-based Image Colorization via Piggybacked Models [19.807766482434563]
We introduce a colorization model piggybacking on the existing powerful T2I diffusion model.
A diffusion guider is designed to incorporate the pre-trained weights of the latent diffusion model.
A lightness-aware VQVAE will then generate the colorized result with pixel-perfect alignment to the given grayscale image.
arXiv Detail & Related papers (2023-04-21T16:23:24Z) - Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and exhibits good generalization to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z) - Transform your Smartphone into a DSLR Camera: Learning the ISP in the
Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z) - Astronomical Image Colorization and upscaling with Generative
Adversarial Networks [0.0]
This research aims to provide an automated approach for the problem by focusing on a very specific domain of images, namely astronomical images.
We explore the usage of various models in two different color spaces, RGB and L*a*b.
The model produces visually appealing images, hallucinating high-resolution, colorized detail that does not exist in the original image.
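The RGB versus L*a*b comparison above reflects a common choice in colorization work: in CIELAB, the L (lightness) channel is essentially the grayscale input, so a model only has to predict the a and b chrominance channels. A minimal sRGB-to-Lab conversion (D65 white point; a standalone sketch, not code from any of these papers) makes the decomposition concrete:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma curve to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB/D65 matrix), normalized by the white point.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T / np.array([0.95047, 1.0, 1.08883])
    # Piecewise cube-root nonlinearity used by CIELAB.
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16            # lightness: the grayscale input
    a = 500 * (f[..., 0] - f[..., 1])   # green-red opponent channel
    b = 200 * (f[..., 1] - f[..., 2])   # blue-yellow opponent channel
    return np.stack([L, a, b], axis=-1)

white = srgb_to_lab([1.0, 1.0, 1.0])  # L near 100, a and b near 0
```

Predicting 2 chrominance channels conditioned on a fixed L is a smaller, better-constrained problem than regressing all 3 RGB channels, which is why many colorization models operate in this space.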
arXiv Detail & Related papers (2021-12-27T19:01:20Z)
- Semantic-Sparse Colorization Network for Deep Exemplar-based
Colorization [23.301799487207035]
Exemplar-based colorization approaches rely on a reference image to provide plausible colors for a target gray-scale image.
We propose Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and semantic-related colors to the gray-scale image.
Our network can perfectly balance the global and local colors while alleviating the ambiguous matching problem.
arXiv Detail & Related papers (2021-12-02T15:35:10Z)
- HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color
Histograms [52.77252727786091]
HistoGAN is a color histogram-based method for controlling GAN-generated images' colors.
We show how to expand HistoGAN to recolor real images.
arXiv Detail & Related papers (2020-11-23T21:14:19Z)
- SCGAN: Saliency Map-guided Colorization with Generative Adversarial
Network [16.906813829260553]
We propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework.
It jointly predicts the colorization and saliency map to minimize semantic confusion and color bleeding.
Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-23T13:06:54Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
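ColorCNN learns its tiny color space end-to-end from a classification loss; a toy stand-in that conveys the quantization step itself is classic k-means over the pixel colors. The sketch below (plain NumPy; names and the random test image are illustrative, and k-means is a substitute for, not a reproduction of, the learned method) reduces an image to a k-color palette:

```python
import numpy as np

def quantize_colors(img, k=2, iters=10, seed=0):
    """Naive k-means palette quantization: map every pixel to the
    nearest of k learned palette colors."""
    pixels = img.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest palette color.
        dist = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers[labels].reshape(img.shape), centers

rng = np.random.default_rng(1)
img = rng.random((16, 16, 3))
quant, palette = quantize_colors(img, k=2)  # image with at most 2 colors
```

The contrast with ColorCNN is the objective: k-means minimizes per-pixel color error, whereas a learned quantizer picks the k colors that best preserve downstream task accuracy, which is what makes a 1-bit color space usable for classification.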
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.