Towards Photorealistic Colorization by Imagination
- URL: http://arxiv.org/abs/2108.09195v1
- Date: Fri, 20 Aug 2021 14:28:37 GMT
- Title: Towards Photorealistic Colorization by Imagination
- Authors: Chenyang Lei and Yue Wu and Qifeng Chen
- Abstract summary: We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Our work produces more colorful and diverse results than state-of-the-art image colorization methods.
- Score: 48.82757902812846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach to automatic image colorization by imitating the
imagination process of human experts. Our imagination module is designed to
generate color images that are context-correlated with black-and-white photos.
Given a black-and-white image, our imagination module first extracts the
context information, which is then used to synthesize colorful and diverse
images using a conditional image synthesis network (e.g., semantic image
synthesis model). We then design a colorization module to colorize the
black-and-white images with the guidance of imagination for photorealistic
colorization. Experimental results show that our work produces more colorful
and diverse results than state-of-the-art image colorization methods. Our
source code will be publicly available.
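The pipeline described above has two stages: an imagination module that synthesizes color references conditioned on the extracted context, and a colorization module guided by those references. A minimal structural sketch follows; the helper functions are hypothetical stand-ins (a luminance histogram for context, a random tint for synthesis, a crude chroma transfer for guided colorization), not the authors' networks:

```python
import numpy as np

def extract_context(gray):
    """Hypothetical context extractor: a coarse luminance histogram
    standing in for the paper's learned context features."""
    hist, _ = np.histogram(gray, bins=16, range=(0.0, 1.0), density=True)
    return hist

def imagine_reference(gray, context, seed=0):
    """Stand-in for the conditional image synthesis network. The real
    model conditions on `context`; here we just tint the grayscale
    input with a random chroma per seed, which mimics the 'diverse
    references' behavior, not the actual synthesis."""
    rng = np.random.default_rng(seed)
    chroma = rng.uniform(0.6, 1.0, size=3)          # random RGB tint
    return np.clip(gray[..., None] * chroma, 0.0, 1.0)

def colorize_with_guidance(gray, reference):
    """Stand-in colorization module: keep the input's luminance and
    borrow the reference's per-pixel color ratios (chroma transfer)."""
    ref_lum = reference.mean(axis=-1, keepdims=True) + 1e-6
    ratios = reference / ref_lum                    # per-pixel color ratios
    return np.clip(gray[..., None] * ratios, 0.0, 1.0)

def imagination_colorize(gray, n_refs=3):
    """Full pipeline: context -> diverse references -> guided outputs."""
    context = extract_context(gray)
    refs = [imagine_reference(gray, context, seed=s) for s in range(n_refs)]
    return [colorize_with_guidance(gray, r) for r in refs]
```

Each of the `n_refs` outputs shares the input's luminance structure but differs in color, mirroring the "colorful and diverse results" claim.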
Related papers
- Transforming Color: A Novel Image Colorization Method [8.041659727964305]
This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs).
The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality.
Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques.
arXiv Detail & Related papers (2024-10-07T07:23:42Z)
- MultiColor: Image Colorization by Learning from Multiple Color Spaces [4.738828630428634]
MultiColor is a new learning-based approach to automatically colorize grayscale images.
We employ a set of dedicated colorization modules, one for each individual color space.
With these predicted color channels representing various color spaces, a complementary network is designed to exploit the complementarity and generate pleasing and reasonable colorized images.
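The complementary combination described above can be sketched as a confidence-weighted fusion of per-color-space predictions. A minimal sketch follows; the scalar weights are synthetic placeholders, whereas the paper's complementary network would learn per-pixel combination:

```python
import numpy as np

def fuse_color_spaces(predictions, weights):
    """Combine RGB predictions obtained from different color spaces.
    `predictions`: list of (H, W, 3) arrays, each already converted
    back to RGB from its source space (e.g., Lab, HSV, YUV).
    `weights`: per-prediction scalar confidences (placeholder for the
    learned complementary network)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                     # normalize to a convex combination
    stacked = np.stack(predictions)     # (N, H, W, 3)
    return np.tensordot(w, stacked, axes=1)
```

Because the weights are normalized, the fused image stays within the range spanned by the individual predictions.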
arXiv Detail & Related papers (2024-08-08T02:34:41Z)
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Colorizing Monochromatic Radiance Fields [55.695149357101755]
We consider reproducing color from monochromatic radiance fields as a representation-prediction task in the Lab color space.
By first constructing the luminance and density representation using monochromatic images, our prediction stage can recreate color representation on the basis of an image colorization module.
We then reproduce a colorful implicit model through the representation of luminance, density, and color.
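The Lab decomposition above makes the division of labor explicit: the monochromatic observation fixes the luminance channel L, and prediction only has to supply the chroma channels a and b. A minimal sketch, with a hypothetical fixed-tint predictor standing in for the learned colorization module:

```python
import numpy as np

def predict_ab(L):
    """Hypothetical chroma predictor standing in for the learned
    colorization module: a fixed warm tint whose strength follows
    luminance, just to produce valid (a, b) channels."""
    a = 20.0 * (L / 100.0)          # reddish component grows with luminance
    b = 10.0 * (L / 100.0)          # yellowish component
    return np.stack([a, b], axis=-1)

def colorize_lab(gray01):
    """Assemble a Lab image from a grayscale input in [0, 1].
    The luminance channel is fixed by the input (L in [0, 100]);
    only the chroma channels are predicted."""
    L = gray01 * 100.0
    ab = predict_ab(L)
    return np.concatenate([L[..., None], ab], axis=-1)   # (H, W, 3) Lab
```

Because L is copied from the input, the colorized result is guaranteed to agree with the monochromatic observation in luminance; all the model's freedom lives in a and b.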
arXiv Detail & Related papers (2024-02-19T14:47:23Z) - Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z) - MMC: Multi-Modal Colorization of Images using Textual Descriptions [22.666387184216678]
We propose a deep network that takes two inputs, a grayscale image and its encoded text description, and predicts the relevant color components.
We also detect each object in the image and colorize it with its individual description, incorporating object-specific attributes into the colorization process.
The proposed method outperforms existing colorization techniques on the LPIPS, PSNR, and SSIM metrics.
arXiv Detail & Related papers (2023-04-24T10:53:13Z) - Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior by letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z) - Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.