Semantic-driven Colorization
- URL: http://arxiv.org/abs/2006.07587v3
- Date: Sat, 14 Aug 2021 13:19:14 GMT
- Title: Semantic-driven Colorization
- Authors: Man M. Ho, Lu Zhang, Alexander Raake, Jinjia Zhou
- Abstract summary: Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate that human-like action to let our network first learn to understand the photo, then colorize it.
- Score: 78.88814849391352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent colorization works implicitly predict the semantic information while
learning to colorize black-and-white images. Consequently, the generated colors
tend to bleed across object boundaries, and semantic faults remain invisible. When a
human colorizes a photo, the brain first detects and recognizes the objects
in it, then imagines their plausible colors based on many similar objects
seen in real life, and finally colorizes them, as described in the
teaser. In this study, we simulate that human-like process by letting our network
first learn to understand the photo, then colorize it. Thus, our work can
provide plausible colors at a semantic level. Moreover, the semantic information of
the learned model becomes understandable and interactive. Additionally, we
show that Instance Normalization is a missing ingredient for
colorization, and re-design the inference flow of U-Net into two streams of
data, providing an appropriate way of normalizing the feature maps from the
black-and-white image and its semantic map separately. As a result, our network can
provide plausible colors competitive with typical colorization works for
specific objects.
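The normalization idea behind the two-stream design can be illustrated with a plain NumPy sketch (a hypothetical illustration, not the authors' implementation): Instance Normalization standardizes each feature map of each sample by its own statistics, so when the black-and-white image and its semantic map are kept in separate streams, the very different statistics of the two inputs never contaminate each other.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Normalize every (sample, channel) map by its own
    # mean and variance, as Instance Normalization does (no learned
    # affine parameters in this sketch).
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
# Two streams with deliberately different statistics, standing in for
# features of the black-and-white image and of its semantic map.
gray_feats = rng.normal(5.0, 2.0, size=(1, 4, 8, 8))
sem_feats = rng.normal(-3.0, 0.5, size=(1, 4, 8, 8))

# Normalizing each stream separately brings both to roughly zero mean
# and unit variance, regardless of their original scales.
g = instance_norm(gray_feats)
s = instance_norm(sem_feats)
```

A shared normalization over concatenated streams would instead mix the two distributions, which is the failure mode the paper's two-stream re-design avoids.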
Related papers
- Audio-Infused Automatic Image Colorization by Exploiting Audio Scene Semantics [54.980359694044566]
This paper tries to utilize corresponding audio, which naturally contains extra semantic information about the same scene.
Experiments demonstrate that audio guidance can effectively improve the performance of automatic colorization.
arXiv Detail & Related papers (2024-01-24T07:22:05Z)
- Towards Photorealistic Colorization by Imagination [48.82757902812846]
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Our work produces more colorful and diverse results than state-of-the-art image colorization methods.
arXiv Detail & Related papers (2021-08-20T14:28:37Z)
- Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z)
- Is It a Plausible Colour? UCapsNet for Image Colourisation [38.88087332284959]
We introduce a novel architecture for colourisation of grayscale images.
The architecture is based on Capsules trained following the adversarial learning paradigm.
We show that our approach is able to generate more vibrant and plausible colours than existing solutions.
arXiv Detail & Related papers (2020-12-04T09:07:13Z)
- SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network [16.906813829260553]
We propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework.
It jointly predicts the colorization and saliency map to minimize semantic confusion and color bleeding.
Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-23T13:06:54Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
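The fusion step can be sketched roughly as follows (a hypothetical illustration, not the paper's code: the paper learns per-location fusion weights, while this sketch blends each detected instance's features into the full-image features with a fixed constant weight):

```python
import numpy as np

def fuse_instance_features(full_feat, inst_feats, boxes, inst_weight=0.7):
    # full_feat: (C, H, W) features of the whole image.
    # inst_feats: list of (C, h, w) feature crops, one per detected object,
    #             already resized to match their bounding boxes.
    # boxes: list of (y0, y1, x0, x1) bounding boxes from the detector.
    # Blend each instance crop into the full-image features; a real fusion
    # module would predict inst_weight per pixel instead of using a constant.
    out = full_feat.copy()
    for feat, (y0, y1, x0, x1) in zip(inst_feats, boxes):
        region = out[:, y0:y1, x0:x1]
        out[:, y0:y1, x0:x1] = (1 - inst_weight) * region + inst_weight * feat
    return out

# Toy demo: one 4x4 instance pasted into an 8x8 feature map of zeros.
full = np.zeros((2, 8, 8))
fused = fuse_instance_features(full, [np.ones((2, 4, 4))], [(0, 4, 0, 4)])
```

Inside the box the result is a blend of full-image and instance features; outside it the full-image features pass through unchanged.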
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
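For intuition, a non-learned baseline for a 1-bit (two-color) space can be built by splitting pixels at the median luminance and assigning each half its mean color (an illustrative stand-in of my own, not ColorCNN, which learns its color structure end-to-end from the classification loss):

```python
import numpy as np

def two_color_quantize(img):
    # img: (H, W, 3) floats in [0, 1]. Split pixels at the median
    # luminance and replace each group with its mean color -- a crude,
    # non-learned version of structuring an image with two colors.
    lum = img @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luma weights
    mask = lum >= np.median(lum)
    palette = np.stack([img[~mask].mean(axis=0), img[mask].mean(axis=0)])
    return palette[mask.astype(int)]

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
quantized = two_color_quantize(img)
```

The resulting image contains at most two distinct colors; ColorCNN's contribution is choosing that tiny palette (and the pixel assignment) so that a downstream classifier still performs well.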
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences.