Semantic-Sparse Colorization Network for Deep Exemplar-based
Colorization
- URL: http://arxiv.org/abs/2112.01335v1
- Date: Thu, 2 Dec 2021 15:35:10 GMT
- Title: Semantic-Sparse Colorization Network for Deep Exemplar-based
Colorization
- Authors: Yunpeng Bai, Chao Dong, Zenghao Chai, Andong Wang, Zhengzhuo Xu, Chun
Yuan
- Abstract summary: Exemplar-based colorization approaches rely on a reference image to provide plausible colors for the target gray-scale image.
We propose the Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and semantic-related colors to the gray-scale image.
Our network balances the global and local colors while alleviating the ambiguous matching problem.
- Score: 23.301799487207035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-based colorization approaches rely on a reference image
to provide plausible colors for the target gray-scale image. The key
difficulty of exemplar-based colorization is establishing an accurate
correspondence between the two images. Previous approaches have attempted to
construct such a correspondence but face two obstacles. First, computing the
correspondence from luminance channels alone is inaccurate. Second, the dense
correspondence they build introduces wrong matches and increases the
computational burden. To address these two problems, we propose the
Semantic-Sparse Colorization Network (SSCN), which transfers both the global
image style and detailed semantic-related colors to the gray-scale image in a
coarse-to-fine manner. Our network balances the global and local colors while
alleviating the ambiguous matching problem. Experiments show that our method
outperforms existing methods in both quantitative and qualitative evaluation
and achieves state-of-the-art performance.
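To illustrate the sparse-matching idea (this is not the authors' SSCN implementation; the feature extractor, region definitions, and function name below are hypothetical stand-ins), transferring reference chrominance via a sparse semantic correspondence rather than a dense luminance one might be sketched as:

```python
import numpy as np

def sparse_semantic_color_transfer(target_feats, ref_feats, ref_colors):
    """Toy sketch: match a few semantic region features between the target
    and the reference, then copy each matched region's ab chrominance.

    target_feats: (Nt, D) semantic features of target regions
    ref_feats:    (Nr, D) semantic features of reference regions
    ref_colors:   (Nr, 2) ab chrominance of the reference regions
    Returns (Nt, 2) transferred ab colors for the target regions.
    """
    # Normalize features so dot products become cosine similarities.
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = t @ r.T              # (Nt, Nr) semantic similarity matrix
    best = sim.argmax(axis=1)  # one sparse match per target region
    return ref_colors[best]
```

Matching a handful of semantic regions instead of every pixel pair is what keeps the correspondence cheap and avoids the ambiguous dense matches the abstract describes.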
Related papers
- SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning
for Automatic Image Colorization [1.220743263007369]
We propose a fully automatic colorization approach based on Symmetric Positive Definite (SPD) Manifold Learning with a generative adversarial network (SPDGAN).
Our model establishes an adversarial game between two discriminators and a generator. Its goal is to generate fake colorized images without losing color information across layers through residual connections.
arXiv Detail & Related papers (2023-12-21T00:52:01Z)
- Improved Diffusion-based Image Colorization via Piggybacked Models [19.807766482434563]
We introduce a colorization model piggybacking on the existing powerful T2I diffusion model.
A diffusion guider is designed to incorporate the pre-trained weights of the latent diffusion model.
A lightness-aware VQVAE will then generate the colorized result with pixel-perfect alignment to the given grayscale image.
arXiv Detail & Related papers (2023-04-21T16:23:24Z)
- SPColor: Semantic Prior Guided Exemplar-based Image Colorization [14.191819767895867]
We propose SPColor, a semantic-prior-guided exemplar-based image colorization framework.
SPColor first coarsely classifies pixels of the reference and target images to several pseudo-classes under the guidance of semantic prior.
Our model outperforms recent state-of-the-art methods both quantitatively and qualitatively on public datasets.
arXiv Detail & Related papers (2023-04-13T04:21:45Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms the state of the art in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- TIC: Text-Guided Image Colorization [24.317541784957285]
We propose a novel deep network that takes two inputs (the grayscale image and the respective encoded text description) and tries to predict the relevant color gamut.
As the respective textual descriptions contain color information of the objects present in the scene, the text encoding helps to improve the overall quality of the predicted colors.
We have evaluated our proposed model using different metrics and found that it outperforms the state-of-the-art colorization algorithms both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-08-04T18:40:20Z)
- Immiscible Color Flows in Optimal Transport Networks for Image Classification [68.8204255655161]
We propose a physics-inspired system that adapts Optimal Transport principles to leverage color distributions of images.
Our dynamics regulates the immiscibility of colors traveling on a network built from images.
Our method outperforms competitor algorithms on image classification tasks in datasets where color information matters.
arXiv Detail & Related papers (2022-05-04T12:41:36Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Color2Style: Real-Time Exemplar-Based Image Colorization with Self-Reference Learning and Deep Feature Modulation [29.270149925368674]
We present a deep exemplar-based image colorization approach named Color2Style to resurrect grayscale image media by filling them with vibrant colors.
Our method exploits a simple yet effective deep feature modulation (DFM) module, which injects the color embeddings extracted from the reference image into the deep representations of the input grayscale image.
arXiv Detail & Related papers (2021-06-15T10:05:58Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate that human-like action to let our network first learn to understand the photo, then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
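To make the 1-bit setting above concrete (this is not ColorCNN itself, which learns its quantization end-to-end from a classification loss; the luminance split below is a hypothetical stand-in), a two-color quantizer can be sketched as:

```python
import numpy as np

def one_bit_quantize(img):
    """Toy 1-bit color quantization: represent an image with two colors.

    img: (H, W, 3) float array in [0, 1], assumed to contain both pixels
    above and below its mean luminance (otherwise one group is empty).
    Returns an image of the same shape using exactly two colors.
    """
    # Per-pixel luminance via the standard Rec. 601 weights.
    lum = img @ np.array([0.299, 0.587, 0.114])
    mask = lum > lum.mean()             # the 1-bit assignment per pixel
    out = np.empty_like(img)
    out[mask] = img[mask].mean(axis=0)   # mean color of the bright group
    out[~mask] = img[~mask].mean(axis=0) # mean color of the dark group
    return out
```

The gap between this heuristic and ColorCNN's 82.1% CIFAR10 accuracy is exactly the point of the paper: choosing the two colors (and the assignment) to preserve class-discriminative structure, not just luminance.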
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.