Guided Colorization Using Mono-Color Image Pairs
- URL: http://arxiv.org/abs/2108.07471v1
- Date: Tue, 17 Aug 2021 07:00:28 GMT
- Title: Guided Colorization Using Mono-Color Image Pairs
- Authors: Ze-Hua Sheng, Hui-Liang Shen, Bo-Wen Yao, Huaqi Zhang
- Abstract summary: Monochrome images usually have a better signal-to-noise ratio (SNR) and richer textures due to their higher quantum efficiency.
We propose a mono-color image enhancement algorithm that colorizes the monochrome image with the color one.
Experimental results show that our algorithm can efficiently restore color images with higher SNR and richer details from the mono-color image pairs.
- Score: 6.729108277517129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to color images captured by conventional RGB cameras, monochrome
images usually have a better signal-to-noise ratio (SNR) and richer textures due
to their higher quantum efficiency. It is thus natural to apply a mono-color
dual-camera system to restore color images with higher visual quality. In this
paper, we propose a mono-color image enhancement algorithm that colorizes the
monochrome image with the color one. Based on the assumption that adjacent
structures with similar luminance values are likely to have similar colors, we
first perform dense scribbling to assign colors to the monochrome pixels
through block matching. Two types of outliers, including occlusion and color
ambiguity, are detected and removed from the initial scribbles. We also
introduce a sampling strategy to accelerate the scribbling process. Then, the
dense scribbles are propagated to the entire image. To alleviate incorrect
color propagation in the regions that have no color hints at all, we generate
extra color seeds based on the existing scribbles to guide the propagation
process. Experimental results show that our algorithm can efficiently restore
color images with higher SNR and richer details from the mono-color image
pairs, and achieves good performance in solving the color bleeding problem.
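The pipeline described in the abstract, dense scribbling by block matching followed by propagation, can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions (nearly rectified cameras, SSD matching over a small horizontal search window, no outlier rejection); the function and parameter names are hypothetical, not from the paper:

```python
import numpy as np

def block_match_scribbles(mono, color_y, color_uv, block=4, search=2):
    """Assign chrominance to monochrome pixels by block matching.

    mono:     (H, W) monochrome luminance
    color_y:  (H, W) luminance of the color-camera image
    color_uv: (H, W, 2) chrominance channels of the color image
    Returns an (H, W, 2) dense chrominance "scribble" map.
    """
    H, W = mono.shape
    uv = np.zeros((H, W, 2), dtype=np.float64)
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            ref = mono[i:i+block, j:j+block].astype(np.float64)
            best_j, best_cost = j, np.inf
            # Search a small horizontal window around the same location
            # (a dual-camera rig is nearly rectified, so disparity is small).
            for dj in range(-search, search + 1):
                jj = j + dj
                if jj < 0 or jj + block > W:
                    continue
                cand = color_y[i:i+block, jj:jj+block]
                cost = np.sum((ref - cand) ** 2)  # SSD matching cost
                if cost < best_cost:
                    best_cost, best_j = cost, jj
            # Copy the chrominance of the best-matching color block.
            uv[i:i+block, j:j+block] = color_uv[i:i+block, best_j:best_j+block]
    return uv
```

In the paper's full method these scribbles are further filtered for occlusion and color ambiguity and then propagated to the whole image; the sketch above only covers the initial matching step.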
Related papers
- Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior [15.188673173327658]
We leverage the extraordinary generative ability of the diffusion prior to synthesize color with plausible semantics.
We adopt multimodal high-level semantic priors to help the model understand the image content and deliver saturated colors.
A luminance-aware decoder is designed to restore details and enhance overall visual quality.
arXiv Detail & Related papers (2024-04-25T15:28:22Z)
- Colorizing Monochromatic Radiance Fields [55.695149357101755]
We consider reproducing color from monochromatic radiance fields as a representation-prediction task in the Lab color space.
By first constructing the luminance and density representation using monochromatic images, our prediction stage can recreate color representation on the basis of an image colorization module.
We then reproduce a colorful implicit model through the representation of luminance, density, and color.
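Working in the Lab color space, as this entry does, lets the luminance channel L come directly from the monochromatic input while only the chrominance channels (a, b) need to be predicted. For reference, a standard sRGB-to-Lab conversion (textbook formulas, not code from the paper) looks like:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIE Lab (D65 white point)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma curve.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (standard sRGB/D65 matrix).
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    # Normalize by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16            # luminance
    a = 500 * (f[..., 0] - f[..., 1])   # green-red chrominance
    b = 200 * (f[..., 1] - f[..., 2])   # blue-yellow chrominance
    return np.stack([L, a, b], axis=-1)
```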
arXiv Detail & Related papers (2024-02-19T14:47:23Z)
- SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning for Automatic Image Colorization [1.220743263007369]
We propose a fully automatic colorization approach based on Symmetric Positive Definite (SPD) Manifold Learning with a generative adversarial network (SPDGAN)
Our model establishes an adversarial game between two discriminators and a generator, whose goal is to generate colorized images without losing color information across layers, thanks to residual connections.
arXiv Detail & Related papers (2023-12-21T00:52:01Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Semantic-Sparse Colorization Network for Deep Exemplar-based Colorization [23.301799487207035]
Exemplar-based colorization approaches rely on a reference image to provide plausible colors for a target gray-scale image.
We propose Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and semantic-related colors to the gray-scale image.
Our network can perfectly balance the global and local colors while alleviating the ambiguous matching problem.
arXiv Detail & Related papers (2021-12-02T15:35:10Z)
- Generative Probabilistic Image Colorization [2.110198946293069]
We propose a diffusion-based generative process that trains a sequence of probabilistic models to reverse each step of noise corruption.
Given a line-drawing image as input, our method suggests multiple candidate colorized images.
Our proposed approach performed well not only on color-conditional image generation tasks, but also on some practical image completion and inpainting tasks.
arXiv Detail & Related papers (2021-09-29T16:10:12Z)
- Probabilistic Color Constancy [88.85103410035929]
We define a framework for estimating the illumination of a scene by weighting the contribution of different image regions.
The proposed method achieves competitive performance, compared to the state-of-the-art, on INTEL-TAU dataset.
arXiv Detail & Related papers (2020-05-06T11:03:05Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
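As a point of contrast with the learned 1-bit palette above, a hand-crafted two-color quantizer can be written in a few lines with plain k-means; this is an illustrative baseline, not ColorCNN, and the initialization heuristic is an assumption:

```python
import numpy as np

def quantize_two_colors(img, iters=10):
    """Quantize an (H, W, 3) image to a 1-bit palette (two colors)
    with plain k-means. Returns the quantized image and the palette."""
    px = img.reshape(-1, 3).astype(np.float64)
    # Initialize the two centroids at the darkest and brightest pixels.
    lum = px.sum(axis=1)
    centers = np.stack([px[lum.argmin()], px[lum.argmax()]])
    for _ in range(iters):
        # Assign each pixel to the nearest centroid (squared distance).
        d = ((px[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for k in range(2):
            if np.any(labels == k):
                centers[k] = px[labels == k].mean(axis=0)
    return centers[labels].reshape(img.shape), centers
```

ColorCNN instead learns the structuring end-to-end from a classification loss, which is what lets it retain recognizability at such extreme palettes.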
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.