Joint Intensity-Gradient Guided Generative Modeling for Colorization
- URL: http://arxiv.org/abs/2012.14130v1
- Date: Mon, 28 Dec 2020 07:52:55 GMT
- Title: Joint Intensity-Gradient Guided Generative Modeling for Colorization
- Authors: Kai Hong, Jin Li, Wanyun Li, Cailian Yang, Minghui Zhang, Yuhao Wang and Qiegen Liu
- Abstract summary: This paper proposes an iterative generative model for solving the automatic colorization problem.
A joint intensity-gradient constraint in the data-fidelity term is proposed to limit the degrees of freedom of the generative model.
Experiments demonstrated that the system outperformed state-of-the-art methods in both quantitative comparisons and user studies.
- Score: 16.89777347891486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an iterative generative model for solving the automatic
colorization problem. Although previous research has shown the capability to
generate plausible colors, edge color overflow and the need for reference
images remain open issues. The starting point of the unsupervised learning in
this study is the observation that the gradient map carries latent information
about the image. Therefore, the inference process of the generative model is
conducted in the joint intensity-gradient domain. Specifically, a set of
high-dimensional tensors formed from intensity and gradient components is used
as the network input to train a powerful noise conditional score network during
the training phase. Furthermore, a joint intensity-gradient constraint in the
data-fidelity term is proposed to limit the degrees of freedom of the
generative model during the iterative colorization stage, which is conducive to
edge preservation. Extensive experiments demonstrated that the system
outperformed state-of-the-art methods in both quantitative comparisons and user studies.
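As a rough illustration of the two ideas in the abstract, the sketch below (a hypothetical NumPy rendering, not the authors' code) builds a joint intensity-gradient tensor from an image and runs a simple data-fidelity step that pulls a color estimate's per-pixel intensity toward the observed grayscale input; the paper's actual constraint additionally matches gradients and runs inside score-based sampling.

```python
import numpy as np

def intensity_gradient_tensor(img):
    """Stack each channel of `img` with its x/y finite-difference
    gradients, giving a joint intensity-gradient representation.
    Hypothetical helper; the paper's exact tensor layout may differ."""
    feats = []
    for c in range(img.shape[-1]):
        ch = img[..., c]
        gy, gx = np.gradient(ch)        # gradients along rows, columns
        feats.extend([ch, gx, gy])      # intensity plus its gradient maps
    return np.stack(feats, axis=-1)

def intensity_fidelity_step(x, gray, weight=1.0):
    """One gradient-descent step on the intensity data-fidelity term
    ||mean(x) - gray||^2, nudging the color estimate `x` toward the
    observed grayscale image (a stand-in for the paper's joint
    intensity-gradient constraint)."""
    residual = x.mean(axis=-1) - gray   # per-pixel intensity mismatch
    return x - weight * residual[..., None] / x.shape[-1]

rng = np.random.default_rng(0)
gray = rng.random((8, 8))               # observed grayscale input
x = rng.random((8, 8, 3))               # current color estimate
t = intensity_gradient_tensor(x)        # (8, 8, 9): 3 channels x 3 maps
for _ in range(200):                    # repeated fidelity steps converge
    x = intensity_fidelity_step(x, gray)
```

After the loop, the mean of the three color channels agrees with the grayscale input at every pixel, which is the degree-of-freedom restriction the data-fidelity term imposes.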
Related papers
- PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference [62.72779589895124]
We make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.
We train a reward model with a dataset we construct, consisting of nearly 51,000 images annotated with human preferences.
Experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-29T11:49:39Z)
- Multi-view Disparity Estimation Using a Novel Gradient Consistency Model [0.0]
This paper proposes the use of Gradient Consistency information to assess the validity of the linearisation.
This information is used to determine the weights applied to the data term as part of an analytically inspired Gradient Consistency Model.
We show that the Gradient Consistency Model outperforms standard coarse-to-fine schemes.
arXiv Detail & Related papers (2024-05-27T10:30:59Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- Gradient-Based Adversarial and Out-of-Distribution Detection [15.510581400494207]
We introduce confounding labels in gradient generation to probe the effective expressivity of neural networks.
We show that our gradient-based approach allows for capturing the anomaly in inputs based on the effective expressivity of the models.
arXiv Detail & Related papers (2022-06-16T15:50:41Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- On Training Implicit Models [75.20173180996501]
We propose a novel gradient estimate for implicit models, named phantom gradient, that forgoes the costly computation of the exact gradient.
Experiments on large-scale tasks demonstrate that these lightweight phantom gradients significantly accelerate the backward passes in training implicit models by roughly 1.7 times.
arXiv Detail & Related papers (2021-11-09T14:40:24Z)
- High-dimensional Assisted Generative Model for Color Image Restoration [12.459091135428885]
This work presents an unsupervised deep learning scheme that exploits a high-dimensional assisted score-based generative model for color image restoration tasks.
Considering the sample number and internal dimension of the score-based generative model, two high-dimensional transformations are proposed: the channel-copy transformation increases the sample number, while the pixel-scale transformation reduces the feasible dimension space.
To alleviate the difficulty of learning high-dimensional representations, a progressive strategy is proposed to improve performance.
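To make the two transformations concrete, here is a minimal NumPy sketch with hypothetical helpers (not the paper's code): channel-copy replicates a single grayscale channel, and pixel-scale is interpreted here as a space-to-depth fold that trades spatial resolution for channels; the paper's exact formulation may differ.

```python
import numpy as np

def channel_copy(gray, k=3):
    """Replicate one grayscale channel k times along a new channel
    axis; per the summary, this raises the effective sample number
    seen by the score-based model."""
    return np.repeat(gray[..., None], k, axis=-1)

def pixel_scale(img, s=2):
    """Space-to-depth (pixel-unshuffle): fold each s x s spatial block
    into the channel axis, shrinking the spatial dimension as the
    summary describes."""
    h, w, c = img.shape
    x = img.reshape(h // s, s, w // s, s, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // s, w // s, s * s * c)

gray = np.random.default_rng(2).random((8, 8))
cc = channel_copy(gray)    # (8, 8, 3): more channels, same content
ps = pixel_scale(cc)       # (4, 4, 12): half resolution, 4x channels
```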
arXiv Detail & Related papers (2021-08-14T04:05:29Z)
- Wavelet Transform-assisted Adaptive Generative Modeling for Colorization [15.814591440291652]
This study presents a novel scheme that exploits the score-based generative model in the wavelet domain to address the issue.
By taking advantage of the multi-scale and multi-channel representation offered by the wavelet transform, the proposed model learns priors from stacked wavelet coefficient components.
Experiments demonstrated remarkable improvements in colorization quality, particularly in colorization robustness and diversity.
arXiv Detail & Related papers (2021-07-09T07:12:39Z)
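A minimal sketch of the "stacked wavelet coefficient components" idea, assuming a single-level Haar transform; the helpers below are illustrative, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(ch):
    """Single-level 2-D Haar transform of one channel, returning the
    four half-resolution subbands (LL, LH, HL, HH)."""
    a = (ch[0::2, :] + ch[1::2, :]) / 2.0   # vertical average
    d = (ch[0::2, :] - ch[1::2, :]) / 2.0   # vertical detail
    ll = a[:, 0::2] + a[:, 1::2]            # low-low (coarse) band
    lh = a[:, 0::2] - a[:, 1::2]            # horizontal detail
    hl = d[:, 0::2] + d[:, 1::2]            # vertical detail
    hh = d[:, 0::2] - d[:, 1::2]            # diagonal detail
    return ll, lh, hl, hh

def stacked_wavelet_tensor(img):
    """Stack each color channel's four Haar subbands along the channel
    axis, yielding the multi-channel half-resolution representation the
    summary describes (3 color channels -> 12 subband channels)."""
    bands = []
    for c in range(img.shape[-1]):
        bands.extend(haar_dwt2(img[..., c]))
    return np.stack(bands, axis=-1)

rng = np.random.default_rng(1)
img = rng.random((16, 16, 3))
t = stacked_wavelet_tensor(img)             # (8, 8, 12)
```

Feeding such a stacked tensor to a score network gives it simultaneous access to coarse structure (LL) and edge detail (LH/HL/HH), which is the multi-scale prior the summary refers to.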
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.