Computer-aided Colorization State-of-the-science: A Survey
- URL: http://arxiv.org/abs/2410.02288v1
- Date: Thu, 3 Oct 2024 08:13:26 GMT
- Title: Computer-aided Colorization State-of-the-science: A Survey
- Authors: Yu Cao, Xin Duan, Xiangqiao Meng, P. Y. Mok, Ping Li, Tong-Yee Lee
- Abstract summary: This paper reviews published research in the field of computer-aided colorization technology.
We argue that the colorization task originated in computer graphics, flourished with the introduction of computer vision, and is now trending toward a fusion of vision and graphics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper reviews published research in the field of computer-aided colorization technology. We argue that the colorization task originated in computer graphics, flourished with the introduction of computer vision, and is now trending toward a fusion of vision and graphics; accordingly, we put forward our taxonomy and organize the paper chronologically. We extend the existing reconstruction-based colorization evaluation techniques, arguing that aesthetic assessment of colored images should be introduced to ensure that colorization more closely satisfies human visual perception and emotional expectations. We perform this colorization aesthetic assessment on seven representative unconditional colorization models and discuss how our assessment differs from the existing reconstruction-based metrics. Finally, this paper identifies unresolved issues and proposes fruitful areas for future research and development. The project associated with this survey is available at https://github.com/DanielCho-HK/Colorization.
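The reconstruction-based evaluation the survey extends compares a colorized output against the ground-truth color image with pixel-level fidelity metrics. Below is a minimal sketch of one such metric, PSNR, using NumPy; the function and the toy images are illustrative assumptions for this listing, not the survey's actual evaluation code.

```python
import numpy as np

def psnr(reference: np.ndarray, colorized: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth color image and a
    colorized result; higher values indicate a closer pixel-level reconstruction."""
    mse = np.mean((reference.astype(np.float64) - colorized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a "colorized" image that deviates slightly from the reference.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(32, 32, 3))
noisy = np.clip(reference + rng.integers(-5, 6, size=reference.shape), 0, 255)
print(psnr(reference, noisy))
```

Because PSNR only measures pixel agreement with a single reference, a plausible but differently colored result scores poorly, which is the gap the survey's aesthetic assessment is meant to address.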
Related papers
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Deep Learning-based Image and Video Inpainting: A Survey [47.53641171826598]
This paper comprehensively reviews the deep learning-based methods for image and video inpainting.
We sort existing methods into different categories from the perspective of their high-level inpainting pipeline.
We present evaluation metrics for low-level pixel and high-level perceptional similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods.
arXiv Detail & Related papers (2024-01-07T05:50:12Z)
- ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks.
We validate the use of our datasets by demonstrating that the level of color discrimination has a dominant effect on the performance of machine perception models.
Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z)
- ABANICCO: A New Color Space for Multi-Label Pixel Classification and Color Segmentation [1.7205106391379026]
We propose a novel method combining geometric analysis of color theory, fuzzy color spaces, and multi-label systems for the automatic classification of pixels according to 12 standard color categories.
We present a robust, unsupervised, unbiased strategy for color naming based on statistics and color theory.
arXiv Detail & Related papers (2022-11-15T19:26:51Z)
- TIC: Text-Guided Image Colorization [24.317541784957285]
We propose a novel deep network that takes two inputs (the grayscale image and the respective encoded text description) and tries to predict the relevant color gamut.
As the respective textual descriptions contain color information of the objects present in the scene, the text encoding helps to improve the overall quality of the predicted colors.
We have evaluated our proposed model using different metrics and found that it outperforms the state-of-the-art colorization algorithms both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-08-04T18:40:20Z)
- Generating Compositional Color Representations from Text [3.141061579698638]
Motivated by the fact that a significant fraction of user queries on an image search engine follow an (attribute, object) structure, we propose a generative adversarial network that generates color profiles for such bigrams.
We design our pipeline to learn composition - the ability to combine seen attributes and objects to unseen pairs.
arXiv Detail & Related papers (2021-09-22T01:37:13Z)
- Towards Photorealistic Colorization by Imagination [48.82757902812846]
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Our work produces more colorful and diverse results than state-of-the-art image colorization methods.
arXiv Detail & Related papers (2021-08-20T14:28:37Z)
- Image Colorization: A Survey and Dataset [94.59768013860668]
This article presents a comprehensive survey of state-of-the-art deep learning-based image colorization techniques.
It categorizes the existing colorization techniques into seven classes and discusses important factors governing their performance.
We perform an extensive experimental evaluation of existing image colorization methods using both existing datasets and our proposed one.
arXiv Detail & Related papers (2020-08-25T01:22:52Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior, letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.