Learning to Structure an Image with Few Colors
- URL: http://arxiv.org/abs/2003.07848v2
- Date: Tue, 11 May 2021 07:41:13 GMT
- Title: Learning to Structure an Image with Few Colors
- Authors: Yunzhong Hou, Liang Zheng, Stephen Gould
- Abstract summary: We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
- Score: 59.34619548026885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color and structure are the two pillars that construct an image. Usually, the
structure is well expressed through a rich spectrum of colors, allowing objects
in an image to be recognized by neural networks. However, under extreme
limitations of color space, the structure tends to vanish, and thus a neural
network might fail to understand the image. Interested in exploring this
interplay between color and structure, we study the scientific problem of
identifying and preserving the most informative image structures while
constraining the color space to just a few bits, such that the resulting image
can be recognized with possibly high accuracy. To this end, we propose a color
quantization network, ColorCNN, which learns to structure the images from the
classification loss in an end-to-end manner. Given a color space size, ColorCNN
quantizes colors in the original image by generating a color index map and an
RGB color palette. Then, this color-quantized image is fed to a pre-trained
task network to evaluate its performance. In our experiment, with only a 1-bit
color space (i.e., two colors), the proposed network achieves 82.1% top-1
accuracy on the CIFAR10 dataset, outperforming traditional color quantization
methods by a large margin. For applications, when encoded with PNG, the
proposed color quantization shows superiority over other image compression
methods in the extremely low bit-rate regime. The code is available at:
https://github.com/hou-yz/color_distillation.
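The abstract describes ColorCNN's output as a color index map plus an RGB color palette. As an illustration only (not the authors' implementation), the palette step can be sketched in NumPy, assuming the per-pixel soft assignments `probs` over the K palette slots are produced by some learned network:

```python
import numpy as np

def quantize_with_palette(image, probs):
    """Sketch of palette-based color quantization (hypothetical helper).

    image: (H, W, 3) float array in [0, 1]
    probs: (H, W, K) per-pixel soft assignment over K palette slots
           (assumed to come from a learned network, as in ColorCNN)
    Returns the color index map, the RGB palette, and the quantized image.
    """
    H, W, K = probs.shape
    weights = probs.reshape(H * W, K)   # (N, K) soft assignments
    pixels = image.reshape(H * W, 3)    # (N, 3) pixel colors
    # Each palette color is the probability-weighted mean of pixel colors.
    palette = (weights.T @ pixels) / (weights.sum(axis=0)[:, None] + 1e-8)
    # Hard index map: each pixel takes its most probable palette slot.
    index_map = probs.argmax(axis=-1)   # (H, W)
    quantized = palette[index_map]      # (H, W, 3)
    return index_map, palette, quantized

# Toy example: a 2x2 grayscale-ish image quantized to a 1-bit
# (two-color) space, the extreme setting studied in the paper.
img = np.array([[[0.9, 0.9, 0.9], [0.8, 0.8, 0.8]],
                [[0.1, 0.1, 0.1], [0.2, 0.2, 0.2]]])
p = np.array([[[0.9, 0.1], [0.8, 0.2]],
              [[0.1, 0.9], [0.2, 0.8]]])
idx, pal, q = quantize_with_palette(img, p)
```

During training the soft assignments keep the mapping differentiable so the classification loss of the downstream task network can flow back into the quantizer; at test time the hard `argmax` index map is used, as sketched above.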
Related papers
- Training Neural Networks on RAW and HDR Images for Restoration Tasks [59.41340420564656]
In this work, we test approaches on three popular image restoration applications: denoising, deblurring, and single-image super-resolution.
Our results indicate that neural networks train significantly better on HDR and RAW images represented in display color spaces.
This small change to the training strategy can bring a very substantial gain in performance, up to 10-15 dB.
arXiv Detail & Related papers (2023-12-06T17:47:16Z)
- DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders [19.560271615736212]
DDColor is an end-to-end method with dual decoders for image colorization.
Our approach includes a pixel decoder and a query-based color decoder.
Our two decoders work together to establish correlations between color and multi-scale semantic representations.
arXiv Detail & Related papers (2022-12-22T11:17:57Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation approach also compresses image storage efficiently.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z)
- Influence of Color Spaces for Deep Learning Image Colorization [2.3705923859070217]
Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc.
In this chapter, we aim to study their influence on the results obtained by training a deep neural network.
We compare the results obtained with the same deep neural network architecture with RGB, YUV and Lab color spaces.
arXiv Detail & Related papers (2022-04-06T14:14:07Z)
- TUCaN: Progressively Teaching Colourisation to Capsules [13.50327471049997]
We introduce a novel downsampling-upsampling architecture named TUCaN (Tiny UCapsNet).
We pose the problem as a per-pixel colour classification task that identifies colours as bins in a quantized space.
To train the network, in contrast with the standard end-to-end learning method, we propose a progressive learning scheme to extract the context of objects.
arXiv Detail & Related papers (2021-06-29T08:44:15Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior by letting our network first learn to understand the photo, then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including its accuracy or completeness) and is not responsible for any consequences of its use.