Learning to Structure an Image with Few Colors and Beyond
- URL: http://arxiv.org/abs/2208.08438v1
- Date: Wed, 17 Aug 2022 17:59:15 GMT
- Title: Learning to Structure an Image with Few Colors and Beyond
- Authors: Yunzhong Hou, Liang Zheng, Stephen Gould
- Abstract summary: We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
- Score: 59.34619548026885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color and structure are the two pillars that combine to give an image its
meaning. Interested in critical structures for neural network recognition, we
isolate the influence of colors by limiting the color space to just a few bits,
and find structures that enable network recognition under such constraints. To
this end, we propose a color quantization network, ColorCNN, which learns to
structure an image in limited color spaces by minimizing the classification
loss. Building upon the architecture and insights of ColorCNN, we introduce
ColorCNN+, which supports multiple color space size configurations, and
addresses the previous issues of poor recognition accuracy and undesirable
visual fidelity under large color spaces. Via a novel imitation learning
approach, ColorCNN+ learns to cluster colors like traditional color
quantization methods. This reduces overfitting and helps both visual fidelity
and recognition accuracy under large color spaces. Experiments verify that
ColorCNN+ achieves very competitive results under most circumstances,
preserving both key structures for network recognition and visual fidelity with
accurate colors. We further discuss differences between key structures and
accurate colors, and their specific contributions to network recognition. For
potential applications, we show that ColorCNNs can be used as image compression
methods for network recognition.
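
Below is a minimal, hedged PyTorch sketch of the idea described in the abstract: a small network softly assigns every pixel to one of K palette colors, the few-color image is reconstructed from those assignments, and a frozen classifier's cross-entropy loss is back-propagated into the quantizer. The layer sizes, the palette computation, and the stand-in classifier are illustrative assumptions; the actual ColorCNN and ColorCNN+ architectures and training details are not specified in this abstract.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ColorQuantizer(nn.Module):
    """Predicts a soft assignment of every pixel to one of K palette colors."""

    def __init__(self, num_colors: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_colors, 1),  # per-pixel logits over the K colors
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = F.softmax(self.net(x), dim=1)                 # (B, K, H, W) soft assignment
        # Palette entry k = average color of the pixels softly assigned to k.
        num = torch.einsum("bkhw,bchw->bkc", m, x)        # (B, K, 3)
        den = m.sum(dim=(2, 3)).unsqueeze(-1) + 1e-8      # (B, K, 1)
        palette = num / den
        # Reconstruct the few-color image (soft at train time; at test time an
        # argmax over K would give a hard, truly quantized image).
        return torch.einsum("bkhw,bkc->bchw", m, palette)


# Stand-in classifier; in the paper a pretrained network is used and kept frozen.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 10),
)
for p in classifier.parameters():
    p.requires_grad_(False)

quantizer = ColorQuantizer(num_colors=2)  # 1-bit color space: two colors
optimizer = torch.optim.Adam(quantizer.parameters(), lr=1e-3)

images, labels = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(classifier(quantizer(images)), labels)
loss.backward()   # gradients flow only into the quantizer
optimizer.step()
```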
Related papers
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when there is a data imbalance across color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs on various downstream tasks and in improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises the colour space while maintaining machine recognition on the quantised images.
We observe a consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also effectively compresses image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Influence of Color Spaces for Deep Learning Image Colorization [2.3705923859070217]
Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc.
In this chapter, we aim to study their influence on the results obtained by training a deep neural network.
We compare the results obtained with the same deep neural network architecture using the RGB, YUV, and Lab color spaces (a small color-space conversion sketch appears after this list).
arXiv Detail & Related papers (2022-04-06T14:14:07Z)
- Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z)
- How Convolutional Neural Network Architecture Biases Learned Opponency and Colour Tuning [1.0742675209112622]
Recent work suggests that changing Convolutional Neural Network (CNN) architecture by introducing a bottleneck in the second layer can yield changes in learned function.
Fully understanding this relationship requires a way of quantitatively comparing trained networks.
We propose an approach to obtaining spatial and colour tuning curves for convolutional neurons.
arXiv Detail & Related papers (2020-10-06T11:33:48Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict semantic information while learning to colorize black-and-white images.
In this study, we mimic this human-like behaviour by letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure images in a limited color space by minimizing the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization outperforms other image compression methods in the extremely low bit-rate regime (a PNG-size sketch appears after this list).
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
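
The following is a small, hedged sketch for the color-space comparison entry above ("Influence of Color Spaces for Deep Learning Image Colorization"): it expresses the same image in the RGB, YUV, and Lab spaces using scikit-image utilities (not that paper's code), to show what a colorization network would be trained to predict in each space.

```python
# Illustrative sketch only; conversion utilities are from scikit-image.
import numpy as np
from skimage import color

rgb = np.random.rand(32, 32, 3)        # stand-in for a training image in [0, 1]

yuv = color.rgb2yuv(rgb)               # Y in [0, 1]; U, V roughly in [-0.5, 0.5]
lab = color.rgb2lab(rgb)               # L in [0, 100]; a, b roughly in [-128, 127]

# A Lab-space colorization model would take L as input and predict (a, b);
# a YUV-space model would take Y and predict (U, V).
luminance, chroma_target = lab[..., :1], lab[..., 1:]
print(luminance.shape, chroma_target.shape)   # (32, 32, 1) (32, 32, 2)
```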
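
The next sketch loosely illustrates the extremely low bit-rate regime mentioned in the original "Learning to Structure an Image with Few Colors" entry: it quantizes an image to a two-color palette with Pillow's traditional median-cut quantizer (not ColorCNN) and reports the size of the lossless PNG encoding. The input file name is a placeholder.

```python
# Illustrative sketch only; "example.jpg" is a placeholder path.
import io

from PIL import Image

img = Image.open("example.jpg").convert("RGB")   # any RGB image

# 1-bit color space: a palette of just two colors (median cut by default).
two_color = img.quantize(colors=2)

buf = io.BytesIO()
two_color.save(buf, format="PNG")                # lossless encoding of the palette image

bits_per_pixel = 8 * buf.tell() / (img.width * img.height)
print(f"{buf.tell()} bytes, {bits_per_pixel:.3f} bits per pixel")
```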