Name Your Colour For the Task: Artificially Discover Colour Naming via
Colour Quantisation Transformer
- URL: http://arxiv.org/abs/2212.03434v7
- Date: Fri, 13 Oct 2023 06:46:27 GMT
- Title: Name Your Colour For the Task: Artificially Discover Colour Naming via
Colour Quantisation Transformer
- Authors: Shenghan Su and Lin Gu and Yue Yang and Zenghui Zhang and Tatsuya
Harada
- Abstract summary: We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe a consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also doubles as an efficient compression scheme that reduces image storage.
- Score: 62.75343115345667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The long-standing theory that a colour-naming system evolves under
the dual pressure of efficient communication and perceptual mechanism is
supported by a growing number of linguistic studies, including analyses of
four decades of diachronic data from the Nafaanra language. This inspires us
to explore whether machine learning could evolve and discover a similar
colour-naming system by optimising the communication efficiency represented by
high-level recognition performance. Here, we propose a novel colour
quantisation transformer, CQFormer, that quantises the colour space while
maintaining the accuracy of machine recognition on the quantised images. Given
an RGB image, the Annotation Branch maps it into an index map before
generating the quantised image with a colour palette; meanwhile, the Palette
Branch uses a key-point detection approach to locate suitable palette colours
within the whole colour space. By interacting with colour annotation, CQFormer
is able to balance machine vision accuracy against the perceptual structure of
the discovered colour system, such as a distinct and stable colour
distribution. Interestingly, we observe a consistent evolution pattern between
our artificial colour system and basic colour terms across human languages. In
addition, our colour quantisation approach doubles as an efficient compression
method that reduces image storage while maintaining high performance in
high-level recognition tasks such as classification and detection. Extensive
experiments demonstrate the superior performance of our method with extremely
low-bit-rate colours, showing the potential to integrate into quantisation
networks that quantise everything from images to network activations. The
source code is available at
https://github.com/ryeocthiv/CQFormer
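To make the quantise-while-recognising mechanism concrete, the following is a minimal sketch in PyTorch, assuming a deliberately simplified formulation: a single learnable palette with soft (differentiable) colour assignment stands in for the paper's transformer-based Annotation and Palette Branches. The names `SoftColourQuantiser`, `num_colours`, and `temperature` are illustrative, not taken from the released code.

```python
# Minimal sketch of CQFormer-style colour quantisation (simplified):
# a learnable palette replaces the transformer branches, and a
# temperature-controlled softmax makes the colour assignment
# differentiable so the palette can be trained end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftColourQuantiser(nn.Module):
    """Quantise an RGB image onto a small learnable colour palette."""

    def __init__(self, num_colours: int = 4, temperature: float = 0.05):
        super().__init__()
        # Palette of RGB colours in [0, 1], learned jointly with the
        # downstream recogniser (initialised uniformly at random).
        self.palette = nn.Parameter(torch.rand(num_colours, 3))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W) with values in [0, 1]
        b, c, h, w = x.shape
        pixels = x.permute(0, 2, 3, 1).reshape(-1, 3)      # (B*H*W, 3)
        # Squared distance from every pixel to every palette colour.
        dists = torch.cdist(pixels, self.palette) ** 2      # (B*H*W, K)
        if self.training:
            # Soft assignment keeps gradients flowing to the palette.
            weights = F.softmax(-dists / self.temperature, dim=-1)
            quantised = weights @ self.palette               # (B*H*W, 3)
        else:
            # Hard assignment at test time: the index map of the abstract.
            index_map = dists.argmin(dim=-1)
            quantised = self.palette[index_map]
        return quantised.reshape(b, h, w, 3).permute(0, 3, 1, 2)

# Usage idea: feed the quantised image to any classifier and backpropagate
# the classification loss into both networks, so the palette evolves to
# keep machine recognition accurate.
quantiser = SoftColourQuantiser(num_colours=2)   # a 1-bit colour space
image = torch.rand(1, 3, 32, 32)
print(quantiser(image).shape)                    # torch.Size([1, 3, 32, 32])
```

The temperature controls how close the soft assignment is to a hard argmax; at inference the hard path yields exactly the index-map-plus-palette output the abstract describes, and because the quantised image feeds a downstream classifier, the classification loss pushes the palette towards colours that preserve recognition accuracy.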
Related papers
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Cross-Camera Deep Colorization [10.254243409261898]
We propose an end-to-end convolutional neural network to align and fuse images from a color-plus-mono dual-camera system.
Our method consistently achieves substantial improvements, i.e., a PSNR gain of around 10 dB.
arXiv Detail & Related papers (2022-08-26T11:02:14Z)
- Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z)
- Neural Color Operators for Sequential Image Retouching [62.99812889713773]
We propose a novel image retouching method by modeling the retouching process as performing a sequence of newly introduced trainable neural color operators.
The neural color operator mimics the behavior of traditional color operators and learns pixelwise color transformation while its strength is controlled by a scalar.
Our method consistently achieves the best results compared with SOTA methods in both quantitative measures and visual qualities.
arXiv Detail & Related papers (2022-07-17T05:33:19Z)
- TUCaN: Progressively Teaching Colourisation to Capsules [13.50327471049997]
We introduce TUCaN (Tiny UCapsNet), a novel downsampling-upsampling architecture.
We pose the problem as a per-pixel colour classification task that identifies colours as bins in a quantized space.
To train the network, in contrast with the standard end-to-end learning method, we propose a progressive learning scheme to extract the context of objects.
arXiv Detail & Related papers (2021-06-29T08:44:15Z)
- Is It a Plausible Colour? UCapsNet for Image Colourisation [38.88087332284959]
We introduce a novel architecture for colourisation of grayscale images.
The architecture is based on Capsules trained following the adversarial learning paradigm.
We show that our approach is able to generate more vibrant and plausible colours than existing solutions.
arXiv Detail & Related papers (2020-12-04T09:07:13Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime (see the bit-rate sketch after this list).
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
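Several entries above, CQFormer itself and both ColorCNN papers, frame colour quantisation as extreme image compression. The following is a minimal sketch of how such a bit-rate comparison can be measured, assuming Pillow's built-in median-cut quantiser as a stand-in for a learned palette; `input.jpg` and the output paths are illustrative placeholders.

```python
# Minimal bit-rate comparison sketch. Pillow's median-cut quantiser
# (Image.quantize) stands in for a learned colour palette; the file
# names are illustrative placeholders.
import os

from PIL import Image

def png_bits_per_pixel(img: Image.Image, path: str) -> float:
    """Encode the image as an optimised PNG and return bits per pixel."""
    img.save(path, format="PNG", optimize=True)
    width, height = img.size
    return os.path.getsize(path) * 8 / (width * height)

original = Image.open("input.jpg").convert("RGB")
# A 2-colour palette corresponds to the 1-bit colour space reported
# for ColorCNN on CIFAR10.
quantised = original.quantize(colors=2)

print(f"original : {png_bits_per_pixel(original, 'orig.png'):.3f} bpp")
print(f"2 colours: {png_bits_per_pixel(quantised, 'quant.png'):.3f} bpp")
```

Because a palette image stores one small index per pixel plus a tiny palette table, PNG's lossless coder typically shrinks it far below the RGB original, which is the extremely low bit-rate regime where the papers above report their gains.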