Is It a Plausible Colour? UCapsNet for Image Colourisation
- URL: http://arxiv.org/abs/2012.02478v1
- Date: Fri, 4 Dec 2020 09:07:13 GMT
- Title: Is It a Plausible Colour? UCapsNet for Image Colourisation
- Authors: Rita Pucci, Christian Micheloni, Gian Luca Foresti, Niki Martinel
- Abstract summary: We introduce a novel architecture for colourisation of grayscale images.
The architecture is based on Capsules trained following the adversarial learning paradigm.
We show that our approach is able to generate more vibrant and plausible colours than existing solutions.
- Score: 38.88087332284959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human beings can imagine the colours of a grayscale image with no particular
effort thanks to their ability of semantic feature extraction. Can an
autonomous system achieve that? Can it hallucinate plausible and vibrant
colours? This is the colourisation problem. Different from existing works
relying on convolutional neural network models pre-trained with supervision, we
cast such colourisation problem as a self-supervised learning task. We tackle
the problem with the introduction of a novel architecture based on Capsules
trained following the adversarial learning paradigm. Capsule networks are able
to extract a semantic representation of the entities in the image but lose
details about their spatial information, which is important for colourising a
grayscale image. Thus our UCapsNet structure comes with an encoding phase that
extracts entities through capsules and spatial details through convolutional
neural networks. A decoding phase merges the entity features with the spatial
features to hallucinate a plausible colour version of the input datum. Results
on the ImageNet benchmark show that our approach is able to generate more
vibrant and plausible colours than existing solutions and achieves superior
performance to models pre-trained with supervision.
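The encode-merge-decode flow described in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the authors' implementation: the real UCapsNet uses capsule layers and CNNs trained adversarially, while here plain functions merely play the roles of the two encoder branches (entity features vs. spatial details) and the decoder that fuses them into colour channels.

```python
# Toy sketch of a dual-branch encoder + fusing decoder, loosely mirroring
# the UCapsNet idea. All names and operations are illustrative assumptions.
from typing import List

def capsule_branch(gray: List[float]) -> List[float]:
    """Stand-in for the capsule encoder: yields coarse 'entity' features.
    Averaging deliberately discards spatial detail, mimicking how capsules
    capture semantics but lose spatial information."""
    mean = sum(gray) / len(gray)
    return [mean] * len(gray)

def conv_branch(gray: List[float]) -> List[float]:
    """Stand-in for the convolutional encoder: keeps spatial detail.
    A trivial local difference plays that role here."""
    return [gray[i] - gray[i - 1] if i else 0.0 for i in range(len(gray))]

def decode(entity: List[float], spatial: List[float]) -> List[List[float]]:
    """Merge entity and spatial features (concatenation-style fusion in the
    real model) and map each position to a two-channel chroma prediction."""
    return [[e + s, e - s] for e, s in zip(entity, spatial)]

gray = [0.2, 0.4, 0.6, 0.8]  # a toy 1-D "grayscale image"
chroma = decode(capsule_branch(gray), conv_branch(gray))
```

In the actual architecture the fusion happens through skip connections between the capsule and convolutional layers, and the output is judged by an adversarial discriminator rather than computed analytically.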
Related papers
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also offers an efficient quantisation method that effectively compresses the image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z) - Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z) - Collaboration among Image and Object Level Features for Image Colourisation [25.60139324272782]
Image colourisation is an ill-posed problem, with multiple correct solutions which depend on the context and object instances present in the input datum.
Previous approaches attacked the problem either by requiring intense user interactions or by exploiting the ability of convolutional neural networks (CNNs) in learning image level (context) features.
We propose a single network, named UCapsNet, that separates image-level features obtained through convolutions from object-level features captured by means of capsules.
Then, through skip connections across different layers, we enforce collaboration between these disentangled factors to produce high-quality and plausible image colourisation.
arXiv Detail & Related papers (2021-01-19T11:48:12Z) - The color out of space: learning self-supervised representations for Earth Observation imagery [10.019106184219515]
We propose to learn meaningful representations from satellite imagery, leveraging its high-dimensional spectral bands to reconstruct visible colors.
We conduct experiments on land cover classification (BigEarthNet) and West Nile Virus detection, showing that colorization is a solid pretext task for training a feature extractor.
arXiv Detail & Related papers (2020-06-22T10:21:36Z) - Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we mimic that human-like behaviour by letting our network first learn to understand the photo, then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z) - Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z) - Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.