Attention-based Stylisation for Exemplar Image Colourisation
- URL: http://arxiv.org/abs/2105.01705v1
- Date: Tue, 4 May 2021 18:56:26 GMT
- Title: Attention-based Stylisation for Exemplar Image Colourisation
- Authors: Marc Gorriz Blanch, Issa Khalifeh, Alan Smeaton, Noel O'Connor, Marta
Mrak
- Abstract summary: This work reformulates the existing methodology, introducing a novel end-to-end colourisation network.
The proposed architecture integrates attention modules at different resolutions that learn how to perform the style transfer task.
Experimental validations demonstrate the efficiency of the proposed methodology, which generates high-quality and visually appealing colourisations.
- Score: 3.491870689686827
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Exemplar-based colourisation aims to add plausible colours to a grayscale
image using the guidance of a colour reference image. Most of the existing
methods tackle the task as a style transfer problem, using a convolutional
neural network (CNN) to obtain deep representations of the content of both
inputs. Stylised outputs are then obtained by computing similarities between
both feature representations in order to transfer the style of the reference to
the content of the target input. However, in order to gain robustness towards
dissimilar references, the stylised outputs need to be refined with a second
colourisation network, which significantly increases the overall system
complexity. This work reformulates the existing methodology, introducing a novel
end-to-end colourisation network that unifies the feature matching with the
colourisation process. The proposed architecture integrates attention modules
at different resolutions that learn how to perform the style transfer task in
an unsupervised way towards decoding realistic colour predictions. Moreover,
axial attention is proposed to simplify the attention operations and to obtain
a fast, robust and cost-effective architecture. Experimental validations
demonstrate the efficiency of the proposed methodology, which generates
high-quality and visually appealing colourisations. Furthermore, the proposed
methodology has lower complexity than state-of-the-art methods.
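The axial attention mentioned in the abstract factors full 2-D self-attention into two 1-D passes, one along the height axis and one along the width axis, reducing the cost from O((HW)^2) to O(HW(H+W)). A minimal single-head NumPy sketch of this idea (the shapes and the plain scaled dot-product form are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(q, k, v):
    """Scaled dot-product attention along the height axis, then the width axis.

    q, k, v: arrays of shape (H, W, C). Full 2-D attention compares all H*W
    positions with each other; axial attention only compares positions that
    share a column (first pass) or a row (second pass).
    """
    scale = np.sqrt(q.shape[-1])
    # Height pass: for each column w, attend over the H positions in it.
    scores_h = np.einsum('hwc,gwc->whg', q, k) / scale
    out = np.einsum('whg,gwc->hwc', softmax(scores_h), v)
    # Width pass: for each row h, attend over the W positions in it.
    scores_w = np.einsum('hwc,hgc->hwg', out, k) / scale
    out = np.einsum('hwg,hgc->hwc', softmax(scores_w), out)
    return out
```

With zero queries and keys the softmax weights are uniform, so the two passes reduce to averaging over height and then width, which is a convenient sanity check for the einsum index bookkeeping.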
Related papers
- Transforming Color: A Novel Image Colorization Method [8.041659727964305]
This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs).
The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality.
Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques.
arXiv Detail & Related papers (2024-10-07T07:23:42Z)
- MRStyle: A Unified Framework for Color Style Transfer with Multi-Modality Reference [32.64957647390327]
We introduce MRStyle, a framework that enables color style transfer using multi-modality reference, including image and text.
For text reference, we align the text feature of Stable Diffusion priors with the style feature of our IRStyle to perform text-guided color style transfer (TRStyle).
Our TRStyle method is highly efficient in both training and inference, producing notable open-set text-guided transfer results.
arXiv Detail & Related papers (2024-09-09T00:01:48Z)
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- Diffusing Colors: Image Colorization with Text Guided Diffusion [11.727899027933466]
We present a novel image colorization framework that utilizes image diffusion techniques with granular text prompts.
Our method provides a balance between automation and control, outperforming existing techniques in terms of visual quality and semantic coherence.
Our approach holds potential particularly for color enhancement and historical image colorization.
arXiv Detail & Related papers (2023-12-07T08:59:20Z)
- Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Semantic-Sparse Colorization Network for Deep Exemplar-based Colorization [23.301799487207035]
Exemplar-based colorization approaches rely on a reference image to provide plausible colors for the target gray-scale image.
We propose Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and semantic-related colors to the gray-scale image.
Our network can perfectly balance the global and local colors while alleviating the ambiguous matching problem.
arXiv Detail & Related papers (2021-12-02T15:35:10Z)
- TUCaN: Progressively Teaching Colourisation to Capsules [13.50327471049997]
We introduce a novel downsampling-upsampling architecture named TUCaN (Tiny UCapsNet).
We pose the problem as a per-pixel colour classification task that identifies colours as bins in a quantized space.
To train the network, in contrast with the standard end-to-end learning method, we propose a progressive learning scheme to extract the context of objects.
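TUCaN's per-pixel classification treats a colour as a bin in a quantised chroma space: each (a, b) chroma pair maps to one class index on a regular grid. A small sketch of such a binning (the 10-unit grid and the ±110 range over the CIELAB ab plane are illustrative assumptions, not TUCaN's actual quantisation):

```python
import numpy as np

def ab_to_bin(a, b, grid=10, lo=-110, hi=110):
    """Map CIELAB chroma values (a, b) to a class index on a quantised grid.

    The ab plane is divided into (hi - lo) / grid cells per axis; each cell
    is one colour class, so the network predicts a distribution over
    n * n bins per pixel instead of regressing continuous chroma.
    """
    n = (hi - lo) // grid  # bins per axis (22 with the defaults)
    ia = np.clip((np.asarray(a) - lo) // grid, 0, n - 1).astype(int)
    ib = np.clip((np.asarray(b) - lo) // grid, 0, n - 1).astype(int)
    return ia * n + ib
```

With the default grid this yields 484 classes; a predicted distribution over these bins can then be decoded back to chroma by taking, for example, the annealed mean of the bin centres.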
arXiv Detail & Related papers (2021-06-29T08:44:15Z)
- Semantic Layout Manipulation with High-Resolution Sparse Attention [106.59650698907953]
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map.
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
We propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512.
arXiv Detail & Related papers (2020-12-14T06:50:43Z)
- Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution [89.1947690981471]
We propose a purposeful and interpretable detail-fidelity attention network to progressively process smooth components and details in a divide-and-conquer manner.
In particular, we propose Hessian filtering for interpretable feature representation, which is well-suited for detail inference.
Experiments demonstrate that the proposed method achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-28T08:31:23Z)
- TSIT: A Simple and Versatile Framework for Image-to-Image Translation [103.92203013154403]
We introduce a simple and versatile framework for image-to-image translation.
We provide a carefully designed two-stream generative model with newly proposed feature transformations.
This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network.
A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.
arXiv Detail & Related papers (2020-07-23T15:34:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.