XCI-Sketch: Extraction of Color Information from Images for Generation
of Colored Outlines and Sketches
- URL: http://arxiv.org/abs/2108.11554v1
- Date: Thu, 26 Aug 2021 02:27:55 GMT
- Title: XCI-Sketch: Extraction of Color Information from Images for Generation
of Colored Outlines and Sketches
- Authors: Harsh Rathod, Manisimha Varma, Parna Chowdhury, Sameer Saxena, V
Manushree, Ankita Ghosh, Sahil Khose
- Abstract summary: We propose two methods to mimic human-drawn colored sketches.
The first method renders colored outline sketches by applying image processing techniques aided by k-means color clustering.
The second method uses a generative adversarial network to develop a model that can generate colored sketches from previously unobserved images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketches are a medium to convey a visual scene from an individual's creative
perspective. The addition of color substantially enhances the overall
expressivity of a sketch. This paper proposes two methods to mimic human-drawn
colored sketches by utilizing the Contour Drawing Dataset. Our first approach
renders colored outline sketches by applying image processing techniques aided
by k-means color clustering. The second method uses a generative adversarial
network to develop a model that can generate colored sketches from previously
unobserved images. We assess the results obtained through quantitative and
qualitative evaluations.
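The first method's core idea, palette quantization via k-means followed by edge extraction so that outlines take on cluster colors, can be sketched in plain NumPy. This is an illustrative approximation, not the paper's actual pipeline: the function names, the farthest-point initialization, and the label-difference edge detector are our own choices.

```python
import numpy as np

def kmeans_colors(pixels, k=4, iters=10):
    """Plain NumPy k-means over an (N, 3) array of RGB pixels.

    Uses deterministic farthest-point initialization, which avoids
    empty clusters on images with few distinct colors.
    """
    pts = pixels.astype(float)
    centers = pts[:1].copy()
    for _ in range(1, k):
        # Next center: the pixel farthest from all chosen centers.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
        centers = np.vstack([centers, pts[d.argmax()]])
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then recompute means.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    return centers, labels

def colored_outline(img, k=4):
    """Return a white canvas where only edge pixels are drawn,
    colored by their k-means cluster centroid (a stand-in for the
    colored-outline rendering the abstract describes)."""
    h, w, _ = img.shape
    centers, labels = kmeans_colors(img.reshape(-1, 3), k)
    quant = centers[labels].reshape(h, w, 3)
    lab = labels.reshape(h, w)
    # Crude edge map: a pixel is an edge if its cluster label differs
    # from a horizontal or vertical neighbour's label.
    edge = np.zeros((h, w), dtype=bool)
    edge[:, 1:] |= lab[:, 1:] != lab[:, :-1]
    edge[1:, :] |= lab[1:, :] != lab[:-1, :]
    out = np.full_like(quant, 255.0)
    out[edge] = quant[edge]
    return out.astype(np.uint8)
```

On a synthetic two-color image, only the pixels along the color boundary survive as a colored outline; a real implementation would combine this with a proper edge detector and the Contour Drawing Dataset.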
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Semi-supervised reference-based sketch extraction using a contrastive learning framework [6.20476217797034]
We propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch with unpaired data training.
Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2024-07-19T04:51:34Z)
- ColorizeDiffusion: Adjustable Sketch Colorization with Reference Image and Text [5.675944597452309]
We introduce two variations of an image-guided latent diffusion model utilizing different image tokens from the pre-trained CLIP image encoder.
We propose corresponding manipulation methods to adjust their results sequentially using weighted text inputs.
arXiv Detail & Related papers (2024-01-02T22:46:12Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments validate our hypothesis and show that our sketch-based saliency detection model achieves competitive performance compared to the state of the art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder based network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part resemble the real one from a global view.
Second, we reversely generate sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
- Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence [32.848390767305276]
This paper tackles the automatic colorization task of a sketch image given an already-colored reference image.
We utilize the identical image with geometric distortion as a virtual reference, which makes it possible to secure the ground truth for a colored output image.
arXiv Detail & Related papers (2020-05-11T15:52:50Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
The key contribution is an attribute vector bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.