Learning to generate line drawings that convey geometry and semantics
- URL: http://arxiv.org/abs/2203.12691v1
- Date: Wed, 23 Mar 2022 19:27:41 GMT
- Title: Learning to generate line drawings that convey geometry and semantics
- Authors: Caroline Chan, Fredo Durand, Phillip Isola
- Abstract summary: This paper presents an unpaired method for creating line drawings from photographs.
We observe that line drawings are encodings of scene information and seek to convey 3D shape and semantic meaning.
We introduce a geometry loss which predicts depth information from the image features of a line drawing, and a semantic loss which matches the CLIP features of a line drawing with its corresponding photograph.
- Score: 22.932131011984513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an unpaired method for creating line drawings from
photographs. Current methods often rely on high-quality paired datasets to
generate line drawings. However, these datasets are often limited: the drawings
may cover only a specific domain of subjects, or only a small amount of data
may have been collected. Although recent work in unsupervised image-to-image
translation has shown much progress, the latest methods still struggle to
generate compelling line drawings. We observe that line drawings are encodings
of scene information and seek to convey 3D shape and semantic meaning. We build
these observations into a set of objectives and train an image translation model to
map photographs into line drawings. We introduce a geometry loss which predicts
depth information from the image features of a line drawing, and a semantic
loss which matches the CLIP features of a line drawing with its corresponding
photograph. Our approach outperforms state-of-the-art unpaired image
translation and line drawing generation methods on creating line drawings from
arbitrary photographs. For code and demo visit our webpage
carolineec.github.io/informative_drawings
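To make the two objectives concrete, below is a minimal PyTorch-style sketch of what the semantic and geometry losses could look like, assuming OpenAI's CLIP package for image embeddings. The names `depth_decoder`, `drawing_features`, and `pseudo_depth` are hypothetical stand-ins for the authors' components, not their actual implementation.

```python
# Hedged sketch of the paper's two auxiliary objectives, assuming a PyTorch
# setup and OpenAI's CLIP package. `depth_decoder`, `drawing_features`, and
# `pseudo_depth` are hypothetical stand-ins, not the authors' exact API.
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, clip_preprocess = clip.load("ViT-B/32", device=device)

def semantic_loss(photo: torch.Tensor, drawing: torch.Tensor) -> torch.Tensor:
    # Match CLIP embeddings of the generated drawing and its source photo;
    # both inputs are assumed already preprocessed to CLIP's input format.
    f_photo = clip_model.encode_image(photo)
    f_drawing = clip_model.encode_image(drawing)
    return 1.0 - F.cosine_similarity(f_photo, f_drawing, dim=-1).mean()

def geometry_loss(drawing_features: torch.Tensor,
                  pseudo_depth: torch.Tensor,
                  depth_decoder: torch.nn.Module) -> torch.Tensor:
    # Predict a depth map from intermediate features of the line drawing and
    # regress it against pseudo ground truth, e.g. from a pretrained
    # monocular depth estimator applied to the photograph.
    predicted_depth = depth_decoder(drawing_features)
    return F.mse_loss(predicted_depth, pseudo_depth)
```

In training, these terms would presumably be added on top of the usual unpaired-translation objectives (e.g., an adversarial loss), weighted against each other.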
Related papers
- Equipping Sketch Patches with Context-Aware Positional Encoding for Graphic Sketch Representation [4.961362040453441]
We propose a variant-drawing-protected method for learning graphic sketch representation.
Instead of injecting sketch drawings into graph edges, we embed this sequential information into graph nodes only.
Experimental results indicate that our method significantly improves sketch healing and controllable sketch synthesis.
arXiv Detail & Related papers (2024-03-26T09:26:12Z)
- Deep Geometrized Cartoon Line Inbetweening [98.35956631655357]
Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem (see the sketch after this entry).
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
arXiv Detail & Related papers (2023-09-28T17:50:05Z)
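To illustrate the graph view of inbetweening described in the entry above, here is a small, self-contained Python sketch: a line drawing as a geometric graph of endpoints, with naive linear interpolation between two key drawings. Vertex correspondence and shared edge topology are assumed given here; learning them is the hard part that AnimeInbet addresses.

```python
# Illustrative sketch (not AnimeInbet's actual code) of representing a line
# drawing as a geometric graph and interpolating between two matched graphs.
from dataclasses import dataclass

@dataclass
class LineGraph:
    vertices: list[tuple[float, float]]  # endpoint/junction coordinates
    edges: list[tuple[int, int]]         # strokes as vertex-index pairs

def inbetween(g0: LineGraph, g1: LineGraph, t: float) -> LineGraph:
    # Linearly interpolate corresponding vertex positions at time t in [0, 1].
    assert len(g0.vertices) == len(g1.vertices)
    verts = [
        ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
        for (x0, y0), (x1, y1) in zip(g0.vertices, g1.vertices)
    ]
    # Edge topology is assumed shared between the two key drawings.
    return LineGraph(vertices=verts, edges=g0.edges)
```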
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data [88.78171717494688]
We propose a novel method to automatically transform face photos to portrait drawings using unpaired training data.
Our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
arXiv Detail & Related papers (2022-02-08T06:49:57Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, namely that they are hierarchical in terms of the levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- Neural Contours: Learning to Draw Lines from 3D Shapes [20.650770317411233]
Our architecture incorporates a differentiable module operating on geometric features of the 3D model, and an image-based module operating on view-based shape representations.
At test time, geometric and view-based reasoning are combined with the help of a neural module to create a line drawing.
arXiv Detail & Related papers (2020-03-23T15:37:49Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data (a generic metric-learning sketch follows this entry).
arXiv Detail & Related papers (2020-01-16T11:31:21Z)
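As a rough illustration of the descriptor learning described in the SketchDesc entry above, the following is a generic metric-learning sketch using a triplet loss. The `PatchEncoder` architecture and the random tensors are placeholders, not the paper's actual network or data.

```python
# Hedged sketch of learning a local patch descriptor with a triplet loss,
# a generic stand-in for SketchDesc's data-driven descriptor training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # L2-normalize descriptors so distances are comparable across views.
        return F.normalize(self.net(patch), dim=-1)

encoder = PatchEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.2)
# anchor/positive: sketch patches of the same point seen from different
# views; negative: a patch of a different point (shape [B, 1, 32, 32]).
anchor, positive, negative = (torch.randn(8, 1, 32, 32) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
```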