Learning to Shadow Hand-drawn Sketches
- URL: http://arxiv.org/abs/2002.11812v2
- Date: Thu, 2 Apr 2020 23:12:21 GMT
- Title: Learning to Shadow Hand-drawn Sketches
- Authors: Qingyuan Zheng, Zhuoru Li and Adam Bargteil
- Abstract summary: We present a fully automatic method to generate detailed and accurate artistic shadows from pairs of line drawing sketches and lighting directions.
We contribute a new dataset of one thousand examples of pairs of line drawings and shadows that are tagged with lighting directions.
- Score: 5.929956715430167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a fully automatic method to generate detailed and accurate
artistic shadows from pairs of line drawing sketches and lighting directions.
We also contribute a new dataset of one thousand examples of pairs of line
drawings and shadows that are tagged with lighting directions. Remarkably, the
generated shadows quickly communicate the underlying 3D structure of the
sketched scene. Consequently, the shadows generated by our approach can be used
directly or as an excellent starting point for artists. We demonstrate that the
deep learning network we propose takes a hand-drawn sketch, builds a 3D model
in latent space, and renders the resulting shadows. The generated shadows
respect the hand-drawn lines and underlying 3D space and contain sophisticated
and accurate details, such as self-shadowing effects. Moreover, the generated
shadows contain artistic effects, such as rim lighting or halos appearing from
back lighting, that would be achievable with traditional 3D rendering methods.
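To make the described pipeline concrete, here is a minimal, hedged PyTorch sketch of a network that consumes a line-drawing image together with a discrete lighting-direction label and emits a shadow image. The layer sizes, the 8-way direction vocabulary, and the way the lighting code is injected at the bottleneck are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch (not the authors' published architecture): a conditional
# encoder-decoder that maps a line-drawing image plus a discrete lighting
# direction to a shadow image. Layer sizes and the 8-way direction
# vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

class SketchShadowNet(nn.Module):
    def __init__(self, num_light_dirs: int = 8, emb_dim: int = 64):
        super().__init__()
        # Encoder: 1-channel sketch -> spatial latent ("3D model in latent space").
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Lighting direction enters as a learned embedding, broadcast over space.
        self.light_emb = nn.Embedding(num_light_dirs, emb_dim)
        # Decoder: latent + lighting embedding -> 1-channel shadow map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + emb_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, sketch: torch.Tensor, light_dir: torch.Tensor) -> torch.Tensor:
        z = self.encoder(sketch)                       # (B, 128, H/8, W/8)
        e = self.light_emb(light_dir)                  # (B, emb_dim)
        e = e[:, :, None, None].expand(-1, -1, z.shape[2], z.shape[3])
        return self.decoder(torch.cat([z, e], dim=1))  # (B, 1, H, W) shadow

# Example: one 256x256 sketch lit from (assumed) direction index 3.
net = SketchShadowNet()
shadow = net(torch.rand(1, 1, 256, 256), torch.tensor([3]))
```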
Related papers
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss.
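A hedged sketch of how such a multi-term objective could be composed, assuming the rendered color image, a target color image, and precomputed CLIP embeddings of the rendering and the input sketch are already available as tensors; the weights, function name, and omission of the structural term are illustrative choices, not Sketch3D's actual implementation.

```python
# Illustrative only: combining an MSE color term with a CLIP-based geometric
# similarity term, as described above. The embeddings are assumed to come
# from a CLIP image encoder applied to the rendered view and the sketch;
# the structural (distribution-transfer) term is omitted here.
import torch
import torch.nn.functional as F

def sketch3d_style_loss(rendered_rgb: torch.Tensor,   # (B, 3, H, W) rendered Gaussians
                        target_rgb: torch.Tensor,     # (B, 3, H, W) text-guided target
                        render_clip: torch.Tensor,    # (B, D) CLIP embedding of rendering
                        sketch_clip: torch.Tensor,    # (B, D) CLIP embedding of input sketch
                        w_color: float = 1.0,
                        w_geom: float = 0.1) -> torch.Tensor:
    color_loss = F.mse_loss(rendered_rgb, target_rgb)
    # Geometric similarity: penalize low cosine similarity between embeddings.
    geom_loss = 1.0 - F.cosine_similarity(render_clip, sketch_clip, dim=-1).mean()
    return w_color * color_loss + w_geom * geom_loss

# Example with random tensors standing in for real renders/embeddings.
loss = sketch3d_style_loss(torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128),
                           torch.randn(2, 512), torch.randn(2, 512))
```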
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes [20.340259111585873]
We present Neural 3D Strokes, a novel technique to generate stylized images of a 3D scene at arbitrary novel views from multi-view 2D images.
Our approach draws inspiration from image-to-painting methods, simulating the progressive painting process of human artwork with vector strokes.
arXiv Detail & Related papers (2023-11-27T09:02:21Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose.
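To give an intuition for how a per-pixel height can determine a cast shadow, here is a minimal NumPy sketch that projects an object mask onto the ground under an assumed directional light; the projection rule, parameter names, and clipping behavior are simplifications for illustration, not that paper's exact formulation.

```python
# Simplified illustration (not the paper's exact method): given a binary
# object mask and a pixel-height map (distance in pixels from each object
# pixel down to its ground contact point), cast a hard shadow by sliding
# each pixel to the ground and offsetting it horizontally by an amount
# proportional to its height, controlled by an assumed light slope.
import numpy as np

def cast_hard_shadow(mask: np.ndarray, pixel_height: np.ndarray,
                     light_slope: float = 0.5) -> np.ndarray:
    """mask, pixel_height: (H, W); light_slope: horizontal shift per unit height."""
    H, W = mask.shape
    shadow = np.zeros((H, W), dtype=bool)
    ys, xs = np.nonzero(mask)
    h = pixel_height[ys, xs]
    ground_y = np.clip((ys + h).astype(int), 0, H - 1)                # drop to ground row
    shadow_x = np.clip((xs + light_slope * h).astype(int), 0, W - 1)  # shift by the light
    shadow[ground_y, shadow_x] = True
    return shadow

# Example: a 20x10 "box" standing on the ground row of a 64x64 image.
mask = np.zeros((64, 64), dtype=bool)
mask[30:50, 20:30] = True
height = np.zeros((64, 64))
height[30:50, 20:30] = 50 - np.arange(30, 50)[:, None]  # distance down to ground row 50
print(cast_hard_shadow(mask, height).sum(), "shadow pixels")
```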
arXiv Detail & Related papers (2022-07-12T08:29:51Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Shadow Art Revisited: A Differentiable Rendering Based Approach [26.910401398827123]
We revisit shadow art using differentiable rendering based optimization frameworks to obtain the 3D sculpture.
Our choice of using differentiable rendering for generating shadow art sculptures can be attributed to its ability to learn the underlying 3D geometry solely from image data.
We demonstrate the generation of 3D sculptures to cast shadows of faces, animated movie characters, and applicability of the framework to sketch-based 3D reconstruction of underlying shapes.
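As a toy illustration of optimizing a 3D shape so that its projections match target shadow images, here is a hedged PyTorch sketch that uses a soft voxel occupancy grid and axis-aligned orthographic projections as a stand-in for a full differentiable renderer; the grid size, targets, and loss are assumptions, and the actual framework above is far more sophisticated.

```python
# Toy stand-in for differentiable-rendering shadow-art optimization:
# a soft voxel grid whose axis-aligned projections are pushed toward
# two binary target shadow images. A real system would use a proper
# differentiable renderer and mesh/implicit geometry instead.
import torch

N = 32
# Start with mostly empty occupancy so projection gradients do not vanish.
logits = torch.full((N, N, N), -3.0, requires_grad=True)
target_xy = torch.zeros(N, N); target_xy[8:24, 8:24] = 1   # assumed target: square
target_xz = torch.zeros(N, N); target_xz[12:20, :] = 1     # assumed target: bar

opt = torch.optim.Adam([logits], lr=0.1)
for step in range(200):
    occ = torch.sigmoid(logits)
    # "Render" shadows as soft orthographic projections along two axes:
    # a column casts a shadow if any voxel in it is occupied (soft OR).
    proj_xy = 1 - torch.prod(1 - occ, dim=2)   # project along z
    proj_xz = 1 - torch.prod(1 - occ, dim=1)   # project along y
    loss = ((proj_xy - target_xy) ** 2).mean() + ((proj_xz - target_xz) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final loss: {loss.item():.4f}")
```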
arXiv Detail & Related papers (2021-07-30T10:43:48Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for the sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
- SSN: Soft Shadow Network for Image Compositing [26.606890595862826]
We introduce an interactive Soft Shadow Network (SSN) to generate controllable soft shadows for image compositing.
SSN takes a 2D object mask as input and thus is agnostic to image types such as painting and vector art.
An environment light map is used to control the shadow's characteristics, such as angle and softness.
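A minimal sketch of the input/output interface described above: a 2D object mask concatenated with a spatially tiled encoding of an environment light map drives a small convolutional network that predicts a soft shadow. The channel counts and the pooling-based way the light map is injected are assumptions for illustration, not SSN's actual design.

```python
# Illustrative interface only (not SSN's published architecture): a 2D object
# mask plus an environment light map drive a small CNN that predicts a soft
# shadow. Here the light map is pooled to a vector and tiled over the image
# so it can be concatenated with the mask as extra channels.
import torch
import torch.nn as nn

class SoftShadowSketch(nn.Module):
    def __init__(self, light_channels: int = 16):
        super().__init__()
        self.light_pool = nn.AdaptiveAvgPool2d(1)           # env map -> per-channel vector
        self.light_proj = nn.Conv2d(3, light_channels, 1)   # assumed 3-channel light map
        self.body = nn.Sequential(
            nn.Conv2d(1 + light_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),    # soft shadow in [0, 1]
        )

    def forward(self, mask: torch.Tensor, light_map: torch.Tensor) -> torch.Tensor:
        light = self.light_proj(self.light_pool(light_map))         # (B, C, 1, 1)
        light = light.expand(-1, -1, mask.shape[2], mask.shape[3])  # tile over the image
        return self.body(torch.cat([mask, light], dim=1))

# Example: 256x256 object mask with a 16x32 environment light map.
net = SoftShadowSketch()
shadow = net(torch.rand(1, 1, 256, 256), torch.rand(1, 3, 16, 32))
```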
arXiv Detail & Related papers (2020-07-16T09:36:39Z)