Neural Contours: Learning to Draw Lines from 3D Shapes
- URL: http://arxiv.org/abs/2003.10333v3
- Date: Sun, 5 Apr 2020 03:22:55 GMT
- Title: Neural Contours: Learning to Draw Lines from 3D Shapes
- Authors: Difan Liu, Mohamed Nabail, Aaron Hertzmann, Evangelos Kalogerakis
- Abstract summary: Our architecture incorporates a differentiable module operating on geometric features of the 3D model, and an image-based module operating on view-based shape representations.
At test time, geometric and view-based reasoning are combined with the help of a neural module to create a line drawing.
- Score: 20.650770317411233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a method for learning to generate line drawings from 3D
models. Our architecture incorporates a differentiable module operating on
geometric features of the 3D model, and an image-based module operating on
view-based shape representations. At test time, geometric and view-based
reasoning are combined with the help of a neural module to create a line
drawing. The model is trained on a large number of crowdsourced comparisons of
line drawings. Experiments demonstrate that our method achieves significant
improvements in line drawing over the state-of-the-art when evaluated on
standard benchmarks, resulting in drawings that are comparable to those
produced by experienced human artists.
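The two-branch design described above (a geometric module on 3D features, an image-based module on view renderings, fused by a learned neural module) can be illustrated with a toy sketch. This is not the paper's implementation: the gradient-based "branches", the fusion weights, and all function names here are hypothetical stand-ins, using only NumPy.

```python
import numpy as np

def geometric_branch(depth):
    """Toy stand-in for the geometric module: line strength from depth
    discontinuities (a crude occluding-contour cue)."""
    gy, gx = np.gradient(depth)
    return np.hypot(gx, gy)

def image_branch(shaded):
    """Toy stand-in for the image-based module: line strength from
    intensity gradients of a view-based rendering."""
    gy, gx = np.gradient(shaded)
    return np.hypot(gx, gy)

def neural_fusion(geom, view, w_geom=0.6, w_view=0.4, bias=-0.1):
    """Stand-in for the learned fusion module: a sigmoid over a weighted
    combination of the two cues. In the paper these weights would be
    learned from crowdsourced comparisons; here they are fixed guesses."""
    logits = w_geom * geom + w_view * view + bias
    return 1.0 / (1.0 + np.exp(-logits))  # per-pixel line probability

# Tiny synthetic view: a depth step with a matching shaded rendering.
depth = np.zeros((8, 8))
depth[:, 4:] = 1.0
shaded = 0.8 * depth + 0.1

drawing = neural_fusion(geometric_branch(depth), image_branch(shaded))
```

The resulting `drawing` is a per-pixel line-probability map in [0, 1] whose values peak along the depth discontinuity, mimicking how the combined reasoning would place a contour line.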
Related papers
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation [13.47191379827792]
We investigate how large pre-trained models can be used to generate 3D shapes from sketches.
We find that conditioning a 3D generative model on the features of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time.
This suggests that the large pre-trained vision model features carry semantic signals that are resilient to domain shifts.
arXiv Detail & Related papers (2023-07-08T00:45:01Z)
- 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into a ViT patch encoding.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- Neural Strokes: Stylized Line Drawing of 3D Shapes [36.88356061690497]
This paper introduces a model for producing stylized line drawings from 3D shapes.
The model takes a 3D shape and a viewpoint as input, and outputs a drawing with textured strokes.
arXiv Detail & Related papers (2021-10-08T05:40:57Z)
- SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D Animalmorphic Head Design [40.821865912127635]
We propose SimpModeling, a novel sketch-based system that helps users, especially amateurs, easily model 3D animalmorphic heads.
We use advanced implicit shape inference methods, which can handle the domain gap between freehand sketches and the synthetic sketches used for training.
We also contribute a dataset of high-quality 3D animal heads, manually created by artists.
arXiv Detail & Related papers (2021-08-05T12:17:36Z)
- Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
In addition, we recover the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
- Landmark Detection and 3D Face Reconstruction for Caricature using a Nonlinear Parametric Model [27.553158595012974]
We propose the first automatic method for landmark detection and 3D face reconstruction of caricatures.
Based on the constructed dataset and the nonlinear parametric model, we propose a neural network based method to regress the 3D face shape and orientation from the input 2D caricature image.
arXiv Detail & Related papers (2020-04-20T10:34:52Z)
- Modeling 3D Shapes by Reinforcement Learning [33.343268605720176]
We propose a two-step neural framework based on RL to learn 3D modeling policies.
To effectively train the modeling agents, we introduce a novel training algorithm that combines policy learning, imitation learning, and reinforcement learning.
Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models.
arXiv Detail & Related papers (2020-03-27T13:05:39Z)
- Self-Supervised 2D Image to 3D Shape Translation with Disentangled Representations [92.89846887298852]
We present a framework to translate between 2D image views and 3D object shapes.
We propose SIST, a Self-supervised Image to Shape Translation framework.
arXiv Detail & Related papers (2020-03-22T22:44:02Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.