3D VR Sketch Guided 3D Shape Prototyping and Exploration
- URL: http://arxiv.org/abs/2306.10830v6
- Date: Wed, 10 Jan 2024 08:55:22 GMT
- Title: 3D VR Sketch Guided 3D Shape Prototyping and Exploration
- Authors: Ling Luo, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song, Yulia Gryaditskaya
- Abstract summary: We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
- Score: 108.6809158245037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D shape modeling is labor-intensive, time-consuming, and requires years of
expertise. To facilitate 3D shape modeling, we propose a 3D shape generation
network that takes a 3D VR sketch as a condition. We assume that sketches are
created by novices without art training and aim to reconstruct geometrically
realistic 3D shapes of a given category. To handle potential sketch ambiguity,
our method creates multiple 3D shapes that align with the original sketch's
structure. We carefully design our method, training the model step-by-step and
leveraging a multi-modal 3D shape representation to support training with
limited data. To guarantee the realism of generated 3D shapes, we leverage a
normalizing flow that models the distribution of the latent space of 3D shapes.
To encourage the fidelity of the generated 3D shapes to an input sketch, we
propose a dedicated loss that we deploy at different stages of the training
process. The code is available at https://github.com/Rowl1ng/3Dsketch2shape.
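To make the abstract's core mechanism concrete, below is a minimal PyTorch sketch of a conditional normalizing flow over a shape latent space: sampling several base-noise vectors for one sketch embedding yields several distinct shape latents, which is how one ambiguous sketch can map to multiple plausible shapes. This is an illustration under stated assumptions, not the authors' released code: the latent and condition dimensions, the single affine-coupling layer, and all module names are hypothetical; see the linked repository for the actual implementation.

```python
# Minimal sketch (not the authors' code) of a conditional normalizing flow
# over a 3D-shape latent space. Assumed: latent/condition sizes, a single
# affine-coupling layer, and a precomputed VR-sketch embedding.
import torch
import torch.nn as nn

LATENT_DIM = 128   # assumed size of the 3D-shape latent code
COND_DIM = 256     # assumed size of the VR-sketch embedding

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer: half of the latent is transformed with a
    scale/shift predicted from the other half plus the sketch condition."""
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 2 * (dim - self.half)),
        )

    def inverse(self, z, cond):
        # Map base noise z back to a shape latent (generation direction).
        z1, z2 = z[:, :self.half], z[:, self.half:]
        scale, shift = self.net(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
        x2 = (z2 - shift) * torch.exp(-scale)
        return torch.cat([z1, x2], dim=1)

def sample_shape_latents(sketch_emb, flow, num_samples=5):
    """Draw several plausible shape latents for one ambiguous sketch."""
    cond = sketch_emb.expand(num_samples, -1)   # reuse the same condition
    z = torch.randn(num_samples, LATENT_DIM)    # base Gaussian noise
    return flow.inverse(z, cond)

flow = ConditionalAffineCoupling(LATENT_DIM, COND_DIM)
sketch_emb = torch.randn(1, COND_DIM)  # stand-in for a sketch encoder output
latents = sample_shape_latents(sketch_emb, flow)
print(latents.shape)  # torch.Size([5, 128])
```

Each latent would then be decoded into a 3D shape; because the flow models the latent distribution of real shapes, samples stay on that distribution while the sketch condition keeps them aligned with the input structure.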
Related papers
- Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering [17.918603435615335]
3D sketches are widely used for visually representing the 3D shape and structure of objects or scenes.
We propose Diff3DS, a novel differentiable framework for generating view-consistent 3D sketches.
Our framework bridges the domains of 3D sketches and customized images, achieving end-to-end optimization of 3D sketches.
arXiv Detail & Related papers (2024-05-24T07:48:14Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation [13.47191379827792]
We investigate how large pre-trained models can be used to generate 3D shapes from sketches.
We find that conditioning a 3D generative model on the features of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time.
This suggests that features from large pre-trained vision models carry semantic signals that are resilient to domain shifts; a minimal code sketch of this setup follows the related-papers list below.
arXiv Detail & Related papers (2023-07-08T00:45:01Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Neural Strokes: Stylized Line Drawing of 3D Shapes [36.88356061690497]
This paper introduces a model for producing stylized line drawings from 3D shapes.
The model takes a 3D shape and a viewpoint as input, and outputs a drawing with textured strokes.
arXiv Detail & Related papers (2021-10-08T05:40:57Z)
- 3D Shape Reconstruction from Free-Hand Sketches [42.15888734492648]
Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from insufficient training data and the diversity of free-hand sketches.
arXiv Detail & Related papers (2020-06-17T07:43:10Z)
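As promised in the Sketch-A-Shape entry above, here is a minimal, assumption-laden sketch of its conditioning idea: a frozen pre-trained vision encoder featurizes synthetic shape renderings at training time and free-hand sketches at inference time, so the 3D generator only ever sees one shared feature space. A torchvision ResNet-18 stands in for the large pre-trained model, the generator is a placeholder MLP, and random tensors stand in for real images; none of this is the paper's code.

```python
# Minimal sketch of feature-space conditioning for zero-shot sketch-to-shape.
# Assumed stand-ins: ResNet-18 for the large pre-trained vision model,
# a placeholder MLP for the conditional 3D generator, random image tensors.
import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(weights="IMAGENET1K_V1")
encoder.fc = nn.Identity()      # expose the 512-d pooled features
encoder.eval()
for p in encoder.parameters():  # freeze the encoder; only its features are used
    p.requires_grad = False

generator = nn.Sequential(      # placeholder for the conditional 3D generator
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128)
)

with torch.no_grad():
    rendering = torch.randn(1, 3, 224, 224)  # synthetic rendering (training)
    sketch = torch.randn(1, 3, 224, 224)     # free-hand sketch (inference)
    f_render, f_sketch = encoder(rendering), encoder(sketch)

# The same generator consumes both: the domain shift between renderings and
# sketches is absorbed by the robust pre-trained features.
shape_code = generator(f_sketch)
print(f_render.shape, shape_code.shape)  # [1, 512], [1, 128]
```

The design point is that no sketch data is needed during training: if the encoder's features are domain-robust, training the generator on rendering features transfers to sketch features for free.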
This list is automatically generated from the titles and abstracts of the papers on this site.