CharNeRF: 3D Character Generation from Concept Art
- URL: http://arxiv.org/abs/2402.17115v1
- Date: Tue, 27 Feb 2024 01:22:08 GMT
- Title: CharNeRF: 3D Character Generation from Concept Art
- Authors: Eddy Chu, Yiyang Chen, Chedy Raissi, Anand Bhojan
- Abstract summary: We present a novel approach to create volumetric representations of 3D characters from consistent turnaround concept art.
We train the network to make use of these priors for various 3D points through a learnable view-direction-attended multi-head self-attention layer.
Our model is able to generate high-quality 360-degree views of characters.
- Score: 3.8061090528695543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D modeling holds significant importance in the realms of AR/VR and gaming,
allowing for both artistic creativity and practical applications. However, the
process is often time-consuming and demands a high level of skill. In this
paper, we present a novel approach to create volumetric representations of 3D
characters from consistent turnaround concept art, which serves as the standard
input in the 3D modeling industry. While Neural Radiance Field (NeRF) has been
a game-changer in image-based 3D reconstruction, to the best of our knowledge,
there is no known research that optimizes the pipeline for concept art. To
harness the potential of concept art, with its defined body poses and specific
view angles, we propose encoding it as priors for our model. We train the
network to make use of these priors for various 3D points through a learnable
view-direction-attended multi-head self-attention layer. Additionally, we
demonstrate that a combination of ray sampling and surface sampling enhances
the inference capabilities of our network. Our model is able to generate
high-quality 360-degree views of characters. Subsequently, we provide a simple
guideline to better leverage our model to extract the 3D mesh. It is important
to note that our model's inferencing capabilities are influenced by the
training data's characteristics, primarily focusing on characters with a single
head, two arms, and two legs. Nevertheless, our methodology remains versatile
and adaptable to concept art from diverse subject matters, without imposing any
specific assumptions on the data.
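As a rough illustration of the fusion mechanism described in the abstract, below is a minimal, hypothetical PyTorch sketch of a view-direction-attended multi-head self-attention block that combines per-view concept-art features for a queried 3D point. This is not the authors' code: the class name, feature dimensions, direction embedding, and mean-pooling readout are all assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's implementation): fuse pixel-aligned
# features from N concept-art views for one 3D point, weighting views by a
# learnable attention over view-direction-conditioned tokens.
import torch
import torch.nn as nn


class ViewDirectionAttendedFusion(nn.Module):
    """Fuse features from N concept-art views for a single 3D query point."""

    def __init__(self, feat_dim: int = 256, dir_dim: int = 32, num_heads: int = 4):
        super().__init__()
        self.dir_embed = nn.Linear(3, dir_dim)           # embed per-view viewing direction
        self.in_proj = nn.Linear(feat_dim + dir_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, view_feats: torch.Tensor, view_dirs: torch.Tensor) -> torch.Tensor:
        # view_feats: (B, N, feat_dim)  pixel-aligned features, one per concept-art view
        # view_dirs:  (B, N, 3)         unit vectors from the 3D point toward each view's camera
        tokens = self.in_proj(torch.cat([view_feats, self.dir_embed(view_dirs)], dim=-1))
        fused, _ = self.attn(tokens, tokens, tokens)     # learnable cross-view weighting
        # Pool the attended per-view tokens into one per-point feature that a
        # NeRF-style MLP could decode into density and colour.
        return self.out_proj(fused.mean(dim=1))


if __name__ == "__main__":
    B, N = 8, 4                                          # batch of 3D points, 4 turnaround views
    fusion = ViewDirectionAttendedFusion()
    point_feat = fusion(torch.randn(B, N, 256), torch.randn(B, N, 3))
    print(point_feat.shape)                              # torch.Size([8, 256])
```

In such a setup, the same fused per-point feature would be queried both along camera rays and near the estimated surface, consistent with the paper's combination of ray sampling and surface sampling at inference time.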
Related papers
- Diffusion Models in 3D Vision: A Survey [11.116658321394755]
We review the state-of-the-art approaches that leverage diffusion models for 3D visual tasks.
These approaches include 3D object generation, shape completion, point cloud reconstruction, and scene understanding.
We discuss potential solutions, including improving computational efficiency, enhancing multimodal fusion, and exploring the use of large-scale pretraining.
arXiv Detail & Related papers (2024-10-07T04:12:23Z) - En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z) - Customize-It-3D: High-Quality 3D Creation from A Single Image Using Subject-Specific Knowledge Prior [33.45375100074168]
We present a novel two-stage approach that fully utilizes the information provided by the reference image to establish a customized knowledge prior for image-to-3D generation.
Experiments showcase the superiority of our method, Customize-It-3D, outperforming previous works by a substantial margin.
arXiv Detail & Related papers (2023-12-15T19:07:51Z) - PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z) - Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models ignited immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools.
arXiv Detail & Related papers (2023-09-15T16:34:51Z) - NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views [77.93662205673297]
In this work, we study the challenging task of lifting a single image to a 3D object.
We demonstrate the ability to generate a plausible 3D object with 360° views that correspond well with a given reference image.
We propose a novel framework, dubbed NeuralLift-360, that utilizes a depth-aware radiance representation.
arXiv Detail & Related papers (2022-11-29T17:59:06Z) - GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z) - 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.