PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D
Supervision
- URL: http://arxiv.org/abs/2303.09554v3
- Date: Tue, 21 Mar 2023 16:09:25 GMT
- Title: PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D
Supervision
- Authors: Konstantinos Tertikas and Despoina Paschalidou and Boxiao Pan and
Jeong Joon Park and Mikaela Angelina Uy and Ioannis Emiris and Yannis
Avrithis and Leonidas Guibas
- Abstract summary: PartNeRF is a part-aware generative model for editable 3D shape synthesis that does not require explicit 3D supervision.
Our model generates objects as a set of locally defined NeRFs, augmented with an affine transformation.
This enables several editing operations, such as applying transformations to parts and mixing parts from different objects.
- Score: 21.186624546494993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Impressive progress in generative models and implicit representations gave
rise to methods that can generate 3D shapes of high quality. However, being
able to locally control and edit shapes is another essential property that can
unlock several content creation applications. Local control can be achieved
with part-aware models, but existing methods require 3D supervision and cannot
produce textures. In this work, we devise PartNeRF, a novel part-aware
generative model for editable 3D shape synthesis that does not require any
explicit 3D supervision. Our model generates objects as a set of locally
defined NeRFs, augmented with an affine transformation. This enables several
editing operations, such as applying transformations to parts or mixing parts
from different objects. To ensure distinct, manipulable parts, we enforce a
hard assignment of rays to parts, which guarantees that the color of each ray
is determined by a single NeRF. As a result, altering one part does not affect the
appearance of the others. Evaluations on various ShapeNet categories
demonstrate the ability of our model to generate editable 3D objects of
improved fidelity, compared to previous part-based generative approaches that
require 3D supervision or models relying on NeRFs.
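The hard ray-to-part assignment described in the abstract can be illustrated with a minimal sketch. Assuming each local NeRF yields a per-ray association score (the function names, array shapes, and the use of a simple argmax are illustrative assumptions, not the paper's exact formulation), each ray's color is then taken from exactly one part, so editing one part leaves the others visually untouched:

```python
import numpy as np

def hard_assign_rays(ray_scores: np.ndarray) -> np.ndarray:
    """Assign each ray to exactly one part via argmax.

    ray_scores: (num_rays, num_parts) association scores
    (hypothetical; PartNeRF derives such scores from per-part
    occupancy along each ray).
    Returns: (num_rays,) index of the single part whose NeRF
    determines each ray's color.
    """
    return np.argmax(ray_scores, axis=1)

def render_colors(ray_scores: np.ndarray,
                  part_colors: np.ndarray) -> np.ndarray:
    """part_colors: (num_parts, num_rays, 3) colors that each
    local NeRF would produce for each ray. Because the
    assignment is hard (one part per ray), rays assigned to
    other parts are unaffected when one part is edited."""
    assign = hard_assign_rays(ray_scores)     # (num_rays,)
    rays = np.arange(ray_scores.shape[0])
    return part_colors[assign, rays]          # (num_rays, 3)
```

With soft blending, every part would contribute to every ray's color; the argmax makes the part decomposition editable, at the cost of a non-differentiable assignment that the training procedure must handle.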
Related papers
- Instructive3D: Editing Large Reconstruction Models with Text Instructions [2.9575146209034853]
Instructive3D is a novel LRM based model that integrates generation and fine-grained editing, through user text prompts, of 3D objects into a single model.
We find that Instructive3D produces superior 3D objects with the properties specified by the edit prompts.
arXiv Detail & Related papers (2025-01-08T09:28:25Z)
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation.
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
- PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models [63.1432721793683]
We introduce PartGen, a novel approach that generates 3D objects composed of meaningful parts starting from text, an image, or an unstructured 3D object.
We evaluate our method on generated and real 3D assets and show that it outperforms segmentation and part-extraction baselines by a large margin.
arXiv Detail & Related papers (2024-12-24T18:59:43Z)
- Manipulating Vehicle 3D Shapes through Latent Space Editing [0.0]
This paper introduces a framework that employs a pre-trained regressor, enabling continuous, precise, attribute-specific modifications to vehicle 3D models.
Our method not only preserves the inherent identity of vehicle 3D objects, but also supports multi-attribute editing, allowing for extensive customization without compromising the model's structural integrity.
arXiv Detail & Related papers (2024-10-31T13:41:16Z)
- PASTA: Controllable Part-Aware Shape Generation with Autoregressive Transformers [5.7181794813117754]
PASTA is an autoregressive transformer architecture for generating high quality 3D shapes.
Our model generates 3D shapes that are both more realistic and diverse than existing part-based and non part-based methods.
arXiv Detail & Related papers (2024-07-18T16:52:45Z)
- Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning [52.81032340916171]
Coin3D allows users to control the 3D generation using a coarse geometry proxy assembled from basic shapes.
Our method achieves superior controllability and flexibility in the 3D assets generation task.
arXiv Detail & Related papers (2024-05-13T17:56:13Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
In the endeavor to register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encodes 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- ONeRF: Unsupervised 3D Object Segmentation from Multiple Views [59.445957699136564]
ONeRF is a method that automatically segments and reconstructs object instances in 3D from multi-view RGB images without any additional manual annotations.
The segmented 3D objects are represented using separate Neural Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view rendering.
arXiv Detail & Related papers (2022-11-22T06:19:37Z)
- Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.