NAP: Neural 3D Articulation Prior
- URL: http://arxiv.org/abs/2305.16315v1
- Date: Thu, 25 May 2023 17:59:35 GMT
- Title: NAP: Neural 3D Articulation Prior
- Authors: Jiahui Lei and Congyue Deng and Bokui Shen and Leonidas Guibas and
Kostas Daniilidis
- Abstract summary: We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models.
To generate articulated objects, we first design a novel articulation tree/graph parameterization and then apply a diffusion-denoising probabilistic model over this representation.
In order to capture both the geometry and the motion structure whose distribution will affect each other, we design a graph-attention denoising network for learning the reverse diffusion process.
- Score: 31.875925637190328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative
model to synthesize 3D articulated object models. Despite the extensive
research on generating 3D objects, compositions, or scenes, there remains a
lack of focus on capturing the distribution of articulated objects, a common
object category for human and robot interaction. To generate articulated
objects, we first design a novel articulation tree/graph parameterization and
then apply a diffusion-denoising probabilistic model over this representation
where articulated objects can be generated via denoising from random complete
graphs. In order to capture both the geometry and the motion structure whose
distribution will affect each other, we design a graph-attention denoising
network for learning the reverse diffusion process. We propose a novel distance
that adapts widely used 3D generation metrics to our novel task to evaluate
generation quality, and experiments demonstrate our high performance in
articulated object generation. We also demonstrate several conditioned
generation applications, including Part2Motion, PartNet-Imagination,
Motion2Part, and GAPart2Object.
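The abstract describes generating an articulated object by denoising a random complete graph whose nodes carry part geometry and whose edges carry joint parameters. The toy sketch below illustrates that pipeline shape only; it is not the authors' implementation — the attribute dimensions, the trivial stand-in denoiser, and the update rule are placeholder assumptions for what NAP realizes with a learned graph-attention network and a DDPM schedule.

```python
# Toy sketch of NAP-style articulated-object generation: start from a random
# complete graph (Gaussian node/edge attributes) and run a reverse-diffusion
# loop. The "denoiser" is a placeholder for the paper's graph-attention net.
import random

NODE_DIM = 8   # per-part latent (e.g. shape code + bounding box); assumed size
EDGE_DIM = 6   # per-joint parameters (e.g. axis + origin); assumed size

def random_complete_graph(n_parts, rng):
    """Initialize node and edge attributes with Gaussian noise."""
    nodes = [[rng.gauss(0, 1) for _ in range(NODE_DIM)] for _ in range(n_parts)]
    edges = {(i, j): [rng.gauss(0, 1) for _ in range(EDGE_DIM)]
             for i in range(n_parts) for j in range(i + 1, n_parts)}
    return nodes, edges

def toy_denoiser(nodes, edges, t):
    """Stand-in for the graph-attention network: predicts noise to remove.
    Here it simply pulls all attributes toward zero, scaled by timestep t."""
    eps_nodes = [[x * t for x in v] for v in nodes]
    eps_edges = {k: [x * t for x in v] for k, v in edges.items()}
    return eps_nodes, eps_edges

def reverse_diffusion(n_parts, steps=50, seed=0):
    """Denoise a random complete graph into (nodes, edges) attributes."""
    rng = random.Random(seed)
    nodes, edges = random_complete_graph(n_parts, rng)
    for step in range(steps, 0, -1):
        t = step / steps
        eps_n, eps_e = toy_denoiser(nodes, edges, t)
        lr = 1.0 / steps
        nodes = [[x - lr * e for x, e in zip(v, ev)]
                 for v, ev in zip(nodes, eps_n)]
        edges = {k: [x - lr * e for x, e in zip(edges[k], eps_e[k])]
                 for k in edges}
    return nodes, edges

nodes, edges = reverse_diffusion(n_parts=4)
print(len(nodes), len(edges))  # 4 parts, 6 candidate joints in the complete graph
```

In the actual method, the denoised edge attributes would additionally be decoded into an articulation tree (selecting which candidate joints exist), which this sketch omits.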
Related papers
- Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address this limitation.
arXiv Detail & Related papers (2024-10-12T10:14:11Z)
- LEIA: Latent View-invariant Embeddings for Implicit 3D Articulation [32.27869897947267]
We introduce LEIA, a novel approach for representing dynamic 3D objects.
Our method involves observing the object at distinct time steps or "states" and conditioning a hypernetwork on the current state.
By interpolating between these states, we can generate novel articulation configurations in 3D space that were previously unseen.
arXiv Detail & Related papers (2024-09-10T17:59:53Z)
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is trained directly on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z) - HOIDiffusion: Generating Realistic 3D Hand-Object Interaction Data [42.49031063635004]
We propose HOIDiffusion for generating realistic and diverse 3D hand-object interaction data.
Our model is a conditional diffusion model that takes both the 3D hand-object geometric structure and text description as inputs for image synthesis.
We adopt the generated 3D data for learning 6D object pose estimation and show its effectiveness in improving perception systems.
arXiv Detail & Related papers (2024-03-18T17:48:31Z) - L3GO: Language Agents with Chain-of-3D-Thoughts for Generating
Unconventional Objects [53.4874127399702]
We propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation.
We develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender.
Our approach surpasses the standard GPT-4 and other language agents for 3D mesh generation on ShapeNet.
arXiv Detail & Related papers (2024-02-14T09:51:05Z) - NOVUM: Neural Object Volumes for Robust Object Classification [22.411611823528272]
We show that explicitly integrating 3D compositional object representations into deep networks for image classification leads to a largely enhanced generalization in out-of-distribution scenarios.
In particular, we introduce a novel architecture, referred to as NOVUM, that consists of a feature extractor and a neural object volume for every target object class.
Our experiments show that NOVUM offers intriguing advantages over standard architectures due to the 3D compositional structure of the object representation.
arXiv Detail & Related papers (2023-05-24T03:20:09Z) - Explicit3D: Graph Network with Spatial Inference for Single Image 3D
Object Detection [35.85544715234846]
We propose a dynamic sparse graph pipeline named Explicit3D based on object geometry and semantics features.
Our experimental results on the SUN RGB-D dataset demonstrate that our Explicit3D achieves a better performance balance than the state-of-the-art.
arXiv Detail & Related papers (2023-02-13T16:19:54Z) - Disentangled3D: Learning a 3D Generative Model with Disentangled
Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - RandomRooms: Unsupervised Pre-training from Synthetic Shapes and
Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training fails when transferring features learned on synthetic objects to other real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.