L3GO: Language Agents with Chain-of-3D-Thoughts for Generating
Unconventional Objects
- URL: http://arxiv.org/abs/2402.09052v1
- Date: Wed, 14 Feb 2024 09:51:05 GMT
- Title: L3GO: Language Agents with Chain-of-3D-Thoughts for Generating
Unconventional Objects
- Authors: Yutaro Yamada, Khyathi Chandu, Yuchen Lin, Jack Hessel, Ilker
Yildirim, Yejin Choi
- Abstract summary: We propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation.
We develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender.
Our approach surpasses the standard GPT-4 and other language agents for 3D mesh generation on ShapeNet.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion-based image generation models such as DALL-E 3 and Stable
Diffusion-XL demonstrate remarkable capabilities in generating images with
realistic and unique compositions. Yet, these models are not robust in
precisely reasoning about physical and spatial configurations of objects,
especially when instructed with unconventional, and therefore
out-of-distribution, descriptions such as "a chair with five legs". In this
paper, we propose a
language agent with chain-of-3D-thoughts (L3GO), an inference-time approach
that can reason about part-based 3D mesh generation of unconventional objects
that current data-driven diffusion models struggle with. More concretely, we
use large language models as agents to compose a desired object via
trial-and-error within the 3D simulation environment. To facilitate our
investigation, we develop a new benchmark, Unconventionally Feasible Objects
(UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender
where language agents can build and compose atomic building blocks via API
calls. Human and automatic GPT-4V evaluations show that our approach surpasses
the standard GPT-4 and other language agents (e.g., ReAct and Reflexion) for 3D
mesh generation on ShapeNet. Moreover, when tested on our UFO benchmark, our
approach outperforms other state-of-the-art text-to-2D image and text-to-3D
models based on human evaluation.
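The core mechanism the abstract describes is an LLM agent that places atomic building blocks through Blender API calls and revises them by trial and error. The sketch below illustrates that idea in Python; the bpy.ops.mesh.* operators are Blender's real Python API, but SimpleBlenv's actual interface is not given here, so the spawn_primitive helper and the five-legged-chair layout are hypothetical assumptions for illustration only.

import math
import bpy

def spawn_primitive(kind, dims, location):
    """Add one atomic building block to the scene and return it."""
    if kind == "cube":
        bpy.ops.mesh.primitive_cube_add(size=dims[0], location=location)
    elif kind == "cylinder":
        bpy.ops.mesh.primitive_cylinder_add(radius=dims[0], depth=dims[1],
                                            location=location)
    return bpy.context.active_object

# "A chair with five legs": a flattened cube as the seat.
seat = spawn_primitive("cube", (1.0,), (0.0, 0.0, 1.0))
seat.scale = (1.0, 1.0, 0.1)

# Five cylindrical legs spaced evenly under the seat. An L3GO-style agent
# would pick such placements, render the scene (bpy.ops.render.render()),
# and revise any part that looks wrong in the feedback.
for i in range(5):
    angle = 2.0 * math.pi * i / 5.0
    x, y = 0.8 * math.cos(angle), 0.8 * math.sin(angle)
    spawn_primitive("cylinder", (0.08, 1.0), (x, y, 0.5))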
Related papers
- ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance
We present ComboVerse, a 3D generation framework that produces high-quality 3D assets with complex compositions by learning to combine multiple models.
Our proposed framework emphasizes spatial alignment of objects, compared with standard score distillation sampling.
arXiv Detail & Related papers (2024-03-19T03:39:43Z)
- FMGS: Foundation Model Embedded 3D Gaussian Splatting for Holistic 3D Scene Understanding
We present Foundation Model Embedded Gaussian Splatting (FMGS), which incorporates vision-language embeddings of foundation models into 3D Gaussian Splatting (GS).
Results demonstrate remarkable multi-view semantic consistency, facilitating diverse downstream tasks, beating state-of-the-art methods by 10.2 percent on open-vocabulary language-based object detection.
This research explores the intersection of vision, language, and 3D scene representation, paving the way for enhanced scene understanding in uncontrolled real-world environments.
arXiv Detail & Related papers (2024-01-03T20:39:02Z)
- CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting
CG3D is a method for compositionally generating scalable 3D assets.
Gaussian radiance fields, parameterized to allow for compositions of objects, enable semantically and physically consistent scenes.
arXiv Detail & Related papers (2023-11-29T18:55:38Z)
- AutoDecoding Latent 3D Diffusion Models
We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core.
The 3D autodecoder framework embeds properties learned from the target dataset in the latent space.
We then identify the appropriate intermediate volumetric latent space, and introduce robust normalization and de-normalization operations.
arXiv Detail & Related papers (2023-07-07T17:59:14Z)
- NAP: Neural 3D Articulation Prior
We propose Neural 3D Articulation Prior (NAP), the first 3D deep generative model to synthesize 3D articulated object models.
To generate articulated objects, we first design a novel articulation tree/graph parameterization and then apply a diffusion-denoising probabilistic model over this representation.
To capture both the geometry and the motion structure, whose distributions affect each other, we design a graph-attention denoising network for learning the reverse diffusion process (a minimal illustrative sketch of such a denoising step appears after this list).
arXiv Detail & Related papers (2023-05-25T17:59:35Z)
- Anything-3D: Towards Single-view Anything Reconstruction in the Wild
We introduce Anything-3D, a methodical framework that ingeniously combines a series of visual-language models and the Segment-Anything object segmentation model.
Our approach employs a BLIP model to generate textual descriptions, utilizes the Segment-Anything model for the effective extraction of objects of interest, and leverages a text-to-image diffusion model to lift the object into a neural radiance field.
arXiv Detail & Related papers (2023-04-19T16:39:51Z) - LanguageRefer: Spatial-Language Model for 3D Visual Grounding [72.7618059299306]
We develop a spatial-language model for a 3D visual grounding problem.
We show that our model performs competitively on visio-linguistic datasets proposed by ReferIt3D.
arXiv Detail & Related papers (2021-07-07T18:55:03Z)
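For the NAP entry above, here is a minimal, self-contained sketch of what one graph-attention denoising step over per-part node features could look like. This is an assumption-laden illustration in PyTorch, not NAP's actual model: the feature dimension, attention configuration, and schedule constants are all placeholders.

import torch
import torch.nn as nn

class GraphAttentionDenoiser(nn.Module):
    """Predicts the noise added to per-node (per-part) features; attention
    lets the geometry and joint parameters of different parts influence
    each other, as the NAP summary describes."""
    def __init__(self, node_dim=32, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(node_dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(node_dim + 1, node_dim), nn.SiLU(),
            nn.Linear(node_dim, node_dim),
        )

    def forward(self, x, t):
        # x: (batch, n_parts, node_dim) noisy node features
        # t: (batch,) diffusion timestep, broadcast to every node
        h, _ = self.attn(x, x, x)
        t_feat = t.view(-1, 1, 1).expand(-1, x.size(1), 1).float()
        return self.mlp(torch.cat([h, t_feat], dim=-1))

# One DDPM-style reverse step (posterior mean only; the stochastic noise
# term is omitted for brevity). alpha/alpha_bar are placeholder values.
model = GraphAttentionDenoiser()
x_t = torch.randn(2, 5, 32)            # 2 objects, 5 parts each
t = torch.tensor([10, 10])
eps_hat = model(x_t, t)                # predicted noise
alpha, alpha_bar = 0.99, 0.5
x_prev = (x_t - (1 - alpha) / (1 - alpha_bar) ** 0.5 * eps_hat) / alpha ** 0.5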