LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language
Model as an Agent
- URL: http://arxiv.org/abs/2309.12311v1
- Date: Thu, 21 Sep 2023 17:59:45 GMT
- Title: LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language
Model as an Agent
- Authors: Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan
Iyengar, David F. Fouhey, Joyce Chai
- Abstract summary: 3D visual grounding is a critical skill for household robots, enabling them to navigate, manipulate objects, and answer questions based on their environment.
We propose LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model (LLM)-based 3D visual grounding pipeline.
Our findings indicate that LLMs significantly improve the grounding capability, especially for complex language queries.
- Score: 23.134180979449823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D visual grounding is a critical skill for household robots, enabling them
to navigate, manipulate objects, and answer questions based on their
environment. While existing approaches often rely on extensive labeled data or
exhibit limitations in handling complex language queries, we propose
LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model
(LLM)-based 3D visual grounding pipeline. LLM-Grounder utilizes an LLM to
decompose complex natural language queries into semantic constituents and
employs a visual grounding tool, such as OpenScene or LERF, to identify objects
in a 3D scene. The LLM then evaluates the spatial and commonsense relations
among the proposed objects to make a final grounding decision. Our method does
not require any labeled training data and can generalize to novel 3D scenes and
arbitrary text queries. We evaluate LLM-Grounder on the ScanRefer benchmark and
demonstrate state-of-the-art zero-shot grounding accuracy. Our findings
indicate that LLMs significantly improve the grounding capability, especially
for complex language queries, making LLM-Grounder an effective approach for 3D
vision-language tasks in robotics. Videos and interactive demos can be found on
the project website https://chat-with-nerf.github.io/ .
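As a rough illustration of the pipeline described in the abstract, the sketch below shows a minimal agent loop in Python: an LLM decomposes the query into semantic constituents, an open-vocabulary grounding tool proposes candidate objects with 3D boxes, and the LLM then reasons over the candidates to select a target. The function names (`llm_complete`, `grounding_tool`) and the JSON prompt format are illustrative assumptions, not the authors' actual interface.

```python
# Minimal sketch of an LLM-as-agent grounding loop. Function names and prompt
# formats are hypothetical; the actual LLM-Grounder implementation may differ.
import json

def llm_complete(prompt: str) -> str:
    """Placeholder for a completion call to an LLM (e.g., GPT-4)."""
    raise NotImplementedError

def grounding_tool(noun_phrase: str) -> list[dict]:
    """Placeholder for an open-vocabulary 3D grounder such as OpenScene or LERF.
    Returns candidates like {"id": 0, "center": [x, y, z], "extent": [dx, dy, dz]}."""
    raise NotImplementedError

def ground(query: str) -> dict:
    # 1) Decompose the natural-language query into semantic constituents.
    plan = json.loads(llm_complete(
        "Split this 3D grounding query into JSON with keys 'target' and "
        f"'landmarks' (lists of noun phrases): {query!r}"
    ))

    # 2) Ask the visual grounding tool for candidates for each constituent.
    candidates = {phrase: grounding_tool(phrase)
                  for phrase in plan["target"] + plan["landmarks"]}

    # 3) Let the LLM reason over spatial/commonsense relations among candidates
    #    and return the id of the object that best satisfies the query.
    decision = llm_complete(
        f"Query: {query!r}\nCandidates with 3D boxes: {json.dumps(candidates)}\n"
        "Reply with only the id of the best-matching target object."
    )
    target = plan["target"][0]
    chosen = int(decision.strip())
    return next(c for c in candidates[target] if c["id"] == chosen)
```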
Related papers
- LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models [62.85566496673856]
This work explores expanding the capabilities of large language models (LLMs) pretrained on text to generate 3D meshes within a unified model.
A primary challenge is effectively tokenizing 3D mesh data into discrete tokens that LLMs can process seamlessly.
Our work is the first to demonstrate that LLMs can be fine-tuned to acquire complex spatial knowledge for 3D mesh generation in a text-based format.
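To make the mesh-tokenization idea above concrete, here is a minimal sketch of serializing a triangle mesh into OBJ-style plain text that a text-pretrained LLM can tokenize; the quantization scheme and serialization format are assumptions for illustration, not the format used by LLaMA-Mesh.

```python
# Illustrative sketch only: serialize a triangle mesh as OBJ-style text that a
# text-pretrained LLM can tokenize. The actual LLaMA-Mesh format may differ.

def mesh_to_text(vertices, faces, bins: int = 64) -> str:
    """Quantize vertex coordinates into integer bins and emit OBJ-like lines."""
    coords = [c for v in vertices for c in v]
    lo, hi = min(coords), max(coords)
    scale = (bins - 1) / (hi - lo) if hi > lo else 1.0

    lines = []
    for x, y, z in vertices:
        qx, qy, qz = (round((c - lo) * scale) for c in (x, y, z))
        lines.append(f"v {qx} {qy} {qz}")
    for a, b, c in faces:                           # 0-indexed triangle indices
        lines.append(f"f {a + 1} {b + 1} {c + 1}")  # OBJ faces are 1-indexed
    return "\n".join(lines)

# A single triangle becomes a short text sequence an LLM tokenizer can consume.
print(mesh_to_text([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [(0, 1, 2)]))
```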
arXiv Detail & Related papers (2024-11-14T17:08:23Z)
- VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding [57.04804711488706]
3D visual grounding is crucial for robots, requiring integration of natural language and 3D scene understanding.
We present VLM-Grounder, a novel framework using vision-language models (VLMs) for zero-shot 3D visual grounding based solely on 2D images.
arXiv Detail & Related papers (2024-10-17T17:59:55Z)
- Towards Open-World Grasping with Large Vision-Language Models [5.317624228510749]
An open-world grasping system should be able to combine high-level contextual reasoning with low-level physical-geometric reasoning.
We propose OWG, an open-world grasping pipeline that combines vision-language models with segmentation and grasp synthesis models.
We evaluate OWG on cluttered indoor scene datasets to showcase its robustness in grounding objects from open-ended language.
arXiv Detail & Related papers (2024-06-26T19:42:08Z)
- VP-LLM: Text-Driven 3D Volume Completion with Large Language Models through Patchification [56.211321810408194]
Large language models (LLMs) have shown great potential in multi-modal understanding and generation tasks.
We present Volume Patch LLM (VP-LLM), which leverages LLMs to perform conditional 3D completion in a single forward pass.
Our results demonstrate a strong ability of LLMs to interpret complex text instructions and understand 3D objects, surpassing state-of-the-art diffusion-based 3D completion models in generation quality.
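As a hedged illustration of the patchification idea in the entry above, the sketch below splits a voxel occupancy grid into fixed-size blocks and flattens each block into a vector, the kind of token-like sequence a language-model backbone could consume; the patch size, shapes, and function name are assumptions, not VP-LLM's actual scheme.

```python
# Hypothetical sketch of "patchifying" a voxel grid into a flat sequence of
# patch vectors. The real VP-LLM patchification and embedding are not shown here.
import numpy as np

def patchify_volume(volume: np.ndarray, patch: int = 4) -> np.ndarray:
    """Split a (D, H, W) occupancy grid into non-overlapping patch^3 blocks,
    returned with shape (num_patches, patch**3)."""
    d, h, w = volume.shape
    assert d % patch == h % patch == w % patch == 0, "grid must divide evenly"
    blocks = volume.reshape(d // patch, patch, h // patch, patch, w // patch, patch)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5)   # gather the patch dims last
    return blocks.reshape(-1, patch ** 3)

vol = (np.random.rand(32, 32, 32) > 0.7).astype(np.float32)
tokens = patchify_volume(vol)   # (512, 64): one row per patch "token"
print(tokens.shape)
```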
arXiv Detail & Related papers (2024-06-08T18:17:09Z)
- Grounded 3D-LLM with Referent Tokens [58.890058568493096]
We propose Grounded 3D-LLM to consolidate various 3D vision tasks within a unified generative framework.
The model uses scene referent tokens as special noun phrases to reference 3D scenes.
Per-task instruction-following templates are employed to ensure naturalness and diversity when translating 3D vision tasks into language formats.
arXiv Detail & Related papers (2024-05-16T18:03:41Z)
- When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models [113.18524940863841]
This survey provides a comprehensive overview of the methodologies enabling large language models to process, understand, and generate 3D data.
Our investigation spans various 3D data representations, from point clouds to Neural Radiance Fields (NeRFs).
It examines their integration with LLMs for tasks such as 3D scene understanding, captioning, question-answering, and dialogue.
arXiv Detail & Related papers (2024-05-16T16:59:58Z)
- LERF: Language Embedded Radiance Fields [35.925752853115476]
Language Embedded Radiance Fields (LERF) is a method for grounding language embeddings from off-the-shelf models like CLIP into NeRF.
LERF learns a dense, multi-scale language field inside NeRF by volume rendering CLIP embeddings along training rays.
After optimization, LERF can extract 3D relevancy maps for a broad range of language prompts interactively in real-time.
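The volume-rendering step described above can be sketched as standard NeRF alpha compositing applied to per-sample language embeddings; the sketch below assumes per-ray arrays of densities, sample spacings, and CLIP features, and is not the actual LERF implementation.

```python
# Minimal sketch of volume-rendering per-sample language embeddings along a ray
# with standard NeRF compositing weights. Not the actual LERF implementation.
import numpy as np

def render_language_embedding(sigmas, deltas, clip_feats):
    """sigmas: (N,) densities; deltas: (N,) sample spacings along the ray;
    clip_feats: (N, C) per-sample language embeddings. Returns a (C,) embedding."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]   # transmittance
    weights = alphas * trans                                         # compositing weights
    feat = (weights[:, None] * clip_feats).sum(axis=0)
    return feat / (np.linalg.norm(feat) + 1e-8)   # normalize for cosine similarity

# A relevancy map is then the cosine similarity between rendered embeddings and
# the CLIP text embedding of a language prompt, evaluated per rendered pixel.
```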
arXiv Detail & Related papers (2023-03-16T17:59:20Z)
- Open-vocabulary Queryable Scene Representations for Real World Planning [56.175724306976505]
Large language models (LLMs) have unlocked new capabilities of task planning from human instructions.
However, prior attempts to apply LLMs to real-world robotic tasks are limited by the lack of grounding in the surrounding scene.
We develop NLMap, an open-vocabulary and queryable scene representation to address this problem.
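A queryable, open-vocabulary scene representation of this kind can be sketched as a map from 3D positions to image-region embeddings that is searched with a text embedding; the class and field names below are assumptions for illustration, not NLMap's actual design.

```python
# Rough sketch of an open-vocabulary, queryable scene representation: objects are
# stored with a position and a CLIP-style embedding, then retrieved by text query.
# This is an assumption-laden illustration, not NLMap's actual data structure.
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneEntry:
    position: np.ndarray      # (3,) location in the map frame
    embedding: np.ndarray     # (C,) unit-norm image/region embedding

class QueryableSceneMap:
    def __init__(self):
        self.entries: list[SceneEntry] = []

    def add(self, position, embedding):
        e = np.asarray(embedding, dtype=np.float32)
        self.entries.append(SceneEntry(np.asarray(position, dtype=np.float32),
                                       e / (np.linalg.norm(e) + 1e-8)))

    def query(self, text_embedding, top_k: int = 3):
        """Return the top-k entries that best match a text embedding
        (e.g., from a CLIP text encoder), for downstream task planning."""
        t = np.asarray(text_embedding, dtype=np.float32)
        t = t / (np.linalg.norm(t) + 1e-8)
        scores = [float(e.embedding @ t) for e in self.entries]
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.entries[i], scores[i]) for i in order]
```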
arXiv Detail & Related papers (2022-09-20T17:29:56Z)