Chat2Layout: Interactive 3D Furniture Layout with a Multimodal LLM
- URL: http://arxiv.org/abs/2407.21333v1
- Date: Wed, 31 Jul 2024 04:49:46 GMT
- Title: Chat2Layout: Interactive 3D Furniture Layout with a Multimodal LLM
- Authors: Can Wang, Hongliang Zhong, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
- Abstract summary: We introduce a novel interactive furniture layout generation system that extends the functionality of multimodal large language models (MLLMs).
Within this framework, we present a novel training-free visual prompting mechanism that assists MLLMs in reasoning about plausible layout plans.
Experimental results demonstrate that our approach facilitates language-interactive generation and arrangement for diverse and complex 3D furniture.
- Score: 37.640412098917636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic furniture layout is long desired for convenient interior design. Leveraging the remarkable visual reasoning capabilities of multimodal large language models (MLLMs), recent methods address layout generation in a static manner, lacking the feedback-driven refinement essential for interactive user engagement. We introduce Chat2Layout, a novel interactive furniture layout generation system that extends the functionality of MLLMs into the realm of interactive layout design. To achieve this, we establish a unified vision-question paradigm for in-context learning, enabling seamless communication with MLLMs to steer their behavior without altering model weights. Within this framework, we present a novel training-free visual prompting mechanism. This involves a visual-text prompting technique that assists MLLMs in reasoning about plausible layout plans, followed by an Offline-to-Online search (O2O-Search) method, which automatically identifies the minimal set of informative references to provide exemplars for visual-text prompting. By employing an agent system with MLLMs as the core controller, we enable bidirectional interaction. The agent not only comprehends the 3D environment and user requirements through linguistic and visual perception but also plans tasks and reasons about actions to generate and arrange furniture within the virtual space. Furthermore, the agent iteratively updates based on visual feedback from execution results. Experimental results demonstrate that our approach facilitates language-interactive generation and arrangement for diverse and complex 3D furniture.
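As a rough illustration of the feedback-driven agent loop described in the abstract, the following Python sketch shows how an MLLM controller could alternate between planning, execution, and visual feedback. The callables `query_mllm`, `render_scene`, and `execute_plan`, the prompt wording, and the yes/no stopping rule are hypothetical placeholders for illustration only, not the authors' released implementation; the O2O-Search exemplar selection is assumed to have produced `exemplars` before the loop starts.

```python
from typing import Any, Callable


def interactive_layout_loop(
    user_request: str,
    scene: Any,
    exemplars: list,
    query_mllm: Callable[[str, Any], str],    # (text prompt, image) -> MLLM response
    render_scene: Callable[[Any], Any],       # 3D scene -> rendered image
    execute_plan: Callable[[Any, str], None], # applies a layout plan to the scene
    max_rounds: int = 5,
) -> Any:
    """Feedback-driven layout generation: prompt the MLLM with visual-text
    exemplars, execute its plan in the 3D scene, then re-prompt with the
    rendered result until the MLLM judges the layout acceptable."""
    plan = ""
    for _ in range(max_rounds):
        image = render_scene(scene)  # visual perception of the current 3D environment
        prompt = (
            "You arrange furniture in a 3D room.\n"
            f"Exemplar layouts: {exemplars}\n"
            f"Previous plan: {plan or 'none'}\n"
            f"User request: {user_request}\n"
            "Return an updated layout plan."
        )
        plan = query_mllm(prompt, image)        # MLLM acts as the core controller
        execute_plan(scene, plan)               # generate/arrange furniture in the scene
        feedback_image = render_scene(scene)    # visual feedback from execution results
        verdict = query_mllm(
            "Does this layout satisfy the user request? Answer yes or no.",
            feedback_image,
        )
        if verdict.strip().lower().startswith("yes"):
            break  # stop once the MLLM deems the layout acceptable
    return scene
```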
Related papers
- LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language Models [57.92316645992816]
Open-universe 3D layout generation arranges unlabeled 3D assets conditioned on language instruction.
Large language models (LLMs) struggle with generating physically plausible 3D scenes and adhering to input instructions.
We introduce LayoutVLM, a framework and scene layout representation that exploits the semantic knowledge of Vision-Language Models.
arXiv Detail & Related papers (2024-12-03T06:15:04Z)
- Visual Prompting in Multimodal Large Language Models: A Survey [95.75225825537528]
Multimodal large language models (MLLMs) equip pre-trained large-language models (LLMs) with visual capabilities.
Visual prompting has emerged for more fine-grained and free-form visual instructions.
This paper focuses on visual prompting, prompt generation, compositional reasoning, and prompt learning.
arXiv Detail & Related papers (2024-09-05T08:47:34Z)
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Ovis: Structural Embedding Alignment for Multimodal Large Language Model [41.32013722697081]
Ovis is a novel MLLM architecture designed to structurally align visual and textual embeddings.
Ovis integrates an additional learnable visual embedding table into the visual encoder's process.
Empirical evaluations on various multimodal benchmarks show that Ovis outperforms open-source MLLMs.
arXiv Detail & Related papers (2024-05-31T13:59:18Z)
- Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want [58.091825321168514]
We introduce the Draw-and-Understand project: a new model, a multi-domain dataset, and a challenging benchmark for visual prompting.
Specifically, we propose a new end-to-end trained Multimodal Large Language Model (MLLM) that connects a vision encoder, a visual prompt encoder and an LLM.
To advance visual prompting research for MLLMs, we introduce MDVP-Data and MDVP-Bench.
arXiv Detail & Related papers (2024-03-29T16:26:20Z)
- ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning [24.87615615489849]
We present precise referring instructions that utilize diverse reference representations, such as points and boxes, as referring prompts to refer to regions of interest.
We propose ChatSpot, a unified end-to-end multimodal large language model that supports diverse forms of interactivity including mouse clicks, drag-and-drop, and drawing boxes.
arXiv Detail & Related papers (2023-07-18T17:56:06Z)
- Towards Language-guided Interactive 3D Generation: LLMs as Layout Interpreter with Generative Feedback [20.151147653552155]
Large Language Models (LLMs) have demonstrated impressive reasoning, conversational, and zero-shot generation abilities.
We propose a novel language-guided interactive 3D generation system, dubbed LI3D, that integrates LLMs as a 3D layout interpreter.
Our system also incorporates LLaVA, a large language and vision assistant, to provide generative feedback from the visual aspect for improving the visual quality of generated content.
arXiv Detail & Related papers (2023-05-25T07:43:39Z)
- MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action [96.33509740612486]
MM-REACT is a system paradigm that integrates ChatGPT with a pool of vision experts to achieve multimodal reasoning and action.
MM-REACT's prompt design allows language models to accept, associate, and process multimodal information.
arXiv Detail & Related papers (2023-03-20T18:31:47Z)