MineAnyBuild: Benchmarking Spatial Planning for Open-world AI Agents
- URL: http://arxiv.org/abs/2505.20148v2
- Date: Tue, 27 May 2025 13:22:28 GMT
- Title: MineAnyBuild: Benchmarking Spatial Planning for Open-world AI Agents
- Authors: Ziming Wei, Bingqian Lin, Zijian Jiao, Yunshuang Nie, Liang Ma, Yuecheng Liu, Yuzheng Zhuang, Xiaodan Liang
- Abstract summary: We build a benchmark called MineAnyBuild to evaluate the spatial planning ability of open-world AI agents in the Minecraft game. MineAnyBuild requires an agent to generate executable architecture building plans based on the given multi-modal human instructions. It involves 4,000 curated spatial planning tasks and also provides a paradigm for infinitely expandable data collection by utilizing rich player-generated content.
- Score: 43.585452978583135
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Spatial planning is a crucial component of spatial intelligence, requiring the understanding and planning of object arrangements in space. AI agents with spatial planning ability can better adapt to various real-world applications, including robotic manipulation, automatic assembly, and urban planning. Recent works have attempted to construct benchmarks for evaluating the spatial intelligence of Multimodal Large Language Models (MLLMs). Nevertheless, these benchmarks primarily focus on spatial reasoning in typical Visual Question-Answering (VQA) forms, which suffer from the gap between abstract spatial understanding and concrete task execution. In this work, we take a step further and build a comprehensive benchmark called MineAnyBuild, aiming to evaluate the spatial planning ability of open-world AI agents in the Minecraft game. Specifically, MineAnyBuild requires an agent to generate executable architecture building plans based on given multi-modal human instructions. It involves 4,000 curated spatial planning tasks and also provides a paradigm for infinitely expandable data collection by utilizing rich player-generated content. MineAnyBuild evaluates spatial planning through four core supporting dimensions: spatial understanding, spatial reasoning, creativity, and spatial commonsense. Based on MineAnyBuild, we perform a comprehensive evaluation of existing MLLM-based agents, revealing severe limitations but enormous potential in their spatial planning abilities. We believe MineAnyBuild will open new avenues for the evaluation of spatial intelligence and help promote further development of open-world AI agents capable of spatial planning.
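The abstract describes agents emitting "executable architecture building plans" from multi-modal instructions. As a purely illustrative sketch (the schema below, including the `Placement` record and block names, is hypothetical and not MineAnyBuild's actual format), such a plan can be thought of as an ordered list of block placements that a downstream executor carries out in the Minecraft world:

```python
# Hypothetical sketch of an "executable building plan": an ordered list of
# block placements. The schema (field names, coordinate convention, block
# identifiers) is illustrative, NOT the benchmark's actual format.
from dataclasses import dataclass


@dataclass(frozen=True)
class Placement:
    block: str  # block identifier, e.g. "stone"
    x: int
    y: int  # vertical axis, following Minecraft convention
    z: int


def plan_footprint(plan: list[Placement]) -> tuple[int, int, int]:
    """Bounding-box dimensions (dx, dy, dz) of a plan, in blocks."""
    xs = [p.x for p in plan]
    ys = [p.y for p in plan]
    zs = [p.z for p in plan]
    return (max(xs) - min(xs) + 1,
            max(ys) - min(ys) + 1,
            max(zs) - min(zs) + 1)


# Toy plan: a 3x3 single-layer stone floor.
plan = [Placement("stone", x, 0, z) for x in range(3) for z in range(3)]
print(plan_footprint(plan))  # (3, 1, 3)
```

A checker like `plan_footprint` hints at why plan-generation is harder to evaluate than VQA: the output is a concrete, spatially consistent artifact that can be verified programmatically rather than a free-form answer.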
Related papers
- Warehouse Spatial Question Answering with LLM Agent [18.821295196340383]
We propose an LLM agent system with strong and advanced spatial reasoning ability. Our system integrates multiple tools that allow the LLM agent to conduct spatial reasoning and interact with API tools. Our system achieves high accuracy and efficiency in tasks such as object retrieval, counting, and distance estimation.
arXiv Detail & Related papers (2025-07-14T20:05:55Z) - SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding [64.15606979785355]
Multimodal large language models (MLLMs) have achieved impressive success in question-answering tasks, yet their capabilities for spatial understanding are less explored. This work investigates a critical question: do existing MLLMs possess 3D spatial perception and understanding abilities?
arXiv Detail & Related papers (2025-05-22T17:59:03Z) - SpatialLLM: From Multi-modality Data to Urban Spatial Intelligence [13.810192130250744]
The core of SpatialLLM lies in constructing detailed and structured scene descriptions from raw spatial data to prompt pre-trained LLMs for scene-based analysis. Extensive experiments show that, with our designs, pre-trained LLMs can accurately perceive spatial distribution information. We argue that multi-field knowledge, context length, and reasoning ability are key factors influencing LLM performance in urban analysis.
arXiv Detail & Related papers (2025-05-19T04:53:41Z) - OmniGeo: Towards a Multimodal Large Language Models for Geospatial Artificial Intelligence [51.0456395687016]
Multimodal large language models (MLLMs) have opened new frontiers in artificial intelligence. We propose an MLLM (OmniGeo) tailored to geospatial applications. By combining the strengths of natural language understanding and spatial reasoning, our model enhances instruction following and the accuracy of GeoAI systems.
arXiv Detail & Related papers (2025-03-20T16:45:48Z) - CityEQA: A Hierarchical LLM Agent on Embodied Question Answering Benchmark in City Space [35.223263448229716]
Embodied Question Answering (EQA) has primarily focused on indoor environments. We introduce CityEQA, a new task where an embodied agent answers open-vocabulary questions through active exploration in dynamic city spaces. We present CityEQA-EC, the first benchmark dataset featuring 1,412 human-annotated tasks. We also propose Planner-Manager-Actor (PMA), a novel agent tailored for CityEQA.
arXiv Detail & Related papers (2025-02-18T04:36:15Z) - EmbodiedCity: A Benchmark Platform for Embodied Agent in Real-world City Environment [38.14321677323052]
Embodied artificial intelligence emphasizes the role of an agent's body in generating human-like behaviors.
In this paper, we construct a benchmark platform for embodied intelligence evaluation in real-world city environments.
arXiv Detail & Related papers (2024-10-12T17:49:26Z) - TravelPlanner: A Benchmark for Real-World Planning with Language Agents [63.199454024966506]
We propose TravelPlanner, a new planning benchmark that focuses on travel planning, a common real-world planning scenario.
It provides a rich sandbox environment, various tools for accessing nearly four million data records, and 1,225 meticulously curated planning intents and reference plans.
Comprehensive evaluations show that current language agents are not yet capable of handling such complex planning tasks; even GPT-4 achieves a success rate of only 0.6%.
arXiv Detail & Related papers (2024-02-02T18:39:51Z) - A systematic review of geospatial location embedding approaches in large language models: A path to spatial AI systems [0.0]
Geospatial Location Embedding (GLE) helps a Large Language Model (LLM) assimilate and analyze spatial data.
GLEs signal the need for a Spatial Foundation/Language Model (SLM) that embeds spatial knowing within the model architecture.
arXiv Detail & Related papers (2024-01-12T12:43:33Z) - Embodied Task Planning with Large Language Models [86.63533340293361]
We propose a TAsk Planning Agent (TaPA) for embodied tasks, enabling grounded planning with physical scene constraints.
During inference, we discover the objects in the scene by extending open-vocabulary object detectors to multi-view RGB images collected in different achievable locations.
Experimental results show that the plans generated by our TaPA framework achieve a higher success rate than LLaVA and GPT-3.5 by a sizable margin.
arXiv Detail & Related papers (2023-07-04T17:58:25Z) - PackIt: A Virtual Environment for Geometric Planning [68.79816936618454]
PackIt is a virtual environment to evaluate and potentially learn the ability to do geometric planning.
We construct a set of challenging packing tasks using an evolutionary algorithm.
arXiv Detail & Related papers (2020-07-21T22:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.