Large language model empowered participatory urban planning
- URL: http://arxiv.org/abs/2402.01698v1
- Date: Wed, 24 Jan 2024 10:50:01 GMT
- Title: Large language model empowered participatory urban planning
- Authors: Zhilun Zhou, Yuming Lin, Yong Li
- Abstract summary: This research introduces an innovative urban planning approach integrating Large Language Models (LLMs) within the participatory process.
The framework, based on the crafted LLM agent, consists of role-play, collaborative generation, and feedback iteration, solving a community-level land-use task catering to 1000 distinct interests.
- Score: 5.402147437950729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Participatory urban planning is the mainstream of modern urban planning and
involves the active engagement of different stakeholders. However, the
traditional participatory paradigm encounters challenges in time and manpower,
while the generative planning tools fail to provide adjustable and inclusive
solutions. This research introduces an innovative urban planning approach
integrating Large Language Models (LLMs) within the participatory process. The
framework, based on the crafted LLM agent, consists of role-play, collaborative
generation, and feedback iteration, solving a community-level land-use task
catering to 1000 distinct interests. Empirical experiments in diverse urban
communities exhibit LLM's adaptability and effectiveness across varied planning
scenarios. Evaluated on four metrics, the framework surpasses human experts in
satisfaction and inclusion and rivals state-of-the-art reinforcement learning
methods in service and ecology. Further analysis shows the advantage
of LLM agents in providing adjustable and inclusive solutions with natural
language reasoning and strong scalability. Building on recent advances in
emulating human behavior for planning, this work envisions both planners and
citizens benefiting from low-cost, efficient LLM agents, which are crucial for
enhancing participation and realizing participatory urban planning.
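The abstract describes a three-stage pipeline: role-play (LLM agents adopt resident profiles), collaborative generation (a planner agent drafts a plan), and feedback iteration (residents comment and the plan is revised). A minimal sketch of that loop is below; the LLM call is stubbed with a placeholder, and all function and variable names are illustrative assumptions, not taken from the paper.

```python
# Sketch of the role-play / collaborative-generation / feedback-iteration loop.
# `llm` is a stand-in for a real model call; every name here is hypothetical.
from dataclasses import dataclass


@dataclass
class Resident:
    name: str
    interest: str  # e.g. "more green space"


def llm(prompt: str) -> str:
    """Canned stand-in for an LLM call (real systems would query a model)."""
    if "propose" in prompt:
        return "allocate parcel A to a park"
    return "support" if "green" in prompt else "request revision"


def role_play(residents):
    # Stage 1: each resident becomes an LLM agent seeded with its profile.
    return {r.name: f"You are {r.name}, who wants {r.interest}." for r in residents}


def collaborative_generation(profiles):
    # Stage 2: a planner agent drafts a land-use plan from the profiles.
    return llm("propose a land-use plan for: " + "; ".join(profiles.values()))


def feedback_iteration(plan, profiles, rounds=3):
    # Stage 3: residents comment; the planner revises until all support it.
    for _ in range(rounds):
        comments = {n: llm(p + f" Comment on: {plan}") for n, p in profiles.items()}
        if all(c == "support" for c in comments.values()):
            break
        plan = llm(f"propose a revision of '{plan}' addressing: {comments}")
    return plan


residents = [Resident("Alice", "more green space"), Resident("Bob", "green corridors")]
profiles = role_play(residents)
plan = feedback_iteration(collaborative_generation(profiles), profiles)
```

The closed loop mirrors the paper's framing at a toy scale; scaling the resident list toward the 1000 distinct interests mentioned above only changes the size of `residents`, not the loop structure.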
Related papers
- Zero-shot Robotic Manipulation with Language-guided Instruction and Formal Task Planning [16.89900521727246]
We propose an innovative language-guided symbolic task planning (LM-SymOpt) framework with optimization.
It is the first expert-free planning framework since we combine the world knowledge from Large Language Models with formal reasoning.
Our experimental results show that LM-SymOpt outperforms existing LLM-based planning approaches.
arXiv Detail & Related papers (2025-01-25T13:33:22Z)
- Planning, Living and Judging: A Multi-agent LLM-based Framework for Cyclical Urban Planning [5.9423583597394325]
Urban regeneration presents significant challenges within the context of urbanization.
We propose Cyclical Urban Planning (CUP), a new paradigm that continuously generates, evaluates, and refines urban plans in a closed-loop.
Experiments on the real-world dataset demonstrate the effectiveness of our framework as a continuous and adaptive planning process.
arXiv Detail & Related papers (2024-12-29T15:43:25Z)
- EgoPlan-Bench2: A Benchmark for Multimodal Large Language Model Planning in Real-World Scenarios [53.26658545922884]
We introduce EgoPlan-Bench2, a benchmark designed to assess the planning capabilities of MLLMs across a wide range of real-world scenarios.
We evaluate 21 competitive MLLMs and provide an in-depth analysis of their limitations, revealing that they face significant challenges in real-world planning.
Our approach enhances the performance of GPT-4V by 10.24 on EgoPlan-Bench2 without additional training.
arXiv Detail & Related papers (2024-12-05T18:57:23Z)
- Exploring and Benchmarking the Planning Capabilities of Large Language Models [57.23454975238014]
This work lays the foundations for improving the planning capabilities of large language models (LLMs).
We construct a comprehensive benchmark suite encompassing both classical planning benchmarks and natural language scenarios.
We investigate the use of many-shot in-context learning to enhance LLM planning, exploring the relationship between increased context length and improved planning performance.
arXiv Detail & Related papers (2024-06-18T22:57:06Z)
- Meta Reasoning for Large Language Models [58.87183757029041]
We introduce Meta-Reasoning Prompting (MRP), a novel and efficient system prompting method for large language models (LLMs).
MRP guides LLMs to dynamically select and apply different reasoning methods based on the specific requirements of each task.
We evaluate the effectiveness of MRP through comprehensive benchmarks.
arXiv Detail & Related papers (2024-06-17T16:14:11Z)
- A Human-Like Reasoning Framework for Multi-Phases Planning Task with Large Language Models [15.874604623294427]
The multi-phase planning problem involves multiple interconnected stages, such as outlining, information gathering, and planning.
Existing reasoning approaches have struggled to effectively address this complex task.
Our research aims to address this challenge by developing a human-like planning framework for LLM agents.
arXiv Detail & Related papers (2024-05-28T14:13:32Z)
- Large Language Model for Participatory Urban Planning [22.245571438540512]
Large Language Models (LLMs) have shown considerable ability to simulate human-like agents.
We introduce an LLM-based multi-agent collaboration framework for participatory urban planning.
Our method achieves state-of-the-art performance on resident satisfaction and inclusion metrics.
arXiv Detail & Related papers (2024-02-27T02:47:50Z)
- EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning [84.6451394629312]
We introduce EgoPlan-Bench, a benchmark to evaluate the planning abilities of MLLMs in real-world scenarios.
We show that EgoPlan-Bench poses significant challenges, highlighting a substantial scope for improvement in MLLMs to achieve human-level task planning.
We also present EgoPlan-IT, a specialized instruction-tuning dataset that effectively enhances model performance on EgoPlan-Bench.
arXiv Detail & Related papers (2023-12-11T03:35:58Z)
- AI Agent as Urban Planner: Steering Stakeholder Dynamics in Urban Planning via Consensus-based Multi-Agent Reinforcement Learning [8.363841553742912]
We introduce a Consensus-based Multi-Agent Reinforcement Learning framework for real-world land use readjustment.
This framework serves participatory urban planning, allowing diverse intelligent agents as stakeholder representatives to vote for preferred land use types.
By integrating Multi-Agent Reinforcement Learning, our framework ensures that participatory urban planning decisions are more dynamic and adaptive to evolving community needs.
arXiv Detail & Related papers (2023-10-25T17:04:11Z)
- Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z)
- AdaPlanner: Adaptive Planning from Feedback with Language Models [56.367020818139665]
Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks.
We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback.
To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities.
arXiv Detail & Related papers (2023-05-26T05:52:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.