Neuro-Symbolic Causal Language Planning with Commonsense Prompting
- URL: http://arxiv.org/abs/2206.02928v1
- Date: Mon, 6 Jun 2022 22:09:52 GMT
- Title: Neuro-Symbolic Causal Language Planning with Commonsense Prompting
- Authors: Yujie Lu, Weixi Feng, Wanrong Zhu, Wenda Xu, Xin Eric Wang, Miguel
Eckstein, William Yang Wang
- Abstract summary: Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from the LLMs with commonsense-infused prompting.
- Score: 67.06667162430118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language planning aims to implement complex high-level goals by
decomposing them into sequences of simpler low-level steps. Such procedural reasoning ability is
essential for applications such as household robots and virtual assistants.
Although language planning is a basic skill set for humans in daily life, it
remains a challenge for large language models (LLMs) that lack deep-level
commonsense knowledge in the real world. Previous methods require either manual
exemplars or annotated programs to acquire such ability from LLMs. In contrast,
this paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits
procedural knowledge from the LLMs with commonsense-infused prompting.
Pre-trained knowledge in LLMs is essentially an unobserved confounder that
causes spurious correlations between tasks and action plans. Through the lens
of a Structural Causal Model (SCM), we propose an effective strategy in CLAP to
construct prompts as a causal intervention toward our SCM. Using graph sampling
techniques and symbolic program executors, our strategy formalizes the
structured causal prompts from commonsense knowledge bases. CLAP obtains
state-of-the-art performance on WikiHow and RobotHow, achieving a relative
improvement of 5.28% in human evaluations under the counterfactual setting.
These results indicate the superiority of CLAP in causal language planning, both
semantically and sequentially.
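The abstract's core idea (sampling a commonsense subgraph and formalizing it into a structured prompt that acts as a causal intervention) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy knowledge graph, the `sample_subgraph` walk, and the prompt template are all hypothetical stand-ins for a real knowledge base such as ConceptNet and the paper's symbolic program executors.

```python
# Hypothetical toy commonsense knowledge base: node -> related-concept edges.
# A real system would sample such edges from a large KB (e.g. ConceptNet).
KNOWLEDGE_GRAPH = {
    "make coffee": ["kettle", "coffee grounds", "mug"],
    "kettle": ["boil water"],
    "coffee grounds": ["filter"],
}

def sample_subgraph(task, depth=2):
    """Breadth-first walk from the task node, collecting commonsense concepts."""
    concepts, frontier = [], [task]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor in KNOWLEDGE_GRAPH.get(node, []):
                concepts.append(neighbor)
                next_frontier.append(neighbor)
        frontier = next_frontier
    return concepts

def build_causal_prompt(task):
    """Prepend sampled concepts so the prompt conditions the LLM on external
    commonsense rather than only on its (confounded) pre-trained knowledge."""
    hints = ", ".join(sample_subgraph(task))
    return f"Relevant commonsense: {hints}.\nGoal: {task}.\nStep 1:"

print(build_causal_prompt("make coffee"))
```

Intuitively, injecting KB-derived concepts plays the role of the intervention in the SCM framing: the plan is conditioned on explicitly supplied knowledge, weakening the spurious task-to-plan correlations induced by pre-training.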
Related papers
- Language Agents Meet Causality -- Bridging LLMs and Causal World Models [50.79984529172807]
We propose a framework that integrates causal representation learning with large language models.
This framework learns a causal world model, with causal variables linked to natural language expressions.
We evaluate the framework on causal inference and planning tasks across temporal scales and environmental complexities.
arXiv Detail & Related papers (2024-10-25T18:36:37Z)
- Scaling Up Natural Language Understanding for Multi-Robots Through the Lens of Hierarchy [8.180994118420053]
Long-horizon planning is hindered by challenges such as uncertainty accumulation, computational complexity, delayed rewards and incomplete information.
This work proposes an approach to exploit the task hierarchy from human instructions to facilitate multi-robot planning.
arXiv Detail & Related papers (2024-08-15T14:46:13Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show that a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- CLMASP: Coupling Large Language Models with Answer Set Programming for Robotic Task Planning [9.544073786800706]
Large Language Models (LLMs) possess extensive foundational knowledge and moderate reasoning abilities.
It is challenging to ground an LLM-generated plan so that it is executable by a specified robot under certain restrictions.
This paper introduces CLMASP, an approach that couples LLMs with Answer Set Programming (ASP) to overcome the limitations.
arXiv Detail & Related papers (2024-06-05T15:21:44Z)
- Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z)
- KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z)
- Learning adaptive planning representations with natural language guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z)
- Conformal Temporal Logic Planning using Large Language Models [27.571083913525563]
We consider missions that require accomplishing multiple high-level sub-tasks expressed in natural language (NL), in a temporal and logical order.
Our goal is to design plans, defined as sequences of robot actions, that accomplish these NL tasks.
We propose HERACLEs, a hierarchical neuro-symbolic planner that relies on a novel integration of existing symbolic planners.
arXiv Detail & Related papers (2023-09-18T19:05:25Z)
- Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions [53.21504989297547]
We propose a new method that combines a language model and reinforcement learning for the task of building objects in a Minecraft-like environment.
Our method first generates a set of consistently achievable sub-goals from the instructions and then completes associated sub-tasks with a pre-trained RL policy.
arXiv Detail & Related papers (2022-11-01T18:30:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.