Text-Based Action-Model Acquisition for Planning
- URL: http://arxiv.org/abs/2202.08373v1
- Date: Tue, 15 Feb 2022 02:23:31 GMT
- Title: Text-Based Action-Model Acquisition for Planning
- Authors: Kebing Jin, Huaixun Chen, Hankz Hankui Zhuo
- Abstract summary: We propose a novel approach to learning action models from natural language texts by integrating Constraint Satisfaction and Natural Language Processing techniques.
Specifically, we first build a novel language model to extract plan traces from texts, and then build a set of constraints to generate action models based on the extracted plan traces.
- Score: 13.110360825201044
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Although there have been approaches that are capable of learning action
models from plan traces, there is no work on learning action models from
textual observations, which are pervasive and much easier to collect from
real-world applications compared to plan traces. In this paper we propose a
novel approach to learning action models from natural language texts by
integrating Constraint Satisfaction and Natural Language Processing techniques.
Specifically, we first build a novel language model to extract plan traces from
texts, and then build a set of constraints to generate action models based on
the extracted plan traces. After that, we iteratively improve the language
model and the constraints until the language model and action models converge.
We empirically show that our approach is both effective and efficient.
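
As a rough illustration of the constraint-solving step, here is a minimal, self-contained sketch: given plan traces in which each step records the observed state before and after an action, it intersects and diffs those states to induce STRIPS-style action models. Everything here, from the function name `induce_action_models` to the trace format and the pick-up example, is hypothetical; the paper's actual constraint formulation and solver are not specified in the abstract.

```python
from collections import defaultdict

def induce_action_models(traces):
    pre, add, delete = {}, defaultdict(set), defaultdict(set)
    for trace in traces:
        for action, before, after in trace:
            # Preconditions: facts observed to hold before *every* occurrence.
            pre[action] = set(before) if action not in pre else pre[action] & set(before)
            # Effects: facts gained (add list) or lost (delete list).
            add[action] |= set(after) - set(before)
            delete[action] |= set(before) - set(after)
    return {a: {"pre": pre[a], "add": add[a], "del": delete[a]} for a in pre}

# Two toy observations of the same action; the induced model keeps only
# the preconditions common to both.
traces = [
    [("pickup-ball", {"hand-empty", "ball-on-floor"}, {"holding-ball"})],
    [("pickup-ball", {"hand-empty", "ball-on-floor", "door-open"},
      {"holding-ball", "door-open"})],
]
print(induce_action_models(traces))
```

The iterative part of the approach would wrap a step like this in a loop with the trace-extracting language model, re-running both until neither changes.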
Related papers
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner [51.77263363285369]
We present an approach called Dialogue Action Tokens that adapts language model agents to plan goal-directed dialogues.
The core idea is to treat each utterance as an action, thereby converting dialogues into games where existing approaches such as reinforcement learning can be applied.
arXiv Detail & Related papers (2024-06-17T18:01:32Z)
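
To make the "utterance as action" framing of Dialogue Action Tokens concrete, a minimal environment-style wrapper might look like the sketch below. All names (`DialogueEnv`, `user_model`, `reward_fn`) are illustrative assumptions, not the paper's interface.

```python
# Hypothetical sketch: expose a dialogue as an RL-style environment in
# which each agent "action" is a whole utterance.
class DialogueEnv:
    def __init__(self, user_model, reward_fn, max_turns=8):
        self.user_model = user_model  # simulates the other speaker
        self.reward_fn = reward_fn    # scores progress toward the goal
        self.max_turns = max_turns

    def reset(self):
        self.history = []
        return self.history

    def step(self, utterance):
        self.history.append(("agent", utterance))
        self.history.append(("user", self.user_model(self.history)))
        done = len(self.history) >= 2 * self.max_turns
        return self.history, self.reward_fn(self.history), done

# Usage with trivial stand-ins:
env = DialogueEnv(user_model=lambda h: "ok", reward_fn=lambda h: 0.0)
env.reset()
print(env.step("Hello! Could we discuss the schedule?"))
```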
- Learning to Plan for Language Modeling from Unlabeled Data [23.042650737356496]
We train a module for planning the future writing process via a self-supervised learning objective.
Given the textual context, this planning module learns to predict future abstract writing actions, which correspond to centroids in a clustered text embedding space.
arXiv Detail & Related papers (2024-03-31T09:04:01Z)
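
A toy rendering of the "writing actions as centroids" idea from the paper above: cluster sentence embeddings with k-means, then label future text by its nearest centroid, the target a planning module would learn to predict. The embedding source, the value of k, and the library choice are all assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

def make_action_space(sentence_embeddings, k=8, seed=0):
    # Each centroid stands in for one abstract "writing action".
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(sentence_embeddings)

# Random stand-in embeddings; a real setup would embed actual sentences.
embs = np.random.default_rng(0).normal(size=(200, 32))
km = make_action_space(embs)
# The planning module's training target: which action the next sentence falls in.
print(km.predict(embs[:5]))
```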
- PARADISE: Evaluating Implicit Planning Skills of Language Models with Procedural Warnings and Tips Dataset [0.0]
We present PARADISE, an abductive reasoning task using Q&A format on practical procedural text sourced from wikiHow.
It involves warning and tip inference tasks directly associated with goals, excluding intermediary steps, with the aim of testing the ability of the models to infer implicit knowledge of the plan solely from the given goal.
Our experiments, utilizing fine-tuned language models and zero-shot prompting, reveal the effectiveness of task-specific small models over large language models in most scenarios.
arXiv Detail & Related papers (2024-03-05T18:01:59Z)
- Automated Action Model Acquisition from Narrative Texts [13.449750550301992]
We present NaRuto, a system that extracts structured events from narrative text and generates planning-language-style action models.
Experimental results in classical narrative planning domains show that NaRuto can generate action models of significantly better quality than existing fully automated methods.
arXiv Detail & Related papers (2023-07-17T07:04:31Z)
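
For a flavor of what "planning-language-style action models" look like, here is a hypothetical helper that renders a structured event as a PDDL action skeleton. The event schema and field names are invented for illustration and are not NaRuto's actual representation.

```python
def event_to_pddl(event):
    # Render one structured event as a PDDL-style action skeleton.
    params = " ".join(f"?{a}" for a in event["args"])
    pre = " ".join(f"({p} {params})" for p in event["preconditions"])
    eff = " ".join(f"({e} {params})" for e in event["effects"])
    return (f"(:action {event['verb']}\n"
            f"  :parameters ({params})\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff}))")

print(event_to_pddl({
    "verb": "unlock",
    "args": ["agent", "chest"],
    "preconditions": ["has-key", "locked"],
    "effects": ["opened"],
}))
```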
- PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning [77.03847056008598]
PlaSma is a novel two-pronged approach to endow small language models with procedural knowledge and (constrained) language planning capabilities.
We develop symbolic procedural knowledge distillation to enhance the commonsense knowledge in small language models and an inference-time algorithm to facilitate more structured and accurate reasoning.
arXiv Detail & Related papers (2023-05-31T00:55:40Z)
- Grounding Language Models to Images for Multimodal Inputs and Outputs [89.30027812161686]
We propose an efficient method to ground pretrained text-only language models to the visual domain.
We process arbitrarily interleaved image-and-text data, and generate text interleaved with retrieved images.
arXiv Detail & Related papers (2023-01-31T18:33:44Z)
- Towards using Few-Shot Prompt Learning for Automating Model Completion [0.0]
We propose a simple yet novel approach to improving completion in domain modeling activities.
Our approach exploits the power of large language models by using few-shot prompt learning, without the need to train or fine-tune those models.
arXiv Detail & Related papers (2022-12-07T02:11:26Z)
- MOCHA: A Multi-Task Training Approach for Coherent Text Generation from Cognitive Perspective [22.69509556890676]
We propose a novel multi-task training strategy for coherent text generation grounded on the cognitive theory of writing.
We extensively evaluate our model on three open-ended generation tasks including story generation, news article writing and argument generation.
arXiv Detail & Related papers (2022-10-26T11:55:41Z)
- Few-shot Prompting Towards Controllable Response Generation [49.479958672988566]
We first explore the combination of prompting and reinforcement learning (RL) to steer models' generation without accessing any of the models' parameters.
We then apply multi-task learning to help the model generalize to new tasks.
Experiment results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters.
arXiv Detail & Related papers (2022-06-08T14:48:06Z)
- Few-shot Subgoal Planning with Language Models [58.11102061150875]
We show that language priors encoded in pre-trained language models allow us to infer fine-grained subgoal sequences.
In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences without any fine-tuning.
arXiv Detail & Related papers (2022-05-28T01:03:30Z)
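
In the spirit of the summary above, subgoal inference without fine-tuning can be driven purely by prompting. The template below is a made-up example of the shape such a prompt might take, not the paper's actual prompt.

```python
# Hypothetical few-shot prompt for eliciting subgoal sequences.
TEMPLATE = """Goal: make a cup of tea
Subgoals: boil water; put a tea bag in a cup; pour the water; let it steep

Goal: {goal}
Subgoals:"""

def subgoal_prompt(goal: str) -> str:
    return TEMPLATE.format(goal=goal)

# The completion from a pretrained LM would be split on ';' into subgoals.
print(subgoal_prompt("plant a tree in the garden"))
```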
- Context-Aware Language Modeling for Goal-Oriented Dialogue Systems [84.65707332816353]
We formulate goal-oriented dialogue as a partially observed Markov decision process.
We derive a simple and effective method to finetune language models in a goal-aware way.
We evaluate our method on a practical flight-booking task using AirDialogue.
arXiv Detail & Related papers (2022-04-18T17:23:11Z)
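
One plausible reading of "finetune language models in a goal-aware way" is to weight the usual next-token loss by dialogue outcome. The sketch below is an assumption-laden illustration of that reading, not the paper's actual objective; the weighting scheme and parameter names are invented.

```python
import torch
import torch.nn.functional as F

def goal_aware_loss(logits, target_ids, achieved_goal, w_success=1.0, w_fail=0.2):
    # Standard next-token cross-entropy, down-weighted for dialogues that
    # failed to reach the goal so successful behavior dominates training.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))
    return (w_success if achieved_goal else w_fail) * ce

# Toy check with random logits over a 10-token vocabulary.
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
print(goal_aware_loss(logits, targets, achieved_goal=False))
```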