Integrating AI Planning with Natural Language Processing: A Combination
of Explicit and Tacit Knowledge
- URL: http://arxiv.org/abs/2202.07138v2
- Date: Thu, 13 Apr 2023 07:05:22 GMT
- Title: Integrating AI Planning with Natural Language Processing: A Combination
of Explicit and Tacit Knowledge
- Authors: Kebing Jin, Hankz Hankui Zhuo
- Abstract summary: This paper outlines the commonalities and relations between AI planning and natural language processing.
It argues that each can effectively benefit the other in five areas: (1) planning-based text understanding, (2) planning-based natural language processing, (3) planning-based explainability, (4) text-based human-robot interaction, and (5) applications.
- Score: 15.488154564562185
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Natural language processing (NLP) investigates the interactions
between agents and humans by processing and analyzing large amounts of natural
language data. Large-scale language models play an important role in current
NLP, but their development brings challenges of explainability and complexity.
One way to address these challenges is to introduce logical relations and rules
into NLP models, for example by making use of automated planning. Automated
planning (AI planning) focuses on building symbolic domain models and
synthesizing plans that transform initial states into goal states under those
models. Recently, many works have connected these two fields, which
respectively can generate explicit knowledge (e.g., the preconditions and
effects of action models) and learn from tacit knowledge (e.g., neural models).
Integrating AI planning and NLP effectively improves communication between
humans and intelligent agents. This paper outlines the commonalities and
relations between AI planning and NLP, and argues that each can effectively
benefit the other in five areas: (1) planning-based text understanding, (2)
planning-based natural language processing, (3) planning-based explainability,
(4) text-based human-robot interaction, and (5) applications. We also explore
potential future issues between AI planning and NLP. To the best of our
knowledge, this survey is the first work to address the deep connections
between AI planning and natural language processing.
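To make concrete the "explicit knowledge" the abstract refers to (action models with preconditions and effects, and plans synthesized from an initial state to a goal), here is a minimal STRIPS-style planning sketch. The toy domain, predicate names, and actions are invented for illustration and do not appear in the paper:

```python
from collections import deque

# Each action carries explicit preconditions and add/delete effects --
# the symbolic "explicit knowledge" that AI planning manipulates.
# The domain below (a robot moving between rooms to fetch a key) is a
# hypothetical example, not taken from the surveyed work.
ACTIONS = {
    "move-a-b": {"pre": {"at-a"}, "add": {"at-b"}, "del": {"at-a"}},
    "move-b-c": {"pre": {"at-b"}, "add": {"at-c"}, "del": {"at-b"}},
    "pick-key": {"pre": {"at-c"}, "add": {"has-key"}, "del": set()},
}

def plan(initial, goal):
    """Breadth-first search from the initial state to any state satisfying the goal."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # every goal literal holds in this state
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state:  # action is applicable
                nxt = frozenset((state - act["del"]) | act["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable from the initial state

print(plan({"at-a"}, {"has-key"}))  # ['move-a-b', 'move-b-c', 'pick-key']
```

The returned action sequence is the "plan" transforming the initial state into a goal state; in the survey's framing, an NLP model supplies or learns such preconditions and effects from text, while the planner provides the logical structure.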
Related papers
- Deep Learning and Machine Learning -- Natural Language Processing: From Theory to Application [17.367710635990083]
We focus on natural language processing (NLP) and the role of large language models (LLMs).
This paper discusses advanced data preprocessing techniques and the use of frameworks like Hugging Face for implementing transformer-based models.
It highlights challenges such as handling multilingual data, reducing bias, and ensuring model robustness.
arXiv Detail & Related papers (2024-10-30T09:35:35Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
- Is English the New Programming Language? How About Pseudo-code Engineering? [0.0]
This study investigates how different input forms impact ChatGPT, a leading language model by OpenAI.
It examines the model's proficiency across four categories: understanding of intentions, interpretability, completeness, and creativity.
arXiv Detail & Related papers (2024-04-08T16:28:52Z)
- Self Generated Wargame AI: Double Layer Agent Task Planning Based on Large Language Model [0.6562256987706128]
This paper innovatively applies the large language model to the field of intelligent decision-making.
It proposes a two-layer agent task-planning framework that issues and executes decision commands through natural-language interaction.
It finds that the intelligent decision-making ability of the large language model is significantly stronger than that of commonly used reinforcement-learning and rule-based AI.
arXiv Detail & Related papers (2023-12-02T09:45:45Z)
- PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning [77.03847056008598]
PlaSma is a novel two-pronged approach to endow small language models with procedural knowledge and (constrained) language planning capabilities.
We develop symbolic procedural knowledge distillation to enhance the commonsense knowledge in small language models and an inference-time algorithm to facilitate more structured and accurate reasoning.
arXiv Detail & Related papers (2023-05-31T00:55:40Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Collecting Interactive Multi-modal Datasets for Grounded Language Understanding [66.30648042100123]
We formalize the task of collaborative embodied agents that use natural language.
We developed a tool for extensive and scalable data collection.
We collected the first dataset for interactive grounded language understanding.
arXiv Detail & Related papers (2022-11-12T02:36:32Z)
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z)
- Situated Language Learning via Interactive Narratives [16.67845396797253]
This paper explores the question of how to imbue learning agents with the ability to understand and generate contextually relevant natural language.
Two key components in creating such agents are interactivity and environment grounding.
We discuss the unique challenges a text games' puzzle-like structure combined with natural language state-and-action spaces provides.
arXiv Detail & Related papers (2021-03-18T01:55:16Z)
- Unnatural Language Processing: Bridging the Gap Between Synthetic and Natural Language Data [37.542036032277466]
We introduce a technique for "simulation-to-real" transfer in language understanding problems.
Our approach matches or outperforms state-of-the-art models trained on natural language data in several domains.
arXiv Detail & Related papers (2020-04-28T16:41:00Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.