CoRRPUS: Code-based Structured Prompting for Neurosymbolic Story Understanding
- URL: http://arxiv.org/abs/2212.10754v3
- Date: Thu, 8 Jun 2023 11:58:21 GMT
- Title: CoRRPUS: Code-based Structured Prompting for Neurosymbolic Story Understanding
- Authors: Yijiang River Dong, Lara J. Martin, Chris Callison-Burch
- Abstract summary: This work capitalizes on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use of symbolic methods for tracking the state of stories.
We show that our CoRRPUS system and abstracted prompting procedures can beat current state-of-the-art structured LLM techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Story generation and understanding -- as with all NLG/NLU tasks -- have seen a
surge in neurosymbolic work. Researchers have recognized that, while large
language models (LLMs) have tremendous utility, they can be augmented with
symbolic means to be even better and to make up for any flaws that the neural
networks might have. However, symbolic methods are extremely costly in terms of
the amount of time and expertise needed to create them. In this work, we
capitalize on state-of-the-art Code-LLMs, such as Codex, to bootstrap the use
of symbolic methods for tracking the state of stories and aiding in story
understanding. We show that our CoRRPUS system and abstracted prompting
procedures can beat current state-of-the-art structured LLM techniques on
pre-existing story understanding tasks (bAbI Task 2 and Re^3) with minimal hand
engineering. We hope that this work can help highlight the importance of
symbolic representations and specialized prompting for LLMs as these models
require some guidance for performing reasoning tasks properly.
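To make the approach concrete, here is a minimal sketch of the kind of code-based state representation a Code-LLM can be prompted to produce and update; the class and method names are illustrative assumptions, not the paper's exact prompt:

    # A minimal sketch (illustrative names, not the authors' exact prompt):
    # story world state as Python objects that a Code-LLM updates after each
    # story sentence, so questions are answered by reading the tracked state.
    from dataclasses import dataclass, field

    @dataclass
    class Character:
        name: str
        location: str
        inventory: list = field(default_factory=list)

    @dataclass
    class WorldState:
        characters: dict = field(default_factory=dict)

        def move(self, name, location):
            self.characters[name].location = location

        def pick_up(self, name, item):
            self.characters[name].inventory.append(item)

    # bAbI Task 2 style: "John picked up the football. John went to the
    # garden. Where is the football?" Answered from state, not re-prompting.
    state = WorldState({"John": Character("John", "hallway")})
    state.pick_up("John", "football")
    state.move("John", "garden")
    print(next(c.location for c in state.characters.values()
               if "football" in c.inventory))  # -> garden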
Related papers
- Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks (arXiv, 2024-09-20)
We propose a novel knowledge distillation method for learning the complex reasoning abilities of Large Language Models (LLMs).
NesyCD distills the general capabilities and the specialized knowledge of LLMs in different ways.
Our experiments show that NesyCD significantly boosts SLMs' complex reasoning performance on in-domain (BBH, GSM8K) and out-of-domain (AGIEval, ARC) datasets.
- Neural Reward Machines (arXiv, 2024-08-16)
Non-Markovian Reinforcement Learning (RL) tasks are very hard to solve, because agents must consider the entire history of state-action pairs to act rationally in the environment.
We define Neural Reward Machines (NRM), an automata-based neurosymbolic framework that can be used for both reasoning and learning in non-symbolic RL domains.
We show that NRMs can exploit high-level symbolic knowledge in non-symbolic environments without any knowledge of the symbol grounding (SG) function, outperforming Deep RL methods that cannot incorporate prior knowledge.
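For context, a reward machine is a finite automaton whose transitions fire on high-level symbols and emit rewards; NRMs pair such an automaton with learned symbol grounding. A minimal sketch of the symbolic side only, with illustrative state and symbol names:

    # Classical reward machine: transitions keyed by (state, symbol) emit a
    # reward and move to the next automaton state. Illustrative names only.
    class RewardMachine:
        def __init__(self, transitions, start):
            self.transitions = transitions  # (state, symbol) -> (state, reward)
            self.state = start

        def step(self, symbol):
            self.state, reward = self.transitions.get(
                (self.state, symbol), (self.state, 0.0))
            return reward

    # Task "reach the key, then the door": reward only for the full sequence,
    # which a Markovian reward on single states cannot express.
    rm = RewardMachine({("u0", "key"): ("u1", 0.0),
                        ("u1", "door"): ("u2", 1.0)}, start="u0")
    for sym in ["none", "key", "none", "door"]:
        r = rm.step(sym)
    print(rm.state, r)  # -> u2 1.0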
- Can Large Language Models Understand Symbolic Graphics Programs? (arXiv, 2024-08-15)
Symbolic graphics programs are popular in computer graphics.
We create a benchmark for the semantic visual understanding of symbolic graphics programs.
We find that LLMs considered stronger at reasoning generally perform better.
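As an illustration of the task (this specific instance is assumed, not taken from the benchmark), a model must answer a semantic question about what a program renders without executing it:

    # A symbolic graphics program (here, SVG) and a semantic question about it.
    svg_program = """
    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <circle cx="50" cy="50" r="40" fill="yellow"/>
      <circle cx="35" cy="40" r="5" fill="black"/>
      <circle cx="65" cy="40" r="5" fill="black"/>
      <path d="M 30 60 Q 50 80 70 60" stroke="black" fill="none"/>
    </svg>
    """
    question = "What does this program draw?"  # expected answer: a smiley face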
- Neurosymbolic AI for Enhancing Instructability in Generative AI (arXiv, 2024-07-26)
Generative AI has transformed content creation across text, images, and music, showcasing capabilities in following instructions through prompting.
This article explores why neurosymbolic AI offers a better path to enhance the instructability of Large Language Models (LLMs).
We show that a neurosymbolic approach enhances the reliability and context-awareness of task execution, enabling LLMs to dynamically interpret and respond to a wider range of instructional contexts with greater precision and flexibility.
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning (arXiv, 2024-02-02)
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning (arXiv, 2023-11-29)
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage the recent advancement of LLMs to approximate these two ideal properties, i.e., broad-coverage symbols and rational rules.
Our method shows superior performance on extensive activity understanding tasks.
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search (arXiv, 2022-12-30)
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
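To illustrate what an object-level symbolic policy looks like (the expression and object attributes below are illustrative, not a policy learned by DiffSES):

    # A symbolic policy over object-level state: a small closed-form
    # expression replaces a pixel-based neural policy.
    from dataclasses import dataclass

    @dataclass
    class Obj:
        x: float
        y: float

    def symbolic_policy(ball: Obj, paddle: Obj) -> int:
        # Discovered expression, e.g. via expression search:
        # move toward the ball's horizontal position.
        diff = ball.x - paddle.x
        return 0 if abs(diff) < 0.05 else (1 if diff > 0 else -1)

    print(symbolic_policy(Obj(0.8, 0.2), Obj(0.5, 0.0)))  # -> 1 (move right)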
- Learning Neuro-Symbolic Skills for Bilevel Planning (arXiv, 2022-06-21)
Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback.
Hierarchical approaches, such as task and motion planning (TAMP), address these challenges by decomposing decision-making into two or more levels of abstraction.
Our main contribution is a method for learning parameterized policies in combination with operators and samplers.
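A rough sketch of how such a skill pairs symbolic preconditions and effects with a learned sampler and policy (the field names here are assumptions, not the paper's API):

    # Illustrative skill structure for bilevel (task and motion) planning:
    # symbolic operators guide high-level search; the sampler and policy
    # handle the continuous low level.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Skill:
        name: str
        preconditions: frozenset   # symbols that must hold, e.g. {"HandEmpty"}
        add_effects: frozenset     # symbols made true by the skill
        delete_effects: frozenset  # symbols made false
        sampler: Callable          # proposes continuous params (e.g. a grasp)
        policy: Callable           # low-level controller executing the skill

    pick = Skill(
        name="Pick(block)",
        preconditions=frozenset({"HandEmpty", "Reachable(block)"}),
        add_effects=frozenset({"Holding(block)"}),
        delete_effects=frozenset({"HandEmpty"}),
        sampler=lambda state: {"grasp_pose": (0.1, 0.2, 0.3)},
        policy=lambda state, params: ["move_to", params["grasp_pose"],
                                      "close_gripper"],
    )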
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting (arXiv, 2022-06-06)
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes the Neuro-Symbolic Causal Language Planner (CLAP), which elicits procedural knowledge from LLMs with commonsense-infused prompting.
- Lifelong Reinforcement Learning with Temporal Logic Formulas and Reward Machines (arXiv, 2021-11-18)
We propose Lifelong reinforcement learning with Sequential linear temporal logic formulas and Reward Machines (LSRM).
We first introduce Sequential Linear Temporal Logic (SLTL), which supplements the existing Linear Temporal Logic (LTL) formal language.
We then utilize Reward Machines (RM) to exploit structural reward functions for tasks encoded with high-level events.
This list is automatically generated from the titles and abstracts of the papers on this site.