LOA: Logical Optimal Actions for Text-based Interaction Games
- URL: http://arxiv.org/abs/2110.10973v1
- Date: Thu, 21 Oct 2021 08:36:11 GMT
- Title: LOA: Logical Optimal Actions for Text-based Interaction Games
- Authors: Daiki Kimura, Subhajit Chaudhury, Masaki Ono, Michiaki Tatsubori, Don
Joven Agravante, Asim Munawar, Akifumi Wachi, Ryosuke Kohita, Alexander Gray
- Abstract summary: We present Logical Optimal Actions (LOA), an action decision architecture for reinforcement learning applications.
LOA combines a neural network with a symbolic knowledge acquisition approach for natural language interaction games.
- Score: 63.003353499732434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Logical Optimal Actions (LOA), an action decision
architecture for reinforcement learning applications built on a neuro-symbolic
framework that combines a neural network with a symbolic knowledge acquisition
approach for natural language interaction games. The LOA demonstration
consists of a web-based interactive platform for text-based games and a
visualization of the acquired knowledge that improves the interpretability of
the trained rules. The demonstration also includes a module for comparing LOA
with other neuro-symbolic approaches as well as with non-symbolic
state-of-the-art agent models on the same text-based games. LOA additionally
provides an open-source Python implementation of the reinforcement learning
environment to facilitate experiments with neuro-symbolic agents. Code:
https://github.com/ibm/loa
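To make the agent-environment interaction concrete, here is a minimal sketch of the loop such an environment supports, written against a toy one-room game. The ToyTextEnv class and the keyword-matching policy are invented for illustration and do not reflect the actual LOA API.

```python
# Minimal sketch of a text-based game interaction loop.
# ToyTextEnv and the keyword-matching policy are hypothetical
# stand-ins; they do not reflect the actual LOA API.
import random


class ToyTextEnv:
    """A one-room text game: take the key, then open the door."""

    def reset(self):
        self.has_key = False
        return "You are in a room. A key lies on the floor next to a locked door."

    def step(self, action):
        if action == "take key":
            self.has_key = True
            return "You pick up the key. The locked door is in front of you.", 0.0, False
        if action == "open door" and self.has_key:
            return "The door opens. You win!", 1.0, True
        return "Nothing happens.", 0.0, False

    def admissible_actions(self):
        return ["take key", "open door", "look"]


def choose_action(observation, actions):
    # Placeholder policy: prefer actions whose words appear in the observation.
    scored = [(sum(w in observation.lower() for w in a.split()), a) for a in actions]
    best_score = max(scored)[0]
    return random.choice([a for s, a in scored if s == best_score])


env = ToyTextEnv()
obs, total = env.reset(), 0.0
for _ in range(50):  # cap the episode length for the sketch
    action = choose_action(obs, env.admissible_actions())
    obs, reward, done = env.step(action)
    total += reward
    if done:
        break
print("episode return:", total)
```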
Related papers
- Prompt2DeModel: Declarative Neuro-Symbolic Modeling with Natural Language [18.00674366843745]
This paper presents a pipeline for crafting domain knowledge for complex neuro-symbolic models through natural language prompts.
Our proposed pipeline utilizes techniques like dynamic in-context demonstration retrieval, model refinement based on feedback from a symbolic visualization, and user interaction.
This approach empowers domain experts, even those not well-versed in ML/AI, to formally declare their knowledge to be incorporated in customized neural models.
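One step of such a pipeline, dynamic in-context demonstration retrieval, can be sketched as follows. The demonstration pool, the bag-of-words similarity, and the rule syntax are assumptions made for illustration, not the paper's implementation.

```python
# Rough sketch of dynamic in-context demonstration retrieval:
# pick the stored (text, rule) demonstrations most similar to the
# new user prompt and prepend them as in-context examples.
# The similarity measure, demo pool, and rule syntax are illustrative only.
from collections import Counter
import math

DEMOS = [
    ("birds can fly unless they are penguins", "fly(X) :- bird(X), not penguin(X)."),
    ("every student is enrolled in some course", "enrolled(X) :- student(X)."),
    ("smokers are at risk of cancer", "at_risk(X) :- smokes(X)."),
]


def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0


def build_prompt(user_request, k=2):
    ranked = sorted(DEMOS, key=lambda d: cosine(user_request, d[0]), reverse=True)
    examples = "\n".join(f"Text: {t}\nRule: {r}" for t, r in ranked[:k])
    return f"{examples}\nText: {user_request}\nRule:"


print(build_prompt("adults who smoke heavily are at risk of heart disease"))
```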
arXiv Detail & Related papers (2024-07-30T03:10:30Z) - Learning Symbolic Rules over Abstract Meaning Representations for
Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines a generic semantic generalization with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
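A rough sketch of the rules-as-policy idea follows, using invented facts and rules rather than anything NESTA actually learns.

```python
# Illustrative sketch of interpretable rules as a policy: each rule maps
# a set of symbolic preconditions (facts extracted from the observation
# by a semantic parser) to an action template. The facts and rules here
# are invented for illustration; they are not NESTA's learned rules.

RULES = [
    # (preconditions, action template)
    ({"at(kitchen)", "in(apple, fridge)", "closed(fridge)"}, "open fridge"),
    ({"at(kitchen)", "in(apple, fridge)", "open(fridge)"}, "take apple from fridge"),
    ({"carrying(apple)", "at(kitchen)"}, "eat apple"),
]


def rule_policy(facts):
    """Return the first action whose preconditions all hold in the current facts."""
    for preconditions, action in RULES:
        if preconditions <= facts:
            return action
    return "look"  # fallback when no rule fires


facts = {"at(kitchen)", "in(apple, fridge)", "closed(fridge)"}
print(rule_policy(facts))  # -> "open fridge"
```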
arXiv Detail & Related papers (2023-07-05T23:21:05Z) - Emotion Recognition in Conversation using Probabilistic Soft Logic [17.62924003652853]
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on conversations containing two or more utterances.
We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language.
PSL provides functionality for incorporating the outputs of neural models into PSL models.
We compare our method with state-of-the-art purely neural ERC systems, and see almost a 20% improvement.
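The underlying idea can be sketched with the Lukasiewicz relaxation that PSL uses: neural predictions are treated as soft truth values in [0, 1] and combined through relaxed logical operators. The rule and the scores below are invented, and the code shows only the relaxation, not the PSL toolkit's API.

```python
# Lukasiewicz relaxation used by Probabilistic Soft Logic (PSL):
# soft truth values in [0, 1] are combined with relaxed logical
# operators, so a neural classifier's probabilities can enter a
# weighted logical rule. The rule and scores below are invented.

def l_and(a, b):            # Lukasiewicz conjunction
    return max(0.0, a + b - 1.0)

def l_or(a, b):             # Lukasiewicz disjunction
    return min(1.0, a + b)

def l_implies(body, head):  # satisfaction of the rule body -> head
    return min(1.0, 1.0 - body + head)

# Neural model outputs (soft truth values) for one conversation:
neural_joy_u1 = 0.9        # P(emotion(u1) = joy) from the neural ERC model
same_speaker_u1_u2 = 1.0   # observed: u1 and u2 share a speaker
neural_joy_u2 = 0.4        # the neural model is unsure about u2

# Relational rule: emotion(u1, joy) AND same_speaker(u1, u2) -> emotion(u2, joy)
body = l_and(neural_joy_u1, same_speaker_u1_u2)
satisfaction = l_implies(body, neural_joy_u2)
print(f"rule body: {body:.2f}, rule satisfaction: {satisfaction:.2f}")
# A weighted PSL program penalizes 1 - satisfaction (a hinge loss) when
# inferring the most probable joint assignment of emotions.
```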
arXiv Detail & Related papers (2022-07-14T23:59:06Z) - elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z) - Neural-Symbolic Integration for Interactive Learning and Conceptual
Grounding [1.14219428942199]
We propose neural-symbolic integration for abstract concept explanation and interactive learning.
Interaction with the user confirms or rejects a revision of the neural model.
The approach is illustrated using the Logic Tensor Network framework alongside Concept Activation Vectors and applied to a Convolutional Neural Network.
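A Concept Activation Vector can be obtained by fitting a linear classifier that separates a concept's examples from random examples in a hidden layer's activation space; its decision-boundary normal is the concept direction. The sketch below uses synthetic activations and scikit-learn and is not tied to the paper's implementation.

```python
# Sketch of computing a Concept Activation Vector (CAV): fit a linear
# classifier separating activations of concept examples from random
# examples; the normal of its decision boundary is the CAV. Activations
# here are synthetic; a real use would take them from a hidden layer of
# the network being explained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64
concept_acts = rng.normal(loc=0.5, scale=1.0, size=(100, dim))  # e.g. images showing "stripes"
random_acts = rng.normal(loc=0.0, scale=1.0, size=(100, dim))   # random counterexamples

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(100), np.zeros(100)])

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit-norm concept direction

# The CAV can then score how strongly a new activation expresses the
# concept, e.g. via a simple projection onto the concept direction:
new_activation = rng.normal(loc=0.5, scale=1.0, size=dim)
print("concept alignment:", float(new_activation @ cav))
```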
arXiv Detail & Related papers (2021-12-22T11:24:48Z) - Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games with a recent neuro-symbolic framework called Logical Neural Network.
Our experimental results show that RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods on a TextWorld benchmark.
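The basic building block can be illustrated with a weighted real-valued conjunction of the kind described in the Logical Neural Network literature; the toy function below is a simplified re-implementation (ignoring LNN's truth bounds and learning procedure), not the lnn package's API.

```python
# Toy version of a weighted real-valued conjunction in the spirit of a
# Logical Neural Network (LNN) neuron: inputs are truth values in [0, 1],
# and the bias beta and weights w_i control how strictly each operand
# must hold. This is a simplified sketch, not the lnn package's API
# (it ignores LNN's lower/upper truth bounds and learning procedure).

def weighted_and(inputs, weights, beta=1.0):
    """Weighted Lukasiewicz conjunction clamped to [0, 1]."""
    value = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, value))


# Example: precondition "carrying(key) AND at(door)" evaluated on soft
# truth values produced upstream (invented numbers for illustration).
truths = [0.9, 0.7]
print(weighted_and(truths, weights=[1.0, 1.0]))  # classical Lukasiewicz AND -> 0.6
print(weighted_and(truths, weights=[1.0, 0.3]))  # down-weighting the second operand -> 0.81
```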
arXiv Detail & Related papers (2021-10-21T08:21:49Z) - Neuro-Symbolic Representations for Video Captioning: A Case for
Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
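As a loose illustration of the dictionary-learning idea, one can learn a small set of joint atoms over concatenated video and text features so that each (clip, caption) pair is summarized by a sparse code; the synthetic features and the joint-dictionary setup below are assumptions, not the paper's model.

```python
# Illustrative sketch of dictionary learning over paired video/text
# features: concatenate each clip's feature vector with its caption's
# feature vector and learn a small dictionary of joint atoms, so each
# pair is summarized by a sparse code over shared "concepts".
# Features are synthetic; this is not the paper's actual model.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_pairs, video_dim, text_dim = 200, 32, 16
video_feats = rng.normal(size=(n_pairs, video_dim))
text_feats = rng.normal(size=(n_pairs, text_dim))
pairs = np.hstack([video_feats, text_feats])      # one row per (clip, caption) pair

dl = DictionaryLearning(n_components=10, alpha=1.0, max_iter=50, random_state=0)
codes = dl.fit_transform(pairs)                   # sparse codes over joint atoms

print("dictionary shape:", dl.components_.shape)  # (10, 48): joint video+text atoms
print("nonzero code entries for pair 0:", int(np.count_nonzero(codes[0])))
```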
arXiv Detail & Related papers (2020-11-18T20:21:19Z) - Deep Reinforcement Learning with Stacked Hierarchical Attention for
Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
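The knowledge-graph idea can be sketched as follows: facts extracted from observations are stored as triples, and candidate actions are admitted only when the graph supports their preconditions. The extraction, triples, and precondition table are invented for illustration, not the paper's architecture.

```python
# Sketch of the knowledge-graph idea for text-based games: facts
# extracted from observations are stored as (subject, relation, object)
# triples, and candidate actions are kept only if the graph supports
# their preconditions. Triples and preconditions are invented here.

class GameGraph:
    def __init__(self):
        self.triples = set()

    def update(self, new_triples):
        self.triples |= set(new_triples)

    def holds(self, triple):
        return triple in self.triples


# Triples a parser might extract from "You are in the kitchen. The fridge is closed."
graph = GameGraph()
graph.update([("player", "at", "kitchen"), ("fridge", "is", "closed")])

# Each candidate action is admissible only if its preconditions hold in the graph.
CANDIDATES = {
    "open fridge": [("player", "at", "kitchen"), ("fridge", "is", "closed")],
    "take apple": [("apple", "in", "fridge"), ("fridge", "is", "open")],
}

admissible = [a for a, pre in CANDIDATES.items() if all(graph.holds(t) for t in pre)]
print(admissible)  # -> ['open fridge']
```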
arXiv Detail & Related papers (2020-10-22T12:40:22Z) - Modeling Content and Context with Deep Relational Learning [31.854529627213275]
We present DRaiL, an open-source declarative framework for specifying deep relational models.
Our framework supports easy integration with expressive language encoders, and provides an interface to study the interactions between representation, inference and learning.
arXiv Detail & Related papers (2020-10-20T17:09:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.