Emergent Communication for Rules Reasoning
- URL: http://arxiv.org/abs/2311.04474v1
- Date: Wed, 8 Nov 2023 05:57:39 GMT
- Title: Emergent Communication for Rules Reasoning
- Authors: Yuxuan Guo, Yifan Hao, Rui Zhang, Enshuai Zhou, Zidong Du, Xishan
Zhang, Xinkai Song, Yuanbo Wen, Yongwei Zhao, Xuehai Zhou, Jiaming Guo, Qi
Yi, Shaohui Peng, Di Huang, Ruizhi Chen, Qi Guo, Yunji Chen
- Abstract summary: We propose the Reasoning Game, a cognition-oriented environment that encourages agents to reason and communicate high-level rules.
Experimental results show that, in the Reasoning Game, a semantically stable and compositional language emerges to solve reasoning problems.
- Score: 38.24159397787027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on emergent communication between deep-learning-based agents has
received extensive attention due to its inspiration for linguistics and
artificial intelligence. However, previous attempts have centered on emergent
communication in perception-oriented environmental settings, which force
agents to describe low-level perceptual features within image or symbol
contexts. In this work, inspired by the classic human reasoning test (namely
Raven's Progressive Matrices), we propose the Reasoning Game, a
cognition-oriented environment that encourages agents to reason and communicate
high-level rules, rather than perceived low-level contexts. Moreover, we
propose 1) an unbiased dataset (namely rule-RAVEN) as a benchmark to avoid
overfitting, and 2) a two-stage curriculum agent training method as a baseline
for more stable convergence in the Reasoning Game, where contexts and semantics
are bilaterally drifting. Experimental results show that, in the Reasoning
Game, a semantically stable and compositional language emerges to solve
reasoning problems. The emergent language helps agents apply the extracted
rules to generalize to unseen context attributes, and to transfer between
different context attributes or even tasks.
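The abstract describes a sender/receiver game in which agents must communicate a high-level rule rather than raw perceptual features. The paper's deep-network architecture over rule-RAVEN panels is not reproduced here; as a minimal toy sketch of the Lewis-style signaling setup such games build on, the following self-contained example has a sender transmit a hidden rule index through a discrete message and a receiver recover it, with both agents learning from a shared reward. All class and function names, and the tabular ε-greedy learning rule, are illustrative simplifications, not the paper's method.

```python
import random

class TabularAgent:
    """ε-greedy tabular learner mapping discrete inputs to discrete outputs."""
    def __init__(self, n_inputs, n_outputs):
        self.q = [[0.0] * n_outputs for _ in range(n_inputs)]

    def act(self, x, eps=0.1):
        row = self.q[x]
        if random.random() < eps:
            return random.randrange(len(row))   # explore
        return max(range(len(row)), key=row.__getitem__)  # greedy

    def update(self, x, a, reward, lr=0.5):
        # move the action-value estimate toward the observed reward
        self.q[x][a] += lr * (reward - self.q[x][a])

def play_round(sender, receiver, rule, eps=0.1):
    """One round: sender sees the hidden rule, receiver must recover it."""
    msg = sender.act(rule, eps)      # discrete "utterance"
    guess = receiver.act(msg, eps)   # receiver's interpretation
    reward = 1.0 if guess == rule else 0.0
    sender.update(rule, msg, reward)
    receiver.update(msg, guess, reward)
    return reward

def train(n_rules=4, n_messages=4, episodes=5000, seed=0):
    random.seed(seed)
    sender = TabularAgent(n_rules, n_messages)
    receiver = TabularAgent(n_messages, n_rules)
    for _ in range(episodes):
        play_round(sender, receiver, random.randrange(n_rules))
    # greedy evaluation: fraction of rules the protocol transmits correctly
    correct = sum(
        receiver.act(sender.act(k, eps=0.0), eps=0.0) == k
        for k in range(n_rules)
    )
    return correct / n_rules
```

With enough episodes the two tables typically coordinate on a stable rule-to-message code, a toy analogue of the "semantically stable" protocol the abstract reports; the paper itself evaluates compositionality and transfer on top of such a protocol.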
Related papers
- Igniting Language Intelligence: The Hitchhiker's Guide From
Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage chain-of-thought (CoT) reasoning techniques, prompting them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z) - Learning Symbolic Rules over Abstract Meaning Representations for
Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines generic semantic generalization with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by generalizing better to unseen test games and learning from fewer training interactions.
arXiv Detail & Related papers (2023-07-05T23:21:05Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - Visual Referential Games Further the Emergence of Disentangled
Representations [0.12891210250935145]
This paper investigates how compositionality at the level of emergent languages, disentanglement at the level of the learned representations, and systematicity relate to one another in the context of visual referential games.
arXiv Detail & Related papers (2023-04-27T20:00:51Z) - Compositional Generalization in Grounded Language Learning via Induced
Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z) - Keywords and Instances: A Hierarchical Contrastive Learning Framework
Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism that unifies semantic meaning across hybrid granularities in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z) - Inherently Explainable Reinforcement Learning in Natural Language [14.117921448623342]
We focus on the task of creating a reinforcement learning agent that is inherently explainable.
This Hierarchically Explainable Reinforcement Learning agent operates in Interactive Fictions, text-based game environments.
Our agent is designed to treat explainability as a first-class citizen.
arXiv Detail & Related papers (2021-12-16T14:24:35Z) - Incorporating Pragmatic Reasoning Communication into Emergent Language [38.134221799334426]
We study the dynamics of linguistic communication across substantially different intelligence levels.
We propose computational models that combine short-term mutual reasoning-based pragmatics with long-term language emergentism.
Our results shed light on their importance for producing more natural, accurate, robust, fine-grained, and succinct utterances.
arXiv Detail & Related papers (2020-06-07T10:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.