A framework for the emergence and analysis of language in social
learning agents
- URL: http://arxiv.org/abs/2305.02632v1
- Date: Thu, 4 May 2023 08:11:01 GMT
- Title: A framework for the emergence and analysis of language in social
learning agents
- Authors: Tobias J. Wieczorek, Tatjana Tchumatchenko, Carlos Wert Carvajal and
Maximilian F. Eggl
- Abstract summary: This study proposes a communication protocol between cooperative agents to analyze the formation of individual and shared abstractions.
Using grid-world mazes and reinforcement learning, teacher ANNs pass a compressed message to a student ANN for better task completion.
This highlights the role of language as a common representation between agents and its implications for generalization capabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial neural networks (ANNs) are increasingly used as research models,
but questions remain about their generalizability and representational
invariance. Biological neural networks under social constraints evolved to
enable communicable representations, demonstrating generalization capabilities.
This study proposes a communication protocol between cooperative agents to
analyze the formation of individual and shared abstractions and their impact on
task performance. This communication protocol aims to mimic language features
by encoding high-dimensional information through low-dimensional
representation. Using grid-world mazes and reinforcement learning, teacher ANNs
pass a compressed message to a student ANN for better task completion. Through
this, the student achieves a higher goal-finding rate and generalizes the goal
location across task worlds. Further optimizing message content to maximize
student reward improves information encoding, suggesting that an accurate
representation in the space of messages requires bi-directional input. This
highlights the role of language as a common representation between agents and
its implications for generalization capabilities.
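To make the protocol concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the network sizes, the message dimensionality and the policy-gradient update are assumptions. A teacher compresses its view of the maze into a low-dimensional message, and a student policy conditions on that message together with its own observation; because the student's loss is differentiable with respect to the message, student reward can also shape the teacher's encoding, in the spirit of the bi-directional input mentioned above.

```python
# Minimal sketch (not the authors' code) of the teacher -> student protocol:
# a teacher compresses its high-dimensional view of the maze into a short
# message vector, and the student policy conditions on that message plus its
# own local observation. All names, sizes and the update rule are illustrative.
import torch
import torch.nn as nn

OBS_DIM, MSG_DIM, N_ACTIONS = 64, 4, 4  # assumed dimensions

class Teacher(nn.Module):
    """Encodes the teacher's maze observation into a low-dimensional message."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(OBS_DIM, 32), nn.ReLU(),
            nn.Linear(32, MSG_DIM), nn.Tanh(),  # bottleneck = the "message"
        )

    def forward(self, teacher_obs):
        return self.encoder(teacher_obs)

class Student(nn.Module):
    """Policy over maze actions, conditioned on local observation + message."""
    def __init__(self):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(OBS_DIM + MSG_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, student_obs, message):
        logits = self.policy(torch.cat([student_obs, message], dim=-1))
        return torch.distributions.Categorical(logits=logits)

teacher, student = Teacher(), Student()
opt = torch.optim.Adam(list(teacher.parameters()) + list(student.parameters()), lr=1e-3)

# One REINFORCE-style update: the student's reward signal reaches the teacher
# through the differentiable message, i.e. the bi-directional shaping above.
teacher_obs = torch.randn(1, OBS_DIM)   # stand-in for the teacher's maze view
student_obs = torch.randn(1, OBS_DIM)   # stand-in for the student's maze view
dist = student(student_obs, teacher(teacher_obs))
action = dist.sample()
reward = torch.tensor(1.0)              # e.g. +1 if the student reached the goal

loss = (-dist.log_prob(action) * reward).mean()  # policy-gradient surrogate
opt.zero_grad()
loss.backward()
opt.step()
```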
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z) - Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - Learning Multi-Agent Communication from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
Our proposed approach, CommFormer, efficiently optimizes the communication graph and concurrently refines architectural parameters through gradient descent in an end-to-end manner (a minimal sketch of this learnable-graph idea appears after this list).
arXiv Detail & Related papers (2024-05-14T12:40:25Z) - Efficient Communication via Self-supervised Information Aggregation for
Online and Offline Multi-agent Reinforcement Learning [12.334522644561591]
We argue that efficient message aggregation is essential for good coordination in cooperative Multi-Agent Reinforcement Learning (MARL).
We propose Multi-Agent communication via Self-supervised Information Aggregation (MASIA), where agents can aggregate the received messages into compact representations with high relevance to augment the local policy.
We build offline benchmarks for multi-agent communication, which, to our knowledge, are the first of their kind.
arXiv Detail & Related papers (2023-02-19T16:02:16Z) - Learning Multi-Object Positional Relationships via Emergent
Communication [16.26264889682904]
We train agents in a referential game where observations contain two objects, and find that generalization is the major problem when the positional relationship is involved.
We find that the learned language generalizes well in a new multi-step MDP task where the positional relationship describes the goal, and that it outperforms raw-pixel images as well as pre-trained image features.
We also show that language transfer from the referential game performs better in the new task than learning language directly in this task, implying the potential benefits of pre-training in referential games.
arXiv Detail & Related papers (2023-02-16T04:44:53Z) - Less Data, More Knowledge: Building Next Generation Semantic
Communication Networks [180.82142885410238]
We present the first rigorous vision of a scalable end-to-end semantic communication network.
We first discuss how the design of semantic communication networks requires a move from data-driven networks towards knowledge-driven ones.
By using semantic representations and languages, we show that the traditional transmitter and receiver now become a teacher and an apprentice.
arXiv Detail & Related papers (2022-11-25T19:03:25Z) - Compositional Generalization in Grounded Language Learning via Induced
Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Networked Multi-Agent Reinforcement Learning with Emergent Communication [18.47483427884452]
Multi-Agent Reinforcement Learning (MARL) methods find optimal policies for agents that operate in the presence of other learning agents.
One way to coordinate is by learning to communicate with each other.
Can the agents develop a language while learning to perform a common task?
arXiv Detail & Related papers (2020-04-06T16:13:23Z) - Towards Graph Representation Learning in Emergent Communication [37.8523331078468]
We use graph convolutional networks to support the evolution of language and cooperation in multi-agent systems.
Motivated by an image-based referential game, we propose a graph referential game with varying degrees of complexity.
We show that the emerged communication protocol is robust, that the agents uncover the true factors of variation in the game, and that they learn to generalize beyond the samples encountered during training.
arXiv Detail & Related papers (2020-01-24T15:55:59Z)
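As referenced in the CommFormer entry above, the following is a minimal sketch of the general idea of treating the inter-agent communication topology as a learnable graph trained by gradient descent; the module name, the sigmoid gating and the residual update are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (assumptions, not the CommFormer implementation) of a learnable
# communication graph: a continuous adjacency matrix gates which agents' messages
# are mixed into each agent's state, and is trained jointly with the rest of the
# model by gradient descent on the task loss.
import torch
import torch.nn as nn

class LearnableCommGraph(nn.Module):
    def __init__(self, n_agents, hidden_dim):
        super().__init__()
        # Unconstrained logits over directed edges; sigmoid gives soft edge weights.
        self.edge_logits = nn.Parameter(torch.zeros(n_agents, n_agents))
        self.mix = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, agent_states):              # agent_states: (n_agents, hidden_dim)
        adj = torch.sigmoid(self.edge_logits)      # soft communication graph
        adj = adj * (1 - torch.eye(adj.size(0)))   # no self-edges (assumed)
        messages = adj @ self.mix(agent_states)    # aggregate neighbours' messages
        return agent_states + messages             # residual update per agent

comm = LearnableCommGraph(n_agents=3, hidden_dim=8)
updated = comm(torch.randn(3, 8))  # edge weights receive gradients from any task loss
```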
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.