Learning to Improve Representations by Communicating About Perspectives
- URL: http://arxiv.org/abs/2109.09390v1
- Date: Mon, 20 Sep 2021 09:30:13 GMT
- Title: Learning to Improve Representations by Communicating About Perspectives
- Authors: Julius Taylor, Eleni Nisioti, Clément Moulin-Frier
- Abstract summary: We present a minimal architecture comprised of a population of autoencoders.
We show that our proposed architecture allows the emergence of aligned representations.
Results demonstrate how communication from subjective perspectives can lead to the acquisition of more abstract representations in multi-agent systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective latent representations need to capture abstract features of the external world. We hypothesise that the necessity for a group of agents to reconcile their subjective interpretations of a shared environment state is an essential factor influencing this property. To test this hypothesis, we propose an architecture where individual agents in a population receive different observations of the same underlying state and learn latent representations that they communicate to each other. We highlight a fundamental link between emergent communication and representation learning: the role of language as a cognitive tool and the opportunities conferred by subjectivity, an inherent property of most multi-agent systems. We present a minimal architecture comprised of a population of autoencoders, where we define loss functions capturing different aspects of effective communication and examine their effect on the learned representations. We show that our proposed architecture allows the emergence of aligned representations. The subjectivity introduced by presenting agents with distinct perspectives of the environment state contributes to learning abstract representations that outperform those learned by both a single autoencoder and a population of autoencoders presented with identical perspectives. Altogether, our results demonstrate how communication from subjective perspectives can lead to the acquisition of more abstract representations in multi-agent systems, opening promising perspectives for future research at the intersection of representation learning and emergent communication.
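The abstract above outlines the core mechanism: a population of autoencoders, each receiving a different perspective of the same underlying state, trained with losses that reward both reconstruction and agreement between the communicated latents. The snippet below is a minimal sketch of that idea, not the authors' implementation; the class and function names, the network sizes, and the use of a pairwise MSE alignment term as a stand-in for the paper's communication losses are all illustrative assumptions.

```python
# Minimal sketch: a population of autoencoders that see different views of the
# same underlying state and align their latent codes. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Autoencoder(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )

    def forward(self, obs):
        z = self.encoder(obs)
        return z, self.decoder(z)


def population_loss(agents, observations):
    """Reconstruction plus a pairwise alignment term.

    observations[i] is agent i's perspective of the *same* underlying state.
    The alignment term (a simple MSE between latents) stands in for the
    communication-based losses described in the abstract.
    """
    latents, recon_loss = [], 0.0
    for agent, obs in zip(agents, observations):
        z, recon = agent(obs)
        latents.append(z)
        recon_loss = recon_loss + F.mse_loss(recon, obs)

    align_loss, pairs = 0.0, 0
    for i in range(len(latents)):
        for j in range(i + 1, len(latents)):
            align_loss = align_loss + F.mse_loss(latents[i], latents[j])
            pairs += 1
    return recon_loss / len(agents) + align_loss / max(pairs, 1)


# Toy usage: two agents, each seeing a different noisy view of one state.
torch.manual_seed(0)
agents = [Autoencoder(obs_dim=16, latent_dim=4) for _ in range(2)]
opt = torch.optim.Adam([p for a in agents for p in a.parameters()], lr=1e-3)
state = torch.randn(32, 16)                        # batch of underlying states
views = [state + 0.1 * torch.randn_like(state) for _ in agents]
loss = population_loss(agents, views)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the alignment term simply pulls latents of the same state toward each other; the paper instead studies several loss functions capturing different aspects of effective communication, so this term is where those variants would be swapped in.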
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in terms of objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Learning Geometric Representations of Objects via Interaction [25.383613570119266]
We address the problem of learning representations from observations of a scene involving an agent and an external object the agent interacts with.
We propose a representation learning framework extracting the location in physical space of both the agent and the object from unstructured observations of arbitrary nature.
arXiv Detail & Related papers (2023-09-11T09:45:22Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Investigating the Properties of Neural Network Representations in Reinforcement Learning [35.02223992335008]
This paper empirically investigates the properties of representations that support transfer in reinforcement learning.
We consider Deep Q-learning agents with different auxiliary losses in a pixel-based navigation environment.
We develop a systematic method to better understand why some representations transfer better than others.
arXiv Detail & Related papers (2022-03-30T00:14:26Z)
- Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations [12.485001250777248]
Recently introduced self-supervised methods for image representation learning provide results on par with or superior to their fully supervised competitors.
Motivated by this observation, we introduce a novel visual probing framework for explaining the self-supervised models.
We show the effectiveness and applicability of those analogs in the context of explaining self-supervised representations.
arXiv Detail & Related papers (2021-06-21T12:40:31Z)
- Cross-Modal Discrete Representation Learning [73.68393416984618]
We present a self-supervised learning framework that learns a representation that captures finer levels of granularity across different modalities.
Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities.
arXiv Detail & Related papers (2021-06-10T00:23:33Z)
- Exploring Visual Engagement Signals for Representation Learning [56.962033268934015]
We present VisE, a weakly supervised learning approach, which maps social images to pseudo labels derived by clustered engagement signals.
We then study how models trained in this way benefit subjective downstream computer vision tasks such as emotion recognition or political bias detection.
arXiv Detail & Related papers (2021-04-15T20:50:40Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
- Towards Graph Representation Learning in Emergent Communication [37.8523331078468]
We use graph convolutional networks to support the evolution of language and cooperation in multi-agent systems.
Motivated by an image-based referential game, we propose a graph referential game with varying degrees of complexity.
We show that the emerged communication protocol is robust, that the agents uncover the true factors of variation in the game, and that they learn to generalize beyond the samples encountered during training.
arXiv Detail & Related papers (2020-01-24T15:55:59Z)