Emergent Graphical Conventions in a Visual Communication Game
- URL: http://arxiv.org/abs/2111.14210v1
- Date: Sun, 28 Nov 2021 18:59:57 GMT
- Authors: Shuwen Qiu, Sirui Xie, Lifeng Fan, Tao Gao, Song-Chun Zhu, Yixin Zhu
- Score: 80.79297387339614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Beyond symbolic languages, humans also communicate with graphical sketches.
While recent studies of emergent communication primarily focus on symbolic
languages, their settings overlook the graphical sketches existing in human
communication; they do not account for the evolution process through which
symbolic sign systems emerge in the trade-off between iconicity and
symbolicity. In this work, we take the very first step to model and simulate
such an evolution process via two neural agents playing a visual communication
game; the sender communicates with the receiver by sketching on a canvas. We
devise a novel reinforcement learning method such that agents are evolved
jointly towards successful communication and abstract graphical conventions. To
inspect the emerged conventions, we carefully define three key properties --
iconicity, symbolicity, and semanticity -- and design evaluation methods
accordingly. Our experimental results under different controls are consistent
with observations from studies of human graphical conventions. Notably, we
find that evolved sketches can preserve the continuum of semantics under proper
environmental pressures. More interestingly, co-evolved agents can switch
between conventionalized and iconic communication based on their familiarity
with referents. We hope the present research paves the way for studying
emergent communication with the unexplored modality of sketches.
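The abstract's setup can be made concrete with a toy round of the sender-receiver game. Everything below is a hypothetical stand-in, not the paper's model: referents are random feature vectors, the sender "draws" by linearly projecting a referent onto a low-dimensional canvas, and the receiver names the candidate whose own projection lies closest to the sketch. A shared projection matrix plays the role of a fully co-evolved graphical convention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dimensions: 8 referents with 16-d features, sketched onto a 4-d "canvas".
N_REFERENTS, FEAT_DIM, CANVAS_DIM = 8, 16, 4
referents = rng.normal(size=(N_REFERENTS, FEAT_DIM))
W_sender = rng.normal(size=(FEAT_DIM, CANVAS_DIM))  # sender's drawing policy (stand-in)
W_receiver = W_sender.copy()                        # convention shared after co-evolution

def play_round(target_idx, n_distractors=3):
    """One round: the sender sketches the target, the receiver guesses it."""
    sketch = referents[target_idx] @ W_sender
    distractors = rng.choice(
        [i for i in range(N_REFERENTS) if i != target_idx],
        size=n_distractors, replace=False)
    candidates = np.concatenate([[target_idx], distractors])
    rng.shuffle(candidates)
    # The receiver scores each candidate by how closely its own projection matches the sketch.
    scores = [-np.linalg.norm(referents[c] @ W_receiver - sketch) for c in candidates]
    guess = candidates[int(np.argmax(scores))]
    return guess == target_idx  # the communication reward driving joint evolution

accuracy = np.mean([play_round(int(rng.integers(N_REFERENTS))) for _ in range(200)])
```

With the convention shared, every round succeeds; replacing `W_receiver` with an independently drawn matrix would push accuracy toward chance, which is the gap the paper's reinforcement learning method must close.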
Related papers
- EC^2: Emergent Communication for Embodied Control [72.99894347257268]
Embodied control requires agents to leverage multi-modal pre-training to quickly learn how to act in new environments.
We propose Emergent Communication for Embodied Control (EC2), a novel scheme to pre-train video-language representations for few-shot embodied control.
EC2 is shown to consistently outperform previous contrastive learning methods for both videos and texts as task inputs.
arXiv Detail & Related papers (2023-04-19T06:36:02Z)
- Models of symbol emergence in communication: a conceptual review and a guide for avoiding local minima [0.0]
Computational simulations are a popular method for testing hypotheses about the emergence of communication.
We identify the assumptions and explanatory targets of several most representative models and summarise the known results.
In line with this perspective, we sketch the road towards modelling the emergence of meaningful symbolic communication.
arXiv Detail & Related papers (2023-03-08T12:53:03Z)
- Emergence of Shared Sensory-motor Graphical Language from Visual Input [22.23299485364174]
We introduce the Graphical Referential Game (GREG) where a speaker must produce a graphical utterance to name a visual referent object.
The utterances are drawings produced with dynamical motor primitives combined with a sketching library.
We show that our method allows the emergence of a shared, graphical language with compositional properties.
arXiv Detail & Related papers (2022-10-03T17:11:18Z)
- Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text [70.14613727284741]
Communicating with humans is challenging for AIs because it requires a shared understanding of the world, complex semantics, and at times multi-modal gestures.
We investigate these challenges in the context of Iconary, a collaborative game of drawing and guessing based on Pictionary.
We propose models to play Iconary and train them on over 55,000 games between human players.
arXiv Detail & Related papers (2021-12-01T19:41:03Z)
- Visual resemblance and communicative context constrain the emergence of graphical conventions [21.976382800327965]
Drawing provides a versatile medium for communicating about the visual world.
Do viewers understand drawings based solely on their resemblance to the entities they refer to (i.e., as images)?
Or do they understand drawings based on shared but arbitrary associations with these entities (i.e., as symbols)?
arXiv Detail & Related papers (2021-09-17T23:05:36Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
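The differentiable-rule idea can be illustrated with a generic fuzzy-logic gate; this is not the paper's actual layer, only a common construction. Each input's membership in a conjunction is a learnable weight in [0, 1]: a weight near 0 drops that input from the rule, so the symbolic rule can be read off from the weights after training.

```python
import numpy as np

def soft_and(x, w):
    """Differentiable conjunction. x: truth values in [0, 1];
    w: learnable membership weights in [0, 1]. With w binary and x binary,
    this reduces to the logical AND over the selected inputs."""
    return float(np.prod(1.0 - w * (1.0 - x)))

facts = np.array([1.0, 0.0, 1.0])
rule_weights = np.array([1.0, 0.0, 1.0])  # rule uses facts 0 and 2, ignores fact 1
result = soft_and(facts, rule_weights)
```

Because the product is smooth in `w`, gradient descent can tune which facts participate in the rule.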
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Learning to Draw: Emergent Communication through Sketching [0.0]
We show how agents can learn to communicate in order to collaboratively solve tasks.
Existing research has focused on language, with a learned communication channel transmitting sequences of discrete tokens between the agents.
Our agents are parameterised by deep neural networks, and the drawing procedure is differentiable, allowing for end-to-end training.
In the framework of a referential communication game, we demonstrate that agents can not only successfully learn to communicate by drawing, but with appropriate inductive biases, can do so in a fashion that humans can interpret.
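The key enabler here is a differentiable drawing procedure. A minimal sketch of that idea, under assumed details (single straight stroke, Gaussian falloff; not the paper's rasterizer): pixel intensity decays smoothly with distance to the stroke segment, so gradients flow from the rendered image back to the stroke endpoints.

```python
import numpy as np

def soft_stroke(p0, p1, size=16, sigma=0.7):
    """Soft (differentiable) rasterization of one line stroke from p0 to p1."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    d = np.stack([xs - p0[0], ys - p0[1]], axis=-1)   # pixel minus stroke start
    seg = np.array(p1, float) - np.array(p0, float)   # stroke direction
    t = np.clip((d @ seg) / (seg @ seg), 0.0, 1.0)    # projection onto the segment
    closest = np.array(p0, float) + t[..., None] * seg
    dist2 = (xs - closest[..., 0])**2 + (ys - closest[..., 1])**2
    return np.exp(-dist2 / (2 * sigma**2))            # smooth in p0 and p1

canvas = soft_stroke((2, 2), (13, 13))
```

Since the canvas is a smooth function of the endpoints, a drawing agent built from such primitives can be trained end-to-end with ordinary backpropagation.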
arXiv Detail & Related papers (2021-06-03T18:17:55Z)
- Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
arXiv Detail & Related papers (2021-06-02T15:27:05Z)
- Exploring Visual Engagement Signals for Representation Learning [56.962033268934015]
We present VisE, a weakly supervised learning approach, which maps social images to pseudo labels derived by clustered engagement signals.
We then study how models trained in this way benefit subjective downstream computer vision tasks such as emotion recognition or political bias detection.
arXiv Detail & Related papers (2021-04-15T20:50:40Z)
- Towards Graph Representation Learning in Emergent Communication [37.8523331078468]
We use graph convolutional networks to support the evolution of language and cooperation in multi-agent systems.
Motivated by an image-based referential game, we propose a graph referential game with varying degrees of complexity.
We show that the emerged communication protocol is robust, that the agents uncover the true factors of variation in the game, and that they learn to generalize beyond the samples encountered during training.
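For reference, the standard graph-convolution step such agents build on (Kipf-and-Welling-style propagation; the paper's exact architecture may differ): each node aggregates features from its neighbours and itself over a normalized adjacency, then applies a learned transform and a ReLU.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: normalize adjacency, propagate, transform."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)         # aggregate, transform, ReLU

# Toy example: a 3-node path graph with 2-d node features and a 4-d output.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.default_rng(0).normal(size=(3, 2))
W = np.random.default_rng(1).normal(size=(2, 4))
out = gcn_layer(A, H, W)
```

Stacking such layers lets an agent's message depend on the relational structure of the scene, not just on per-object features.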
arXiv Detail & Related papers (2020-01-24T15:55:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.