ReferentialGym: A Nomenclature and Framework for Language Emergence &
Grounding in (Visual) Referential Games
- URL: http://arxiv.org/abs/2012.09486v1
- Date: Thu, 17 Dec 2020 10:22:15 GMT
- Title: ReferentialGym: A Nomenclature and Framework for Language Emergence &
Grounding in (Visual) Referential Games
- Authors: Kevin Denamganaï and James Alfred Walker
- Abstract summary: Natural languages are powerful tools wielded by human beings to communicate information and co-operate towards common goals.
Computational linguists have been researching the emergence of properties such as compositionality in artificial languages induced by language games.
The AI community has started to investigate language emergence and grounding, working towards better human-machine interfaces.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural languages are powerful tools wielded by human beings to communicate
information and co-operate towards common goals. Their value lies in key
properties such as compositionality, hierarchy and recurrent syntax, whose
emergence computational linguists have been studying in artificial languages
induced by language games. Only relatively recently has the AI community
started to investigate language emergence and grounding, working towards
better human-machine interfaces, for instance interactive/conversational AI
assistants that are able to relate their vision to the ongoing conversation.
This paper provides two contributions to this research field. Firstly, a
nomenclature is proposed to understand the main initiatives in studying
language emergence and grounding, accounting for the variations in assumptions
and constraints. Secondly, a PyTorch-based deep learning framework is
introduced, named ReferentialGym, which is dedicated to furthering the
exploration of language emergence and grounding. By providing baseline
implementations of major algorithms and metrics, in addition to many different
features and approaches, ReferentialGym attempts to lower the barrier to entry
to the field and provide the community with common implementations.
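The referential games the framework targets follow a simple speaker/listener protocol: a speaker observes a target object, emits a message, and a listener must pick the target out of a lineup containing distractors. The sketch below illustrates that protocol with a toy oracle speaker and listener; all names here (`speaker`, `listener`, `play_round`) are illustrative assumptions, not ReferentialGym's actual API, and in the framework both agents would be learned PyTorch modules rather than hand-coded functions.

```python
import random

def speaker(target):
    # Toy "language": the speaker names the target's attributes directly.
    # In a real referential game this would be a learned neural encoder
    # emitting discrete symbols.
    return tuple(target)

def listener(message, lineup):
    # The listener picks the candidate matching the message,
    # guessing at random if nothing matches.
    for i, candidate in enumerate(lineup):
        if tuple(candidate) == message:
            return i
    return random.randrange(len(lineup))

def play_round(objects, target_idx, num_distractors=2, rng=None):
    """One round of a discriminative referential game."""
    rng = rng or random.Random(0)
    distractors = [o for i, o in enumerate(objects) if i != target_idx]
    lineup = rng.sample(distractors, num_distractors) + [objects[target_idx]]
    rng.shuffle(lineup)
    message = speaker(objects[target_idx])
    guess = listener(message, lineup)
    return lineup[guess] == objects[target_idx]

objects = [("red", "circle"), ("blue", "circle"),
           ("red", "square"), ("blue", "square")]
accuracy = sum(play_round(objects, t, rng=random.Random(t))
               for t in range(len(objects))) / len(objects)
print(accuracy)  # 1.0 for this oracle speaker/listener pair
```

Because the toy speaker communicates the target's attributes losslessly and all objects are distinct, the listener always succeeds; the interesting research questions arise precisely when the message channel is discrete, limited, and learned.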
Related papers
- A Survey on Emergent Language [9.823821010022932]
The paper provides a comprehensive review of 181 scientific publications on emergent language in artificial intelligence.
Its objective is to serve as a reference for researchers interested in or proficient in the field.
arXiv Detail & Related papers (2024-09-04T12:22:05Z) - Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z) - Weakly-supervised Deep Cognate Detection Framework for Low-Resourced
Languages Using Morphological Knowledge of Closely-Related Languages [1.7622337807395716]
Exploiting cognates for transfer learning in under-resourced languages is an exciting opportunity for language understanding tasks.
Previous approaches mainly focused on supervised cognate detection tasks based on orthographic, phonetic or state-of-the-art contextual language models.
This paper proposes a novel language-agnostic weakly-supervised deep cognate detection framework for under-resourced languages.
arXiv Detail & Related papers (2023-11-09T05:46:41Z) - Towards More Human-like AI Communication: A Review of Emergent
Communication Research [0.0]
Emergent communication (Emecom) is a field of research aiming to develop artificial agents capable of using natural language.
In this review, we delineate the common properties we find across the literature and how they relate to human interactions.
We identify two subcategories and highlight their characteristics and open challenges.
arXiv Detail & Related papers (2023-08-01T14:43:10Z) - An Empirical Revisiting of Linguistic Knowledge Fusion in Language
Understanding Tasks [33.765874588342285]
Infusing language models with syntactic or semantic knowledge from structural linguistic priors has shown improvements on many language understanding tasks.
We conduct an empirical study of replacing parsed graphs or trees with trivial ones for tasks in the GLUE benchmark.
It reveals that the gains might not be significantly attributed to explicit linguistic priors but rather to more feature interactions brought by fusion layers.
arXiv Detail & Related papers (2022-10-24T07:47:32Z) - Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Neural Abstructions: Abstractions that Support Construction for Grounded
Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z) - AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages
with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z) - Bridging Linguistic Typology and Multilingual Machine Translation with
Multi-View Language Representations [83.27475281544868]
We use singular vector canonical correlation analysis to study what kind of information is induced from each source.
We observe that our representations embed typology and strengthen correlations with language relationships.
We then take advantage of our multi-view language vector space for multilingual machine translation, where we achieve competitive overall translation accuracy.
arXiv Detail & Related papers (2020-04-30T16:25:39Z) - A Practical Guide to Studying Emergent Communication through Grounded
Language Games [0.0]
This paper introduces a high-level robot interface that extends the Babel software system.
It presents for the first time a toolkit that provides flexible modules for dealing with each subtask involved in running advanced grounded language game experiments.
arXiv Detail & Related papers (2020-04-20T11:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.