Multi-Agent Reinforcement Learning as a Computational Tool for Language
Evolution Research: Historical Context and Future Challenges
- URL: http://arxiv.org/abs/2002.08878v2
- Date: Tue, 27 Oct 2020 13:54:46 GMT
- Title: Multi-Agent Reinforcement Learning as a Computational Tool for Language
Evolution Research: Historical Context and Future Challenges
- Authors: Clément Moulin-Frier and Pierre-Yves Oudeyer
- Abstract summary: Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL).
The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, as well as to extract from this theoretical and computational background a few challenges for future research.
- Score: 21.021451344428716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational models of emergent communication in agent populations are
currently gaining interest in the machine learning community due to recent
advances in Multi-Agent Reinforcement Learning (MARL). Current contributions
are however still relatively disconnected from the earlier theoretical and
computational literature aiming at understanding how language might have
emerged from a prelinguistic substance. The goal of this paper is to position
recent MARL contributions within the historical context of language evolution
research, as well as to extract from this theoretical and computational
background a few challenges for future research.
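The earlier theoretical literature the paper connects MARL to includes classic computational models of emergent communication, such as the Lewis signaling game learned by simple reinforcement. The sketch below is an illustrative assumption, not code from the paper: two agents with Roth-Erev-style cumulative-reward learning converge toward a shared signaling convention.

```python
import random

random.seed(0)

N = 3  # number of world states, signals, and actions

# Roth-Erev style weights: each past success adds to the propensity of
# reusing the same state->signal and signal->action mappings.
sender = [[1.0] * N for _ in range(N)]    # sender[state][signal]
receiver = [[1.0] * N for _ in range(N)]  # receiver[signal][action]

def sample(weights):
    """Sample an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def play_round():
    """One signaling episode; both agents are reinforced only on success."""
    state = random.randrange(N)
    signal = sample(sender[state])
    action = sample(receiver[signal])
    if action == state:  # communication succeeded
        sender[state][signal] += 1.0
        receiver[signal][action] += 1.0
    return action == state

for _ in range(20000):
    play_round()

# After training, the success rate should be well above chance (1/N).
successes = sum(play_round() for _ in range(1000))
print(f"success rate after training: {successes / 1000:.2f}")
```

Modern MARL variants of this setup replace the tabular weights with neural policies trained by policy gradients, but the underlying game structure is the same.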
Related papers
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across ML domains under consistent notation, which the current literature lacks.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - Traitement quantique des langues : état de l'art [0.0]
This article presents a review of quantum computing research for Natural Language Processing (NLP).
Their goal is to improve the performance of current models, and to provide a better representation of several linguistic phenomena.
Several families of approaches are presented, including symbolic diagrammatic approaches, and hybrid neural networks.
arXiv Detail & Related papers (2024-04-09T08:05:15Z) - Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers [81.47046536073682]
We present a review and provide a unified perspective to summarize the recent progress as well as emerging trends in multilingual large language models (MLLMs) literature.
We hope our work can provide the community with quick access and spur breakthrough research in MLLMs.
arXiv Detail & Related papers (2024-04-07T11:52:44Z) - Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - Natural Language Processing for Dialects of a Language: A Survey [56.93337350526933]
State-of-the-art natural language processing (NLP) models are trained on massive corpora and report superlative performance on evaluation datasets.
This survey delves into an important attribute of these datasets: the dialect of a language.
Motivated by the performance degradation of NLP models on dialectal datasets and its implications for the equity of language technologies, we survey past NLP research on dialects in terms of datasets and approaches.
arXiv Detail & Related papers (2024-01-11T03:04:38Z) - A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To discriminate the difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z) - ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational
Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses a great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations.
arXiv Detail & Related papers (2022-10-07T23:48:50Z) - Identifying Causal Influences on Publication Trends and Behavior: A Case
Study of the Computational Linguistics Community [10.791197825505755]
We present mixed-method analyses to investigate causal influences of publication trends and behavior.
Key findings highlight the transition to rapidly emerging methodologies in the research community.
We anticipate that this work will provide useful insights into publication trends and behavior.
arXiv Detail & Related papers (2021-10-15T08:36:13Z) - An Interpretable Graph-based Mapping of Trustworthy Machine Learning
Research [3.222802562733787]
We build a co-occurrence network of words using a web-scraped corpus of more than 7,000 peer-reviewed recent ML papers.
We use community detection to obtain semantic clusters of words in this network that can infer relative positions of TwML topics.
arXiv Detail & Related papers (2021-05-13T23:25:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.