Multi-Agent Reinforcement Learning as a Computational Tool for Language
Evolution Research: Historical Context and Future Challenges
- URL: http://arxiv.org/abs/2002.08878v2
- Date: Tue, 27 Oct 2020 13:54:46 GMT
- Title: Multi-Agent Reinforcement Learning as a Computational Tool for Language
Evolution Research: Historical Context and Future Challenges
- Authors: Clément Moulin-Frier and Pierre-Yves Oudeyer
- Abstract summary: Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL).
The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, as well as to extract from this theoretical and computational background a few challenges for future research.
- Score: 21.021451344428716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational models of emergent communication in agent populations are
currently gaining interest in the machine learning community due to recent
advances in Multi-Agent Reinforcement Learning (MARL). Current contributions
are however still relatively disconnected from the earlier theoretical and
computational literature aiming at understanding how language might have
emerged from a prelinguistic substance. The goal of this paper is to position
recent MARL contributions within the historical context of language evolution
research, as well as to extract from this theoretical and computational
background a few challenges for future research.
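The paper itself is a positioning and survey piece and does not present code, but the class of MARL setups it discusses can be made concrete with a small example. The sketch below is an illustrative assumption rather than anything from the paper: two tabular agents play a Lewis signaling (referential) game and are trained with REINFORCE, a simple policy-gradient method commonly used in the emergent-communication literature. The state and signal space sizes, learning rate, and episode count are arbitrary choices.

```python
# Minimal sketch (not from the paper): a Lewis signaling game trained with
# REINFORCE, illustrating the kind of emergent-communication setup that MARL
# work on language emergence typically studies. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_SIGNALS, LR, EPISODES = 5, 5, 0.1, 20000

# Tabular policy logits: the sender maps states to signals,
# the receiver maps signals back to guessed states.
sender_logits = np.zeros((N_STATES, N_SIGNALS))
receiver_logits = np.zeros((N_SIGNALS, N_STATES))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(EPISODES):
    state = rng.integers(N_STATES)                 # world state seen only by the sender
    p_s = softmax(sender_logits[state])
    signal = rng.choice(N_SIGNALS, p=p_s)          # sender emits a signal
    p_r = softmax(receiver_logits[signal])
    action = rng.choice(N_STATES, p=p_r)           # receiver guesses the state
    reward = 1.0 if action == state else 0.0       # shared reward for coordination

    # REINFORCE updates for both agents: reward times the gradient of the
    # log-probability of the sampled choice (one-hot minus policy).
    grad_s = -p_s
    grad_s[signal] += 1.0
    grad_r = -p_r
    grad_r[action] += 1.0
    sender_logits[state] += LR * reward * grad_s
    receiver_logits[signal] += LR * reward * grad_r

# Greedy evaluation: does the receiver recover the state from the sender's signal?
accuracy = np.mean([
    receiver_logits[sender_logits[s].argmax()].argmax() == s
    for s in range(N_STATES)
])
print(f"greedy communication accuracy: {accuracy:.2f}")
```

Run end to end, the two agents typically converge on an arbitrary but shared mapping from states to signals; this coordination on a convention is the minimal sense in which a "language" emerges in such models.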
Related papers
- From Babbling to Fluency: Evaluating the Evolution of Language Models in Terms of Human Language Acquisition [6.617999710257379]
We propose a three-stage framework to assess the abilities of LMs.
We evaluate the generative capacities of LMs using methods from linguistic research.
arXiv Detail & Related papers (2024-10-17T06:31:49Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has become the mainstream approach to enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- A Survey on Emergent Language [9.823821010022932]
The paper provides a comprehensive review of 181 scientific publications on emergent language in artificial intelligence.
Its objective is to serve as a reference for researchers interested in or proficient in the field.
arXiv Detail & Related papers (2024-09-04T12:22:05Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing literature from various domains of ML with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- Traitement quantique des langues : état de l'art [0.0]
This article presents a review of quantum computing research for Natural Language Processing (NLP).
The goal of these works is to improve the performance of current models and to provide a better representation of several linguistic phenomena.
Several families of approaches are presented, including symbolic diagrammatic approaches and hybrid neural networks.
arXiv Detail & Related papers (2024-04-09T08:05:15Z)
- Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers [81.47046536073682]
We present a review and provide a unified perspective to summarize recent progress and emerging trends in the multilingual large language model (MLLM) literature.
We hope our work provides the community with quick access to this literature and spurs breakthrough research in MLLMs.
arXiv Detail & Related papers (2024-04-07T11:52:44Z)
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z)
- A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation over the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To mark the difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
- ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering [70.6359636116848]
We propose a new large-scale dataset, ConvFinQA, to study the chain of numerical reasoning in conversational question answering.
Our dataset poses a great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations.
arXiv Detail & Related papers (2022-10-07T23:48:50Z)
- Identifying Causal Influences on Publication Trends and Behavior: A Case Study of the Computational Linguistics Community [10.791197825505755]
We present mixed-method analyses to investigate causal influences on publication trends and behavior.
Key findings highlight the transition to rapidly emerging methodologies in the research community.
We anticipate that this work will provide useful insights into publication trends and behavior.
arXiv Detail & Related papers (2021-10-15T08:36:13Z)