Towards More Human-like AI Communication: A Review of Emergent
Communication Research
- URL: http://arxiv.org/abs/2308.02541v1
- Date: Tue, 1 Aug 2023 14:43:10 GMT
- Title: Towards More Human-like AI Communication: A Review of Emergent
Communication Research
- Authors: Nicolo' Brandizzi
- Abstract summary: Emergent communication (Emecom) is a field of research aiming to develop artificial agents capable of using natural language.
In this review, we delineate the common properties found across the literature and how they relate to human interactions.
We identify two subcategories and highlight their characteristics and open challenges.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the recent shift towards human-centric AI, the need for machines to
accurately use natural language has become increasingly important. While a
common approach to achieve this is to train large language models, this method
presents a form of learning misalignment where the model may not capture the
underlying structure and reasoning humans employ in using natural language,
potentially leading to unexpected or unreliable behavior. Emergent
communication (Emecom) is a field of research that has seen a growing number of
publications in recent years, aiming to develop artificial agents capable of
using natural language in a way that goes beyond simple discriminative tasks
and can effectively communicate and learn new concepts. In this review, we
present Emecom under two aspects. Firstly, we delineate the common
properties we find across the literature and how they relate to human
interactions. Secondly, we identify two subcategories and highlight their
characteristics and open challenges. We encourage researchers to work together
by demonstrating that different methods can be viewed as diverse solutions to a
common problem and emphasize the importance of including diverse perspectives
and expertise in the field. We believe a deeper understanding of human
communication is crucial to developing machines that can accurately use natural
language in human-machine interactions.
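Much of the Emecom literature studies such agents in simple referential (Lewis signaling) games: a sender describes a target with a discrete symbol and a receiver must identify it. The sketch below is purely illustrative; the concept set, vocabulary, and tabular "policies" are invented for this example and do not come from the review.

```python
# A minimal Lewis-style referential game: a sender observes a target
# concept and emits one discrete symbol; a receiver uses the symbol to
# pick the target out of the candidates. Reward is 1 on success.
# All names and the tabular "policies" are illustrative stand-ins.

CONCEPTS = ["red", "green", "blue"]

def play_round(target, sender_policy, receiver_policy):
    """One communication episode; returns (symbol, guess, reward)."""
    symbol = sender_policy[target]          # sender encodes the target
    guess = receiver_policy[symbol]         # receiver decodes the symbol
    reward = 1 if guess == target else 0    # communicative success
    return symbol, guess, reward

# A "converged" protocol: a bijection between concepts and symbols.
sender = {"red": 0, "green": 1, "blue": 2}
receiver = {0: "red", 1: "green", 2: "blue"}

total = sum(play_round(c, sender, receiver)[2] for c in CONCEPTS)
print(total)  # 3: every concept is communicated successfully
```

In actual Emecom work both mappings are learned (e.g. by reinforcement learning from the success signal) rather than fixed, which is what allows a shared protocol to emerge.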
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a multimodal transcript.
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Policy Learning with a Language Bottleneck [65.99843627646018]
Policy Learning with a Language Bottleneck (PLLB) is a framework enabling AI agents to generate linguistic rules.
PLLB alternates between a rule-generation step guided by language models and an update step where agents learn new policies guided by the rules.
In a two-player communication game, a maze-solving task, and two image reconstruction tasks, we show that PLLB agents not only learn more interpretable and generalizable behaviors, but can also share the learned rules with human users.
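The alternation described above can be sketched schematically. The rule generator and policy learner below are toy stand-ins (all names, signatures, and data are assumed for illustration and are not the paper's API):

```python
# Schematic sketch of a PLLB-style loop: alternate between (1) distilling
# recent behavior into linguistic rules (the language bottleneck, in the
# paper guided by a language model) and (2) updating the policy guided by
# those rules. Both steps here are toy stubs.

def generate_rules(trajectories):
    """Stub for the rule-generation step: summarize the best behavior."""
    best = max(trajectories, key=lambda t: t["reward"])
    return [f"prefer action {best['action']}"]

def update_policy(policy, rules):
    """Stub for the rule-guided update step: bias toward ruled actions."""
    for rule in rules:
        action = rule.split()[-1]
        policy[action] = policy.get(action, 0.0) + 1.0
    return policy

policy = {}
for step in range(3):  # the outer alternation loop
    trajectories = [{"action": "left", "reward": 0.2},
                    {"action": "right", "reward": 0.9}]
    rules = generate_rules(trajectories)   # step 1: language bottleneck
    policy = update_policy(policy, rules)  # step 2: learn from rules

print(policy)  # {'right': 3.0}
```

Because the bottleneck is natural language, the same rule strings that shape the policy can be read, and reused, by human users.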
arXiv Detail & Related papers (2024-05-07T08:40:21Z)
- Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z)
- Unveiling the pressures underlying language learning and use in neural networks, large language models, and humans: Lessons from emergent machine-to-machine communication [5.371337604556311]
We review three cases where mismatches between the emergent linguistic behavior of neural agents and humans were resolved.
We identify key pressures at play for language learning and emergence: communicative success, production effort, learnability, and other psycho-/sociolinguistic factors.
arXiv Detail & Related papers (2024-03-21T14:33:34Z)
- Transforming Human-Centered AI Collaboration: Redefining Embodied Agents Capabilities through Interactive Grounded Language Instructions [23.318236094953072]
Human intelligence's adaptability is remarkable, allowing us to adjust to new tasks and multi-modal environments swiftly.
The research community is actively pursuing the development of interactive "embodied agents".
These agents must possess the ability to promptly request feedback in case communication breaks down or instructions are unclear.
arXiv Detail & Related papers (2023-05-18T07:51:33Z)
- "No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy [70.45420918526926]
We present LILAC, a framework for incorporating and adapting to natural language corrections online during execution.
Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot.
We show that our corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users.
arXiv Detail & Related papers (2023-01-06T15:03:27Z)
- Collecting Interactive Multi-modal Datasets for Grounded Language Understanding [66.30648042100123]
We formalized the task of a collaborative embodied agent that uses natural language.
We developed a tool for extensive and scalable data collection.
We collected the first dataset for interactive grounded language understanding.
arXiv Detail & Related papers (2022-11-12T02:36:32Z)
- Building Human-like Communicative Intelligence: A Grounded Perspective [1.0152838128195465]
After making astounding progress in language learning, AI systems seem to be approaching a ceiling that still falls short of important aspects of human communicative capacities.
This paper suggests that the dominant cognitively-inspired AI directions, based on nativist and symbolic paradigms, lack necessary substantiation and concreteness to guide progress in modern AI.
I propose a list of concrete, implementable components for building "grounded" linguistic intelligence.
arXiv Detail & Related papers (2022-01-02T01:43:24Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
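Modeling a partner's beliefs can be illustrated with a speaker that keeps a model of each listener's lexicon and chooses the message that listener is most likely to interpret correctly. This is a toy sketch under assumed names and probabilities, not the paper's method:

```python
# Toy partner modeling: the speaker holds a belief about each listener's
# lexicon (message -> referent probabilities) and picks the message that
# maximizes that listener's chance of recovering the intended referent.
# All lexicons and names below are invented for illustration.

def choose_message(target, listener_lexicon, messages):
    """Pick the message the modeled listener most likely maps to target."""
    def p_correct(m):
        return listener_lexicon.get(m, {}).get(target, 0.0)
    return max(messages, key=p_correct)

# Two listeners with different linguistic abilities (different lexicons).
novice = {"circle": {"ball": 0.9}, "sphere": {"ball": 0.1}}
expert = {"circle": {"ball": 0.2}, "sphere": {"ball": 0.95}}

msgs = ["circle", "sphere"]
print(choose_message("ball", novice, msgs))  # circle
print(choose_message("ball", expert, msgs))  # sphere
```

The same target is expressed differently for each partner, which is the essence of coordinating with a population of heterogeneous listeners.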
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Emergent Multi-Agent Communication in the Deep Learning Era [26.764052787245728]
The ability to cooperate through language is a defining feature of humans.
As the perceptual, motor, and planning capabilities of deep artificial networks increase, researchers are studying whether they can also develop a shared language to interact.
arXiv Detail & Related papers (2020-06-03T17:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.