Analyzing Team Performance with Embeddings from Multiparty Dialogues
- URL: http://arxiv.org/abs/2101.09421v1
- Date: Sat, 23 Jan 2021 05:18:12 GMT
- Title: Analyzing Team Performance with Embeddings from Multiparty Dialogues
- Authors: Ayesha Enayet and Gita Sukthankar
- Abstract summary: This paper examines the problem of predicting team performance from embeddings learned from multiparty dialogues.
Unlike syntactic entrainment, both dialogue act and sentiment embeddings are effective for classifying team performance, even during the initial phase.
- Score: 1.8275108630751844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Good communication is indubitably the foundation of effective teamwork. Over
time teams develop their own communication styles and often exhibit
entrainment, a conversational phenomenon in which humans synchronize their
linguistic choices. This paper examines the problem of predicting team
performance from embeddings learned from multiparty dialogues such that teams
with similar conflict scores lie close to one another in vector space.
Embeddings were extracted from three types of features: 1) dialogue acts, 2)
sentiment polarity, and 3) syntactic entrainment. Although all of these features can
be used to effectively predict team performance, their utility varies by the
teamwork phase. We separate the dialogues of players playing a cooperative game
into three stages: 1) early (knowledge building), 2) middle (problem-solving), and 3)
late (culmination). Unlike syntactic entrainment, both dialogue act and
sentiment embeddings are effective for classifying team performance, even
during the initial phase. This finding has potential ramifications for the
development of conversational agents that facilitate teaming.
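To make the general approach concrete, here is a minimal sketch (not the authors' actual pipeline): dialogue-act frequencies and mean sentiment polarity for one teamwork phase are combined into a feature vector and compared against labeled centroids in vector space. The act inventory, sentiment lexicon, and centroid values below are all hypothetical, for illustration only.

```python
# Illustrative sketch (not the authors' pipeline): represent one team's
# dialogue phase as a feature vector of dialogue-act frequencies plus mean
# sentiment polarity, then classify performance by the nearest centroid.
from collections import Counter

# Hypothetical dialogue-act inventory; the paper's actual tag set may differ.
ACTS = ["statement", "question", "agree", "disagree"]

# Tiny hypothetical sentiment lexicon, for illustration only.
POLARITY = {"great": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def embed_phase(utterances):
    """Map a list of (dialogue_act, text) pairs to a feature vector:
    normalized act frequencies followed by mean sentiment polarity."""
    counts = Counter(act for act, _ in utterances)
    n = max(len(utterances), 1)
    act_freqs = [counts[a] / n for a in ACTS]
    scores = [POLARITY.get(w, 0.0)
              for _, text in utterances for w in text.lower().split()]
    mean_sent = sum(scores) / len(scores) if scores else 0.0
    return act_freqs + [mean_sent]

def nearest_centroid(vec, centroids):
    """Return the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(vec, centroids[label]))

# Toy usage: two made-up centroids standing in for low/high-conflict teams.
centroids = {
    "low_conflict": [0.5, 0.2, 0.3, 0.0, 0.4],
    "high_conflict": [0.4, 0.1, 0.1, 0.4, -0.5],
}
early_phase = [("statement", "this plan is great"), ("agree", "good idea")]
print(nearest_centroid(embed_phase(early_phase), centroids))  # → low_conflict
```

The nearest-centroid step stands in for whatever classifier is trained over the learned embeddings; the point is that phase-level act and sentiment statistics already yield a vector in which similar-performing teams can lie close together.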
Related papers
- Modeling Communication Perception in Development Teams Using Monte Carlo Methods [1.8369669715149237]
Mood surveys enable the early detection of underlying tensions or dissatisfaction within development teams.
This paper analyzes the diversity of perceptions within arbitrary development teams.
We present a preliminary mathematical model to calculate the minimum agreement among a subset of developers.
arXiv Detail & Related papers (2025-04-24T14:35:18Z)
- ML-SPEAK: A Theory-Guided Machine Learning Method for Studying and Predicting Conversational Turn-taking Patterns [25.049072387358244]
We develop a computational model of conversational turn-taking within self-organized teams.
By bridging the gap between individual personality traits and team communication patterns, our model has the potential to inform theories of team processes.
arXiv Detail & Related papers (2024-11-23T01:27:01Z)
- GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment [72.96949760114575]
We propose a novel cooperative communication framework, Goal-Oriented Mental Alignment (GOMA).
GOMA formulates verbal communication as a planning problem that minimizes the misalignment between parts of agents' mental states that are relevant to the goals.
We evaluate our approach against strong baselines in two challenging environments, Overcooked (a multiplayer game) and VirtualHome (a household simulator).
arXiv Detail & Related papers (2024-03-17T03:52:52Z)
- Informational Diversity and Affinity Bias in Team Growth Dynamics [6.729250803621849]
We show that the benefits of informational diversity are in tension with affinity bias.
Our results formalize a fundamental limitation of utility-based motivations to drive informational diversity.
arXiv Detail & Related papers (2023-01-28T05:02:40Z)
- Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria [57.74495091445414]
Social deduction games offer an avenue to study how individuals might learn to synthesize potentially unreliable information about others.
In this work, we present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment.
Reinforcement learning agents trained in Hidden Agenda show that agents can learn a variety of behaviors, including partnering and voting, without the need for communication in natural language.
arXiv Detail & Related papers (2022-01-05T20:54:10Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z)
- My Team Will Go On: Differentiating High and Low Viability Teams through Team Interaction [17.729317295204368]
We train a viability classification model over a dataset of 669 10-minute text conversations of online teams.
We find that a lasso regression model achieves an AUC ROC of .74--.92 under different thresholds for binarizing viability scores.
arXiv Detail & Related papers (2020-10-14T21:33:36Z)
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can make dialogue agents refrain from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
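Several entries above report AUC ROC as their evaluation metric. As a quick, self-contained reminder of what that number means (a generic sketch, independent of any paper's implementation), AUC can be computed directly as the probability that a randomly chosen positive example is scored above a randomly chosen negative one:

```python
def auc_roc(scores, labels):
    """AUC ROC as a rank statistic: the fraction of (positive, negative)
    pairs where the positive example receives the higher score (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives 1.0; one misranked pair out of four gives 0.75.
print(auc_roc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # → 1.0
print(auc_roc([0.9, 0.2, 0.3, 0.1], [1, 1, 0, 0]))  # → 0.75
```

This pairwise formulation is equivalent to the area under the ROC curve and makes clear why a range like .74--.92 describes ranking quality rather than raw accuracy.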
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.