Self-Anchored Attention Model for Sample-Efficient Classification of Prosocial Text Chat
- URL: http://arxiv.org/abs/2506.09259v1
- Date: Tue, 10 Jun 2025 21:40:54 GMT
- Title: Self-Anchored Attention Model for Sample-Efficient Classification of Prosocial Text Chat
- Authors: Zhuofang Li, Rafal Kocielnik, Fereshteh Soltani, Penphob Boonyarungsrit, Animashree Anandkumar, R. Michael Alvarez
- Abstract summary: This research is novel in applying NLP techniques to discover and classify prosocial behaviors in player in-game chat communication. It can help shift the focus of moderation from solely penalizing toxicity to actively encouraging positive interactions on online platforms.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Millions of players engage daily in competitive online games, communicating through in-game chat. Prior research has focused on detecting relatively small volumes of toxic content using various Natural Language Processing (NLP) techniques for the purpose of moderation. However, recent studies emphasize the importance of detecting prosocial communication, which can be as crucial as identifying toxic interactions. Recognizing prosocial behavior allows for its analysis, rewarding, and promotion. Unlike toxicity, there are limited datasets, models, and resources for identifying prosocial behaviors in game-chat text. In this work, we employed unsupervised discovery combined with game domain expert collaboration to identify and categorize prosocial player behaviors from game chat. We further propose a novel Self-Anchored Attention Model (SAAM), which yields a 7.9% improvement over the best existing technique. The approach utilizes the entire training set as "anchors" to help improve model performance under the scarcity of training data. This approach led to the development of the first automated system for classifying prosocial behaviors in in-game chats, particularly given the low-resource settings where large-scale labeled data is not available. Our methodology was applied to one of the most popular online gaming titles, Call of Duty®: Modern Warfare® II, showcasing its effectiveness. This research is novel in applying NLP techniques to discover and classify prosocial behaviors in player in-game chat communication. It can help shift the focus of moderation from solely penalizing toxicity to actively encouraging positive interactions on online platforms.
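The abstract does not give implementation details for SAAM, but the core idea it states (using every training example as an "anchor" that a query attends over) can be sketched minimally. The function below is a hypothetical illustration, not the paper's method: it assumes precomputed sentence embeddings, softmax attention of a query over anchor embeddings, and a prediction formed as the attention-weighted average of one-hot anchor labels.

```python
import numpy as np

def self_anchored_predict(query_emb, anchor_embs, anchor_labels, temperature=1.0):
    """Classify a query by attending over all training 'anchors' (sketch).

    query_emb:     (d,) embedding of the chat message to classify
    anchor_embs:   (n_anchors, d) embeddings of the full training set
    anchor_labels: (n_anchors, n_classes) one-hot training labels
    Returns a class-probability vector of shape (n_classes,).
    """
    scores = anchor_embs @ query_emb / temperature   # similarity to each anchor
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return weights @ anchor_labels                   # weighted vote of anchor labels

# Toy example: two anchor clusters in a 2-D embedding space.
anchors = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
probs = self_anchored_predict(np.array([0.95, 0.05]), anchors, labels)
```

Because every prediction is a soft vote over labeled anchors, no per-class parameters need to be fit, which is one plausible reason such an approach helps when labeled data is scarce.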
Related papers
- Context-Aware Toxicity Detection in Multiplayer Games: Integrating Domain-Adaptive Pretraining and Match Metadata [0.9702021668898856]
Traditional toxicity detectors focus on isolated messages, missing the broader context needed for accurate moderation. This is especially problematic in video games, where interactions involve specialized slang, abbreviations, and typos. We adapted the RoBERTa language model to support moderation tailored to video games, integrating both textual and non-textual context.
arXiv Detail & Related papers (2025-04-02T09:21:41Z) - Reinforcement Learning for Efficient Toxicity Detection in Competitive Online Video Games [1.9201314880477047]
This article considers the problem of efficient sampling for toxicity detection in competitive online video games. We propose a contextual bandit algorithm that makes monitoring decisions based on variables associated with toxic behavior. Using data from the popular first-person action game Call of Duty: Modern Warfare III, we show that our algorithm consistently outperforms baseline algorithms.
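The summary above describes a bandit policy that decides which matches to monitor based on contextual variables. The paper's exact algorithm and features are not given here, so the class below is only an illustrative epsilon-greedy sketch over discretized context buckets (the context names are hypothetical): it tracks, per context, a running estimate of how often monitoring surfaced toxicity, and monitors contexts whose estimate is high while occasionally exploring.

```python
import random
from collections import defaultdict

class ContextualEpsilonGreedy:
    """Toy monitoring policy: per-context toxicity-yield estimates
    with epsilon-greedy exploration (illustrative sketch only)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)
        self.values = defaultdict(float)  # estimated toxicity yield per context

    def should_monitor(self, context):
        if random.random() < self.epsilon:
            return random.random() < 0.5   # explore: monitor at random
        return self.values[context] > 0.5  # exploit: monitor high-yield contexts

    def update(self, context, found_toxicity):
        # Incremental mean update of the yield estimate for this context.
        self.counts[context] += 1
        self.values[context] += (
            float(found_toxicity) - self.values[context]
        ) / self.counts[context]

# Usage: with exploration off, the policy learns which bucket pays off.
policy = ContextualEpsilonGreedy(epsilon=0.0)
for _ in range(10):
    policy.update("late_night", True)   # hypothetical high-toxicity context
    policy.update("morning", False)     # hypothetical low-toxicity context
```

A real deployment would use richer contexts (match metadata, player history) and a principled bandit such as LinUCB or Thompson sampling rather than fixed buckets.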
arXiv Detail & Related papers (2025-03-26T20:13:30Z) - Training Language Models for Social Deduction with Multi-Agent Reinforcement Learning [31.196865401472664]
We train language models to have productive discussions about their environment in natural language without any human demonstrations. We leverage the agent's goal to predict useful information about the world as a dense reward signal that guides communication. We analyze emergent behaviors due to our technique, such as accusing suspects and providing evidence, and find that it enables strong discussions.
arXiv Detail & Related papers (2025-02-09T22:44:45Z) - Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z) - Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks [0.0]
The study employs and evaluates the performance of pre-trained BERT and GPT language models in detecting toxicity within in-game chats.
The study was able to collect around two thousand in-game chats to train and test BERT (Base-uncased), BERT (Large-uncased), and GPT-3 models.
arXiv Detail & Related papers (2024-03-19T11:36:53Z) - Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experiments online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z) - Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values.
arXiv Detail & Related papers (2022-05-04T09:54:33Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater users' responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study focuses on providing a novel learning mechanism based on a rivalry social impact.
Based on the concept of competitive rivalry, our analysis aims to investigate if we can change the assessment of these agents from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z) - Recognizing Affiliation: Using Behavioural Traces to Predict the Quality of Social Interactions in Online Games [26.131859388185646]
We use behavioural traces to predict affiliation between dyadic strangers, facilitated through their social interactions in an online gaming setting.
We collected audio, video, in-game, and self-report data from 23 dyads, extracted 75 features, trained Random Forest and Support Vector Machine models, and evaluated their performance predicting binary (high/low) as well as continuous affiliation toward a partner.
Our findings can inform the design of multiplayer games and game communities, and guide the development of systems for matchmaking and mitigating toxic behaviour in online games.
arXiv Detail & Related papers (2020-03-06T20:56:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.