Technosocial risks of ideal emotion recognition technologies: A defense of the (social) value of emotional expressions
- URL: http://arxiv.org/abs/2602.08706v1
- Date: Mon, 09 Feb 2026 14:20:42 GMT
- Title: Technosocial risks of ideal emotion recognition technologies: A defense of the (social) value of emotional expressions
- Authors: Alexandra Pregent
- Abstract summary: I argue that the appeal of such systems rests on a misunderstanding of the social functions of emotional expression. ERTs threaten this expressive space by collapsing epistemic friction, displacing meaning with technology-mediated affective profiles, and narrowing the space for aspirational and role-sensitive expressions. I argue that, although it is intuitive to think that increasing accuracy would legitimise such systems, in the case of ERTs accuracy does not straightforwardly justify their deployment, and may, in some contexts, provide a reason for regulatory restraint.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The prospect of AI systems that I call ideal emotion recognition technologies (ERTs) is often defended on the assumption that social life would benefit from increased affective transparency. This paper challenges that assumption by examining the technosocial risks posed by ideal ERTs, understood as multimodal systems capable of reliably inferring inner affective states in real time. Drawing on philosophical accounts of emotional expression and social practice, as well as empirical work in affective science and social psychology, I argue that the appeal of such systems rests on a misunderstanding of the social functions of emotional expression. Emotional expressions function not only as read-outs of inner states, but also as tools for coordinating action, enabling moral repair, sustaining interpersonal trust, and supporting collective norms. These functions depend on a background of partial opacity and epistemic friction. When deployed in socially authoritative or evaluative contexts, ideal ERTs threaten this expressive space by collapsing epistemic friction, displacing relational meaning with technology-mediated affective profiles, and narrowing the space for aspirational and role-sensitive expressions. The result is a drift towards affective determinism and ambient forms of affective auditing, which undermine both social cohesion and individual agency. I argue that, although it is intuitive to think that increasing accuracy would legitimise such systems, in the case of ERTs accuracy does not straightforwardly justify their deployment, and may, in some contexts, provide a reason for regulatory restraint. I conclude by defending a function-first regulatory approach that treats expressive discretion and intentional emotional expression as constitutive of certain social goods, and that accordingly seeks to protect these goods from excessive affective legibility.
Related papers
- ADEPT: RL-Aligned Agentic Decoding of Emotion via Evidence Probing Tools -- From Consensus Learning to Ambiguity-Driven Emotion Reasoning [67.22219034602514]
We introduce ADEPT (Agentic Decoding of Emotion via Evidence Probing Tools), a framework that reframes emotion recognition as a multi-turn inquiry process. ADEPT transforms an SLLM into an agent that maintains an evolving candidate emotion set and adaptively invokes dedicated semantic and acoustic probing tools. We show that ADEPT improves primary emotion accuracy in most settings while substantially improving minor emotion characterization.
arXiv Detail & Related papers (2026-02-13T08:33:37Z) - Towards Emotionally Intelligent and Responsible Reinforcement Learning [0.40719854602160227]
We propose a Responsible Reinforcement Learning framework that integrates emotional and contextual understanding with ethical considerations. We introduce a multi-objective reward function that balances short-term behavioral engagement with long-term user well-being. We discuss the implications of this approach for human-centric domains such as behavioral health, education, and digital therapeutics.
arXiv Detail & Related papers (2025-11-13T18:09:37Z) - Emotion-Coherent Reasoning for Multimodal LLMs via Emotional Rationale Verifier [53.55996102181836]
We propose the Emotional Rationale Verifier (ERV) and an Explanation Reward. Our method guides the model to produce reasoning that is explicitly consistent with the target emotion. We show that our approach not only enhances alignment between explanation and prediction but also empowers MLLMs to deliver emotionally coherent, trustworthy interactions.
arXiv Detail & Related papers (2025-10-27T16:40:17Z) - Wrong Face, Wrong Move: The Social Dynamics of Emotion Misperception in Agent-Based Models [2.0221069271989305]
The ability of humans to detect and respond to others' emotions is fundamental to understanding social behavior. Here, agents are instantiated with emotion classifiers of varying accuracy to study the impact of perceptual accuracy on emergent emotional and spatial behavior. Results show that low-accuracy classifiers reliably result in diminished trust, emotional disintegration into sadness, and disordered social organization.
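A rough sketch of the kind of agent-based setup this abstract describes: agents exchange emotional expressions through a noisy classifier, and pairwise trust erodes with misreadings. The agent names, trust-update rule, and accuracy values are invented for illustration and are not the paper's actual model.

```python
import random

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def noisy_classify(true_emotion, accuracy, rng):
    """Return the true emotion with probability `accuracy`,
    otherwise a uniformly random wrong label."""
    if rng.random() < accuracy:
        return true_emotion
    return rng.choice([e for e in EMOTIONS if e != true_emotion])

def run_interactions(accuracy, steps=2000, seed=0):
    """Two agents interact repeatedly; trust rises on correct
    readings and falls on misreadings. Returns final trust in [0, 1]."""
    rng = random.Random(seed)
    trust = 0.5
    for _ in range(steps):
        shown = rng.choice(EMOTIONS)
        perceived = noisy_classify(shown, accuracy, rng)
        trust += 0.01 if perceived == shown else -0.01
        trust = min(1.0, max(0.0, trust))
    return trust

# In this toy model, low perceptual accuracy drives trust downward,
# echoing the qualitative finding reported in the abstract.
low = run_interactions(accuracy=0.4)
high = run_interactions(accuracy=0.95)
```

Even this minimal rule reproduces the qualitative pattern: below-chance-of-correct perception makes the expected trust update negative, so trust collapses over repeated interactions.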
arXiv Detail & Related papers (2025-08-26T22:42:46Z) - Feeling Machines: Ethics, Culture, and the Rise of Emotional AI [18.212492056071657]
This paper explores the growing presence of emotionally responsive artificial intelligence through a critical and interdisciplinary lens. It explores how AI systems that simulate or interpret human emotions are reshaping our interactions in areas such as education, healthcare, mental health, caregiving, and digital life. The analysis is structured around four central themes: the ethical implications of emotional AI, the cultural dynamics of human-machine interaction, the risks and opportunities for vulnerable populations, and the emerging regulatory, design, and technical considerations.
arXiv Detail & Related papers (2025-06-14T10:28:26Z) - Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models [75.85319609088354]
Sentient Agent as a Judge (SAGE) is an evaluation framework for large language models. SAGE instantiates a Sentient Agent that simulates human-like emotional changes and inner thoughts during interaction. SAGE provides a principled, scalable and interpretable tool for tracking progress toward genuinely empathetic and socially adept language agents.
arXiv Detail & Related papers (2025-05-01T19:06:10Z) - Emotions in Artificial Intelligence [0.0]
It is proposed that affect be interwoven with episodic memory by storing corresponding affective tags alongside all events. This allows AIs to establish whether present situations resemble past events and project the associated emotional labels onto the current context. The combined emotional state facilitates decision-making in the present by modulating action selection.
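The tagging-and-projection idea can be sketched as a similarity lookup over tagged episodes. This is a minimal illustration assuming cosine similarity over event embeddings and scalar valence tags; the class and method names are hypothetical, not from the paper.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class AffectiveMemory:
    """Episodic store in which every event embedding carries an
    affective tag (here, a valence score in [-1, 1])."""
    def __init__(self):
        self.events = []  # list of (embedding, affect_tag)

    def store(self, embedding, affect_tag):
        self.events.append((embedding, affect_tag))

    def project_affect(self, current, k=3):
        """Average the affect tags of the k past events most similar
        to the current context, projecting their emotional labels
        onto the present situation."""
        if not self.events:
            return 0.0
        ranked = sorted(self.events,
                        key=lambda ev: cosine(ev[0], current),
                        reverse=True)
        top = ranked[:k]
        return sum(tag for _, tag in top) / len(top)

mem = AffectiveMemory()
mem.store([1.0, 0.0], +0.8)   # pleasant past event
mem.store([0.9, 0.1], +0.6)   # similar pleasant event
mem.store([0.0, 1.0], -0.7)   # unpleasant, dissimilar event
score = mem.project_affect([0.95, 0.05], k=2)  # context resembles the pleasant events
```

A context resembling the pleasant episodes inherits a positive projected valence, which could then bias action selection as the abstract suggests.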
arXiv Detail & Related papers (2025-05-01T17:37:14Z) - From Rational Answers to Emotional Resonance: The Role of Controllable Emotion Generation in Language Models [16.350658746140788]
Large language models (LLMs) struggle to express emotions in a consistent, controllable, and contextually appropriate manner. We propose a controllable emotion generation framework based on Emotion Vectors (EVs). Our method enables fine-grained, continuous modulation of emotional tone without any additional training or architectural modification.
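The mechanism resembles activation steering: add a scaled emotion-direction vector to a hidden state, with the scale acting as a continuous intensity knob. A minimal numeric sketch follows; the vector values, names, and scaling are invented for illustration and are not the paper's actual EVs.

```python
def steer(hidden, emotion_vector, alpha):
    """Shift a hidden-state vector toward an emotional direction.
    `alpha` controls the strength of the modulation continuously,
    requiring no retraining or architectural change."""
    return [h + alpha * e for h, e in zip(hidden, emotion_vector)]

hidden = [0.2, -0.1, 0.4]       # hypothetical hidden state
joy_vector = [0.5, 0.3, -0.2]   # hypothetical "joy" direction

mild = steer(hidden, joy_vector, alpha=0.1)    # slight tonal shift
strong = steer(hidden, joy_vector, alpha=1.0)  # pronounced shift
```

Because the offset is purely additive, intermediate `alpha` values interpolate smoothly between the unsteered and fully steered states, which is what makes the modulation continuous.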
arXiv Detail & Related papers (2025-02-06T13:38:57Z) - Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z) - Knowledge Bridging for Empathetic Dialogue Generation [52.39868458154947]
Lack of external knowledge makes empathetic dialogue systems difficult to perceive implicit emotions and learn emotional interactions from limited dialogue history.
We propose to leverage external knowledge, including commonsense knowledge and emotional lexical knowledge, to explicitly understand and express emotions in empathetic dialogue generation.
arXiv Detail & Related papers (2020-09-21T09:21:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.