Human Resilience in the AI Era -- What Machines Can't Replace
- URL: http://arxiv.org/abs/2510.25218v1
- Date: Wed, 29 Oct 2025 06:48:19 GMT
- Title: Human Resilience in the AI Era -- What Machines Can't Replace
- Authors: Shaoshan Liu, Anina Schwarzenbach, Yiyu Shi
- Abstract summary: We argue that the decisive human countermeasure is resilience. We show that resilience can be cultivated through training that complements rather than substitutes for structural safeguards.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI is displacing tasks, mediating high-stakes decisions, and flooding communication with synthetic content, unsettling work, identity, and social trust. We argue that the decisive human countermeasure is resilience. We define resilience across three layers: psychological, including emotion regulation, meaning-making, cognitive flexibility; social, including trust, social capital, coordinated response; organizational, including psychological safety, feedback mechanisms, and graceful degradation. We synthesize early evidence that these capacities buffer individual strain, reduce burnout through social support, and lower silent failure in AI-mediated workflows through team norms and risk-responsive governance. We also show that resilience can be cultivated through training that complements rather than substitutes for structural safeguards. By reframing the AI debate around actionable human resilience, this article offers policymakers, educators, and operators a practical lens to preserve human agency and steer responsible adoption.
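The three-layer resilience taxonomy from the abstract can be modeled as a simple data structure. This is an illustrative sketch only; the class and function names are hypothetical and do not come from the paper.

```python
# Illustrative sketch of the paper's three-layer resilience taxonomy.
# Layer and capacity names come from the abstract; the code structure
# (ResilienceLayer, capacities_for) is a hypothetical illustration.
from dataclasses import dataclass


@dataclass
class ResilienceLayer:
    name: str
    capacities: list


RESILIENCE_LAYERS = [
    ResilienceLayer("psychological",
                    ["emotion regulation", "meaning-making", "cognitive flexibility"]),
    ResilienceLayer("social",
                    ["trust", "social capital", "coordinated response"]),
    ResilienceLayer("organizational",
                    ["psychological safety", "feedback mechanisms", "graceful degradation"]),
]


def capacities_for(layer_name: str) -> list:
    """Return the capacities the abstract lists under a given layer."""
    for layer in RESILIENCE_LAYERS:
        if layer.name == layer_name:
            return layer.capacities
    raise KeyError(layer_name)
```

For example, `capacities_for("social")` returns the three social-layer capacities named in the abstract.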
Related papers
- Technosocial risks of ideal emotion recognition technologies: A defense of the (social) value of emotional expressions [51.56484100374058]
I argue that the appeal of such systems rests on a misunderstanding of the social functions of emotional expression. ERTs threaten this expressive space by collapsing epistemic friction, displacing meaning with technology-mediated affective profiles, and narrowing the space for aspirational and role-sensitive expressions. I argue that, although it is intuitive to think that increasing accuracy would legitimise such systems, in the case of ERTs accuracy does not straightforwardly justify their deployment, and may, in some contexts, provide a reason for regulatory restraint.
arXiv Detail & Related papers (2026-02-09T14:20:42Z) - Conformity and Social Impact on AI Agents [42.04722694386303]
This study examines conformity, the tendency to align with group opinions under social pressure, in large multimodal language models functioning as AI agents. Our experiments reveal that AI agents exhibit a systematic conformity bias, aligned with Social Impact Theory, showing sensitivity to group size, unanimity, task difficulty, and source characteristics. These findings reveal fundamental security vulnerabilities in AI agent decision-making that could enable malicious manipulation, misinformation campaigns, and bias propagation in multi-agent systems.
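The group-size sensitivity this abstract attributes to Social Impact Theory is commonly formalized as Latané's psychosocial law, where impact grows with the number of sources but with diminishing returns (impact proportional to N^t with t < 1). A minimal sketch, assuming the standard formulation; the exponent t = 0.5 is a conventional illustrative value, not a parameter from the paper.

```python
def social_impact(strength: float, immediacy: float,
                  n_sources: int, t: float = 0.5) -> float:
    # Latané's psychosocial law: impact = strength * immediacy * N^t.
    # With t < 1, each additional source adds less marginal pressure,
    # matching the group-size sensitivity the abstract reports.
    return strength * immediacy * (n_sources ** t)
```

With t = 0.5, quadrupling the group size only doubles the impact, illustrating why conformity pressure grows sublinearly with group size.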
arXiv Detail & Related papers (2026-01-08T21:16:28Z) - Feeling Machines: Ethics, Culture, and the Rise of Emotional AI [18.212492056071657]
This paper explores the growing presence of emotionally responsive artificial intelligence through a critical and interdisciplinary lens. It explores how AI systems that simulate or interpret human emotions are reshaping our interactions in areas such as education, healthcare, mental health, caregiving, and digital life. The analysis is structured around four central themes: the ethical implications of emotional AI, the cultural dynamics of human-machine interaction, the risks and opportunities for vulnerable populations, and the emerging regulatory, design, and technical considerations.
arXiv Detail & Related papers (2025-06-14T10:28:26Z) - Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face key limitations: their self-improvement processes are often rigid, fail to generalize across task domains, and struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z) - Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [78.61382193420914]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z) - The Limits of AI in Financial Services [0.0]
AI is transforming industries, raising concerns about job displacement and decision-making reliability. The EPOCH framework highlights five irreplaceable human capabilities: Empathy, Presence, Opinion, Creativity, and Hope. The challenge is ensuring professionals adapt, leveraging AI's strengths while preserving essential human capabilities.
arXiv Detail & Related papers (2025-03-27T23:04:11Z) - Why human-AI relationships need socioaffective alignment [16.283971225367537]
Humans strive to design safe AI systems that align with our goals and remain under our control. As AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems.
arXiv Detail & Related papers (2025-02-04T17:50:08Z) - Emergence of human-like polarization among large language model agents [79.96817421756668]
We simulate a networked system involving thousands of large language model agents and find that their social interactions result in human-like polarization. These similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but they also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate polarization and its consequences.
arXiv Detail & Related papers (2025-01-09T11:45:05Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Enhancing Mental Health Support through Human-AI Collaboration: Toward Secure and Empathetic AI-enabled chatbots [0.0]
This paper explores the potential of AI-enabled chatbots as a scalable solution.
We assess their ability to deliver empathetic, meaningful responses in mental health contexts.
We propose a federated learning framework that ensures data privacy, reduces bias, and integrates continuous validation from clinicians to enhance response quality.
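The privacy property of the federated learning approach this abstract proposes comes from averaging client model updates rather than pooling raw data. A minimal federated-averaging (FedAvg-style) sketch on a toy one-parameter model; all function names, the learning rate, and the toy data are hypothetical illustrations, not details from the paper.

```python
# Minimal federated-averaging sketch on a 1-D least-squares toy model
# (fit y = w*x). Raw client data never leaves local_update; only the
# updated weight is shared and averaged, which is the privacy-relevant
# property the abstract's proposal relies on.

def local_update(w: float, data: list, lr: float = 0.1) -> float:
    # One gradient-descent step on loss = mean((w*x - y)^2),
    # whose gradient is mean(2*x*(w*x - y)).
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad


def fed_avg(global_w: float, client_datasets: list, rounds: int = 50) -> float:
    for _ in range(rounds):
        # Each client trains locally on its own data...
        local_weights = [local_update(global_w, d) for d in client_datasets]
        # ...and only the weights are averaged into the global model.
        global_w = sum(local_weights) / len(local_weights)
    return global_w
```

On clients whose data follows y = 2x, the averaged global weight converges toward 2 without any client ever transmitting its examples.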
arXiv Detail & Related papers (2024-09-17T20:49:13Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
Violet teaming emerged from AI safety research to manage risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z) - Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems [41.71459547415086]
We analyze how resilience is achieved in networks of agents and multi-robot systems.
We argue that resilience must become a central engineering design consideration.
arXiv Detail & Related papers (2021-09-25T11:25:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.