AI's Social Forcefield: Reshaping Distributed Cognition in Human-AI Teams
- URL: http://arxiv.org/abs/2407.17489v2
- Date: Thu, 30 Oct 2025 15:09:59 GMT
- Title: AI's Social Forcefield: Reshaping Distributed Cognition in Human-AI Teams
- Authors: Christoph Riedl, Saiph Savage, Josie Zvelebilova,
- Abstract summary: We show that AI actively reshapes the social and cognitive fabric of collaboration. We show that AI participation reorganizes the distributed cognitive architecture of teams. We argue for rethinking AI in teams as a socially influential actor.
- Score: 6.386909552513031
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI is not only a neutral tool in team settings; it actively reshapes the social and cognitive fabric of collaboration. We advance a unified framework of alignment in distributed cognition in human-AI teams -- a process through which linguistic, cognitive, and social coordination emerge as human and AI agents co-construct a shared representational space. Across two studies, we show that exposure to AI-generated language shapes not only how people speak, but also how they think, what they attend to, and how they relate to each other. Together, these findings reveal how AI participation reorganizes the distributed cognitive architecture of teams: AI systems function as implicit social forcefields. Our findings highlight the double-edged impact of AI: the same mechanisms that enable efficient collaboration can also erode epistemic diversity and undermine natural alignment processes. We argue for rethinking AI in teams as a socially influential actor and call for new design paradigms that foreground transparency, controllability, and group-level dynamics to foster responsible, productive human-AI collaboration.
Related papers
- Bridging Minds and Machines: Toward an Integration of AI and Cognitive Science [48.38628297686686]
Cognitive Science has profoundly shaped disciplines such as Artificial Intelligence (AI), Philosophy, Psychology, Neuroscience, Linguistics, and Culture. Many breakthroughs in AI trace their roots to cognitive theories, while AI itself has become an indispensable tool for advancing cognitive research. We argue that the future of AI within Cognitive Science lies not only in improving performance but also in constructing systems that deepen our understanding of the human mind.
arXiv Detail & Related papers (2025-08-28T11:26:17Z) - From Passive Tool to Socio-cognitive Teammate: A Conceptual Framework for Agentic AI in Human-AI Collaborative Learning [0.0]
We present a novel conceptual framework that charts the transition from AI as a tool to AI as a collaborative partner. We examine whether an AI, lacking genuine consciousness or shared intentionality, can be considered a true collaborator. This distinction has significant implications for pedagogy, instructional design, and the future research agenda for AI in education.
arXiv Detail & Related papers (2025-08-20T16:17:32Z) - Unraveling Human-AI Teaming: A Review and Outlook [2.3396455015352258]
Artificial Intelligence (AI) is advancing at an unprecedented pace, with clear potential to enhance decision-making and productivity. Yet the collaborative decision-making process between humans and AI remains underdeveloped, often falling short of its transformative possibilities. This paper explores the evolution of AI agents from passive tools to active collaborators in human-AI teams, emphasizing their ability to learn, adapt, and operate autonomously in complex environments.
arXiv Detail & Related papers (2025-04-08T07:37:25Z) - Actionable AI: Enabling Non Experts to Understand and Configure AI Systems [5.534140394498714]
Actionable AI allows non-experts to configure black-box agents.
In uncertain conditions, non-experts achieve good levels of performance.
We propose Actionable AI as a way to open access to AI-based agents.
arXiv Detail & Related papers (2025-03-09T23:09:04Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Artificial Theory of Mind and Self-Guided Social Organisation [1.8434042562191815]
One of the challenges artificial intelligence (AI) faces is how a collection of agents coordinate their behaviour to achieve goals that are not reachable by any single agent.
We make the case for collective intelligence in a general setting, drawing on recent work from single neuron complexity in neural networks.
We show how our social structures are influenced by our neuro-physiology, our psychology, and our language.
arXiv Detail & Related papers (2024-11-14T04:06:26Z) - Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model [0.0]
We advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans.
We suggest that a "third mind" emerges through collaborative human-AI relationships.
arXiv Detail & Related papers (2024-10-07T19:19:39Z) - Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task [11.040918613968854]
We argue that careful exploitation of the tradeoff between precision and recall can significantly improve team performance.
We analyze the performance of 78 professional annotators working with a) no AI assistance, b) a high-precision "restrained" AI, and c) a high-recall "zealous" AI in over 3,466 person-hours of annotation work.
arXiv Detail & Related papers (2024-10-06T23:19:19Z) - Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z) - Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z) - On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z) - Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems [10.746728034149989]
Research and application have used human-AI teaming (HAT) as a new paradigm to develop AI systems.
We propose and elaborate on a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT.
arXiv Detail & Related papers (2023-07-08T06:26:38Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making [25.913473823070863]
Existing research on human-AI collaborative decision-making focuses mainly on the interaction between AI and individual decision-makers.
This paper presents a wizard-of-oz study in which two participants and an AI form a committee to rank three English essays.
arXiv Detail & Related papers (2023-02-17T11:07:17Z) - Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z) - Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.