On the Influence of Artificial Intelligence on Human Problem-Solving: Empirical Insights for the Third Wave in a Multinational Longitudinal Pilot Study
- URL: http://arxiv.org/abs/2511.11738v1
- Date: Thu, 13 Nov 2025 10:20:07 GMT
- Title: On the Influence of Artificial Intelligence on Human Problem-Solving: Empirical Insights for the Third Wave in a Multinational Longitudinal Pilot Study
- Authors: Matthias Huemmer, Theophile Shyiramunda, Franziska Durner, Michelle J. Cummings-Koether
- Abstract summary: This article investigates the evolving paradigm of human-AI collaboration in problem-solving contexts. Building upon previous waves, our findings reveal the consolidation of a hybrid problem-solving culture. The study concludes that educational and technological interventions must prioritize verification scaffolds.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article presents the results and their discussion for the third wave (with n=23 participants) within a multinational longitudinal study that investigates the evolving paradigm of human-AI collaboration in problem-solving contexts. Building upon previous waves, our findings reveal the consolidation of a hybrid problem-solving culture characterized by strategic integration of AI tools within structured cognitive workflows. The data demonstrate near-universal AI adoption (95.7% with prior knowledge, 100% ChatGPT usage) primarily deployed through human-led sequences such as "Think, Internet, ChatGPT, Further Processing" (39.1%). However, this collaboration reveals a critical verification deficit that escalates with problem complexity. We empirically identify and quantify two systematic epistemic gaps: a belief-performance gap (up to +80.8 percentage points discrepancy between perceived and actual correctness) and a proof-belief gap (up to -16.8 percentage points between confidence and verification capability). These findings, derived from behavioral data and problem vignettes across complexity levels, indicate that the fundamental constraint on reliable AI-assisted work is solution validation rather than generation. The study concludes that educational and technological interventions must prioritize verification scaffolds (including assumption documentation protocols, adequacy criteria checklists, and triangulation procedures) to fortify the human role as critical validator in this new cognitive ecosystem.
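The two epistemic gaps reported in the abstract are percentage-point differences between the share of solutions meeting one criterion (e.g. believed correct) and the share meeting another (e.g. actually correct, or verifiable). A minimal sketch of how such a metric could be computed follows; the function name and the example tallies are illustrative assumptions, not figures from the study:

```python
def gap_pp(believed_correct, compared_correct):
    """Percentage-point gap between the share of solutions a group
    believes correct and the share meeting a second criterion
    (actual correctness, or ability to verify)."""
    assert len(believed_correct) == len(compared_correct) > 0
    n = len(believed_correct)
    believed = 100.0 * sum(believed_correct) / n
    compared = 100.0 * sum(compared_correct) / n
    return believed - compared

# Hypothetical high-complexity vignette with n=23 participants:
# 21 rate the AI-assisted solution correct, but only 3 solutions are.
belief_performance = gap_pp([1] * 21 + [0] * 2, [1] * 3 + [0] * 20)
```

A positive value is a belief-performance gap (overconfidence); a negative value, with verification ability as the second argument, corresponds to the proof-belief gap.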
Related papers
- AI, Metacognition, and the Verification Bottleneck: A Three-Wave Longitudinal Study of Human Problem-Solving [0.0]
This pilot study tracked how generative AI reshapes problem-solving over six months in an academic setting. Results generalize primarily to early-adopter, academically affiliated populations.
arXiv Detail & Related papers (2026-01-21T15:49:04Z) - Epistemology gives a Future to Complementarity in Human-AI Interactions [42.371764229953165]
Complementarity is the claim that a human supported by an AI system can outperform either alone in a decision-making process. We argue that historical instances of complementarity function as evidence that a given human-AI interaction is a reliable process.
arXiv Detail & Related papers (2026-01-14T21:04:28Z) - AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z) - Embracing Trustworthy Brain-Agent Collaboration as Paradigm Extension for Intelligent Assistive Technologies [51.93721053301417]
This paper argues that the field is poised for a paradigm extension from Brain-Computer Interfaces to Brain-Agent Collaboration. We emphasize reframing agents as active and collaborative partners for intelligent assistance rather than passive brain signal data processors.
arXiv Detail & Related papers (2025-10-25T00:25:45Z) - QUINTA: Reflexive Sensibility For Responsible AI Research and Data-Driven Processes [2.504366738288215]
This paper presents a comprehensive framework grounded in critical reflexivity as intersectional praxis. The framework centers researcher reflexivity to call attention to the AI researchers' power in creating and analyzing AI/DS artifacts through data-centric approaches.
arXiv Detail & Related papers (2025-09-19T18:40:30Z) - Interaction as Intelligence: Deep Research With Human-AI Partnership [25.28272178646003]
The "Interaction as Intelligence" research series presents a reconceptualization of human-AI relationships in deep research tasks. We introduce Deep Cognition, a system that transforms the human role from giving instructions to cognitive oversight.
arXiv Detail & Related papers (2025-07-21T16:15:18Z) - Opting Out of Generative AI: a Behavioral Experiment on the Role of Education in Perplexity AI Avoidance [0.0]
This study investigates whether differences in formal education are associated with CAI avoidance. Findings underscore education's central role in shaping AI adoption and the role of self-selection biases in AI-related research.
arXiv Detail & Related papers (2025-07-10T16:05:11Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities, and conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Data Fusion for Partial Identification of Causal Effects [62.56890808004615]
We propose a novel partial identification framework that enables researchers to answer two key questions: Is the causal effect positive or negative? How severe must assumption violations be to overturn this conclusion? We apply our framework to the Project STAR study, which investigates the effect of classroom size on students' third-grade standardized test performance.
arXiv Detail & Related papers (2025-05-30T07:13:01Z) - Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction [69.38041171537573]
Water quality is foundational to environmental sustainability, ecosystem resilience, and public health. Deep learning offers transformative potential for large-scale water quality prediction and scientific insight generation, but its widespread adoption in high-stakes operational decision-making, such as pollution mitigation and equitable resource allocation, is prevented by unresolved trustworthiness challenges.
arXiv Detail & Related papers (2025-03-13T01:50:50Z) - Algorithmic Identification of Essential Exogenous Nodes for Causal Sufficiency in Brain Networks [1.9874264019909988]
In the investigation of any causal mechanism, such as the brain's causal networks, the assumption of causal sufficiency plays a critical role. We propose an algorithmic approach for identifying the essential exogenous nodes required to satisfy causal sufficiency in such inquiries.
arXiv Detail & Related papers (2024-03-08T16:05:47Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site. This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.