Efficiency Without Cognitive Change: Evidence from Human Interaction with Narrow AI Systems
- URL: http://arxiv.org/abs/2510.24893v1
- Date: Tue, 28 Oct 2025 18:55:44 GMT
- Title: Efficiency Without Cognitive Change: Evidence from Human Interaction with Narrow AI Systems
- Authors: María Angélica Benítez, Rocío Candela Ceballos, Karina Del Valle Molina, Sofía Mundo Araujo, Sofía Evangelina Victorio Villaroel, Nadia Justel
- Abstract summary: This study tested whether short-term exposure to narrow AI tools enhances core cognitive abilities. No significant pre-post differences emerged in standardized measures of problem solving or verbal comprehension.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing integration of artificial intelligence (AI) into human cognition raises a fundamental question: does AI merely improve efficiency, or does it alter how we think? This study experimentally tested whether short-term exposure to narrow AI tools enhances core cognitive abilities or simply optimizes task performance. Thirty young adults completed standardized neuropsychological assessments embedded in a seven-week protocol with a four-week online intervention involving problem-solving and verbal comprehension tasks, either with or without AI support (ChatGPT). While AI-assisted participants completed several tasks faster and more accurately, no significant pre-post differences emerged in standardized measures of problem solving or verbal comprehension. These results demonstrate efficiency gains without cognitive change, suggesting that current narrow AI systems serve as cognitive scaffolds extending performance without transforming underlying mental capacities. The findings highlight the need for ethical and educational frameworks that promote critical and autonomous thinking in an increasingly AI-augmented cognitive ecology.
Related papers
- Bridging Minds and Machines: Toward an Integration of AI and Cognitive Science [48.38628297686686]
Cognitive Science has profoundly shaped disciplines such as Artificial Intelligence (AI), Philosophy, Psychology, Neuroscience, Linguistics, and Culture. Many breakthroughs in AI trace their roots to cognitive theories, while AI itself has become an indispensable tool for advancing cognitive research. We argue that the future of AI within Cognitive Science lies not only in improving performance but also in constructing systems that deepen our understanding of the human mind.
arXiv Detail & Related papers (2025-08-28T11:26:17Z) - Do Students Rely on AI? Analysis of Student-ChatGPT Conversations from a Field Study [10.71612026319996]
This study analyzed 315 student-AI conversations during a brief, quiz-based scenario across various STEM courses. Students exhibited overall low reliance on AI, and many of them could not effectively use AI for learning. Certain behavioral metrics strongly predicted AI reliance, highlighting potential behavioral mechanisms to explain AI adoption.
arXiv Detail & Related papers (2025-08-27T20:00:27Z) - AI Literacy as a Key Driver of User Experience in AI-Powered Assessment: Insights from Socratic Mind [2.0272430076690027]
This study examines how students' AI literacy and prior exposure to AI technologies shape their perceptions of Socratic Mind. Data from 309 undergraduates in Computer Science and Business courses were collected.
arXiv Detail & Related papers (2025-07-29T10:11:24Z) - ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline [0.0]
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Advancing Perception in Artificial Intelligence through Principles of Cognitive Science [6.637438611344584]
We focus on the cognitive function of perception: the process of taking signals from one's surroundings as input and processing them to understand the environment.
We present a collection of methods in AI for researchers to build AI systems inspired by cognitive science.
arXiv Detail & Related papers (2023-10-13T01:21:55Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows reinforcement learning performance can successfully adjust in sync with the human desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z) - To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI. Adding explanations to the AI's decisions does not appear to reduce this overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z)