Opting Out of Generative AI: a Behavioral Experiment on the Role of Education in Perplexity AI Avoidance
- URL: http://arxiv.org/abs/2507.07881v1
- Date: Thu, 10 Jul 2025 16:05:11 GMT
- Title: Opting Out of Generative AI: a Behavioral Experiment on the Role of Education in Perplexity AI Avoidance
- Authors: Roberto Ulloa, Juhi Kulshrestha, Celina Kacperski
- Abstract summary: This study investigates whether differences in formal education are associated with CAI avoidance. Findings underscore education's central role in shaping AI adoption and the role of self-selection biases in AI-related research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of conversational AI (CAI), powered by large language models, is transforming how individuals access and interact with digital information. However, these tools may inadvertently amplify existing digital inequalities. This study investigates whether differences in formal education are associated with CAI avoidance, leveraging behavioral data from an online experiment (N = 1,636). Participants were randomly assigned to a control or an information-seeking task, either a traditional online search or a CAI (Perplexity AI). Task avoidance (operationalized as survey abandonment or providing unrelated responses during task assignment) was significantly higher in the CAI group (51%) compared to the search (30.9%) and control (16.8%) groups, with the highest CAI avoidance among participants with lower education levels (~74.4%). Structural equation modeling based on the theoretical framework UTAUT2 and LASSO regressions reveal that education is strongly associated with CAI avoidance, even after accounting for various cognitive and affective predictors of technology adoption. These findings underscore education's central role in shaping AI adoption and the role of self-selection biases in AI-related research, stressing the need for inclusive design to ensure equitable access to emerging technologies.
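The abstract's key statistical claim, that education predicts CAI avoidance even after accounting for other adoption predictors, can be illustrated with an L1-penalized (LASSO) logistic regression. The sketch below uses synthetic data; the variable names, effect sizes, and covariates are illustrative assumptions, not the authors' actual dataset or model specification.

```python
# Hypothetical sketch of a LASSO-style analysis: does education still predict
# task avoidance once other (UTAUT2-inspired) covariates are included?
# All data below are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1636  # sample size reported in the abstract

# Synthetic covariates: education level plus two illustrative predictors
education = rng.integers(1, 6, size=n)            # 1 = low, 5 = high (assumed scale)
effort_expectancy = rng.normal(0.0, 1.0, size=n)
performance_expectancy = rng.normal(0.0, 1.0, size=n)

# Simulate avoidance so that lower education implies higher avoidance odds
logit = 1.0 - 0.8 * education + 0.3 * rng.normal(0.0, 1.0, size=n)
avoided = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = StandardScaler().fit_transform(
    np.column_stack([education, effort_expectancy, performance_expectancy])
)

# penalty="l1" makes this the LASSO; C controls regularization strength
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, avoided)

names = ["education", "effort_expectancy", "performance_expectancy"]
for name, coef in zip(names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

Under this simulation, the education coefficient comes out negative (higher education, lower avoidance), mirroring the direction of the reported finding; the L1 penalty shrinks uninformative covariates toward zero.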
Related papers
- ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline [0.0]
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z) - Students' Reliance on AI in Higher Education: Identifying Contributing Factors [2.749898166276854]
This study investigates potential factors contributing to patterns of AI reliance among undergraduate students. Appropriate reliance is significantly related to students' programming self-efficacy, programming literacy, and need for cognition. Overreliance showed significant correlations with post-task trust and satisfaction with the AI assistant.
arXiv Detail & Related papers (2025-06-16T17:55:26Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Evaluating the AI-Lab Intervention: Impact on Student Perception and Use of Generative AI in Early Undergraduate Computer Science Courses [0.0]
Generative AI (GenAI) is rapidly entering computer science education. Concerns about overreliance coexist with a gap in research on structured scaffolding to guide tool use in formal courses. This study examines the impact of a dedicated "AI-Lab" intervention on undergraduate students.
arXiv Detail & Related papers (2025-04-30T18:12:42Z) - Synergizing Self-Regulation and Artificial-Intelligence Literacy Towards Future Human-AI Integrative Learning [92.34299949916134]
Self-regulated learning (SRL) and Artificial-Intelligence (AI) literacy are becoming key competencies for successful human-AI interactive learning. This study analyzed data from 1,704 Chinese undergraduates using clustering methods to uncover four learner groups.
arXiv Detail & Related papers (2025-03-31T13:41:21Z) - From G-Factor to A-Factor: Establishing a Psychometric Framework for AI Literacy [1.5031024722977635]
We establish AI literacy as a coherent, measurable construct with significant implications for education, workforce development, and social equity. Study 1 revealed a dominant latent factor - termed the "A-factor" - that accounts for 44.16% of variance across diverse AI interaction tasks. Study 2 refined the measurement tool by examining four key dimensions of AI literacy. Regression analyses identified several significant predictors of AI literacy, including cognitive abilities (IQ), educational background, prior AI experience, and training history.
arXiv Detail & Related papers (2025-03-16T14:51:48Z) - Analyzing the Impact of AI Tools on Student Study Habits and Academic Performance [0.0]
The research focuses on how AI tools can support personalized learning, adaptive test adjustments, and provide real-time classroom analysis. Student feedback revealed strong support for these features, and the study found a significant reduction in study hours alongside an increase in GPA. Despite these benefits, challenges such as over-reliance on AI and difficulties in integrating AI with traditional teaching methods were also identified.
arXiv Detail & Related papers (2024-12-03T04:51:57Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation [47.102566259034326]
We propose conditional delegation as an alternative paradigm for human-AI collaboration.
We develop novel interfaces to assist humans in creating conditional delegation rules.
Our study demonstrates the promise of conditional delegation in improving model performance.
arXiv Detail & Related papers (2022-04-25T17:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.