Large Language Model Use Impact Locus of Control
- URL: http://arxiv.org/abs/2505.11406v1
- Date: Fri, 16 May 2025 16:16:32 GMT
- Title: Large Language Model Use Impact Locus of Control
- Authors: Jenny Xiyu Fu, Brennan Antone, Kowe Kadoma, Malte Jung
- Abstract summary: This paper explores the psychological impact of co-writing with AI on people's locus of control. We found that employment status plays a critical role in shaping users' reliance on AI and their locus of control.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI tools increasingly shape how we write, they may also quietly reshape how we perceive ourselves. This paper explores the psychological impact of co-writing with AI on people's locus of control. Through an empirical study with 462 participants, we found that employment status plays a critical role in shaping users' reliance on AI and their locus of control. Results showed that employed participants relied more heavily on AI and shifted toward internal control, while unemployed users tended to experience a reduction in personal agency. Through quantitative results and qualitative observations, this study opens a broader conversation about AI's role in shaping personal agency and identity.
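The abstract reports quantitative results but does not specify the analysis. As a minimal sketch of one plausible approach, the snippet below fits a moderation model in which employment status moderates the relationship between AI reliance and locus-of-control change; the file name and column names (loc_change, ai_reliance, employed) are hypothetical, not the paper's actual variables.

```python
# A minimal, hypothetical moderation analysis: does employment status change
# the relationship between reliance on AI and locus-of-control shift?
import pandas as pd
import statsmodels.formula.api as smf

# Assumed per-participant data (n = 462 in the study); file and column
# names are invented for illustration:
#   loc_change  - post-task minus pre-task locus-of-control score
#   ai_reliance - measured reliance on AI suggestions (e.g., acceptance rate)
#   employed    - 1 if employed, 0 if unemployed
df = pd.read_csv("cowriting_study.csv")

# The ai_reliance:employed interaction tests whether employment status
# moderates the effect of reliance on locus-of-control change.
model = smf.ols("loc_change ~ ai_reliance * employed", data=df).fit()
print(model.summary())
```

A reliable interaction term in such a model would be consistent with the abstract's finding that employed and unemployed users shift in opposite directions.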
Related papers
- Classifying Epistemic Relationships in Human-AI Interaction: An Exploratory Approach [0.6906005491572401]
This study examines how users form relationships with AI: how they assess, trust, and collaborate with it in research and teaching contexts. Based on 31 interviews with academics across disciplines, we developed a five-part codebook and identified five relationship types.
arXiv Detail & Related papers (2025-08-02T23:41:28Z)
- The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting area chairs (ACs) in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z)
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agent involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- AI-Tutoring in Software Engineering Education [0.7631288333466648]
We conducted an exploratory case study by integrating the GPT-3.5-Turbo model as an AI-Tutor within the APAS Artemis.
The findings highlight advantages, such as timely feedback and scalability.
However, challenges like generic responses and students' concerns that the AI-Tutor might inhibit their learning progress were also evident (a minimal integration sketch follows this entry).
arXiv Detail & Related papers (2024-04-03T08:15:08Z)
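The abstract does not detail how GPT-3.5-Turbo was wired into Artemis, so the following is a minimal, hypothetical sketch of a tutoring call using the OpenAI Python SDK; the tutor_reply function and its prompts are invented for illustration.

```python
# Hypothetical sketch of a GPT-3.5-Turbo tutoring call using the OpenAI
# Python SDK; the paper's actual Artemis integration is not described in
# the abstract, and tutor_reply and its prompts are invented here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_reply(exercise: str, student_question: str) -> str:
    """Return a pedagogical hint for a student question about an exercise."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a software engineering tutor. Give hints and "
                    "guiding questions; never write complete solutions."
                ),
            },
            {
                "role": "user",
                "content": f"Exercise:\n{exercise}\n\nQuestion:\n{student_question}",
            },
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Implement a thread-safe counter in Java.",
                  "Why does my counter lose increments under load?"))
```

Constraining the system prompt to hints rather than solutions is one plausible way to address the learning-progress concern the study reports, though it does not by itself prevent generic responses.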
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants whose functional roles adapt to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network [0.0]
The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses.
Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers.
The research also explores whether a Chirper's handle or personality type influences its performance.
arXiv Detail & Related papers (2023-08-30T15:40:18Z)
- The Future of AI-Assisted Writing [0.0]
We conduct a comparative user study of AI-assisted writing tools through an information retrieval lens: pull and push.
Our findings show that users welcome seamless assistance of AI in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The AI Ghostwriter Effect: When Users Do Not Perceive Ownership of AI-Generated Text But Self-Declare as Authors [42.72188284211033]
We investigate authorship and ownership in human-AI collaboration for personalized language generation.
We show an AI Ghostwriter Effect: Users do not consider themselves the owners and authors of AI-generated text.
We discuss how our findings relate to psychological ownership and human-AI interaction to lay the foundations for adapting authorship frameworks.
arXiv Detail & Related papers (2023-03-06T16:53:12Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems (a sketch of this measurement follows the entry).
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
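The DKE finding above rests on comparing self-assessed with actual performance. Below is a small, hypothetical sketch of how such measures might be computed; all file and column names are invented for illustration.

```python
# Hypothetical sketch of the DKE-style measures the abstract implies;
# the file and column names are invented for illustration.
import pandas as pd

df = pd.read_csv("dke_study.csv")  # assumed one row per participant

# Positive values indicate overestimation of one's own performance.
df["overestimation"] = df["self_estimated_score"] - df["actual_score"]

# One possible under-reliance measure: the fraction of trials in which the
# participant kept a wrong answer even though the AI's advice was correct.
df["under_reliance"] = (
    df["kept_wrong_answer_despite_correct_ai"] / df["ai_correct_trials"]
)

overestimators = df[df["overestimation"] > 0]
others = df[df["overestimation"] <= 0]
print("Mean under-reliance, overestimators:", overestimators["under_reliance"].mean())
print("Mean under-reliance, others:", others["under_reliance"].mean())
```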
- Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and instead increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.