Safety First: Psychological Safety as the Key to AI Transformation
- URL: http://arxiv.org/abs/2602.23279v1
- Date: Thu, 26 Feb 2026 17:53:37 GMT
- Title: Safety First: Psychological Safety as the Key to AI Transformation
- Authors: Aaron Reich, Diana Wolfe, Matt Price, Alice Choe, Fergus Kidd, Hannah Wagner,
- Abstract summary: This study examines whether psychological safety is associated with AI adoption and usage in the workplace. Logistic and linear regression analyses show that psychological safety reliably predicts whether employees adopt AI tools. The study underscores the need to distinguish between adoption and sustained use.
- Score: 0.36944296923226316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations continue to invest in artificial intelligence, yet many struggle to ensure that employees adopt and engage with these tools. Drawing on research highlighting the interpersonal and learning demands of technology use, this study examines whether psychological safety is associated with AI adoption and usage in the workplace. Using survey data from 2,257 employees in a global consulting firm, we test whether psychological safety is associated with adoption, usage frequency, and usage duration; and whether these relationships vary by organizational level, professional experience, or geographic region. Logistic and linear regression analyses show that psychological safety reliably predicts whether employees adopt AI tools but does not predict how often or how long they use AI once adoption has occurred. Moreover, the relationship between psychological safety and AI adoption is consistent across experience levels, role levels, and regions, and no moderation effects emerge. These findings suggest that psychological safety functions as a key antecedent of initial AI engagement but not of subsequent usage intensity. The study underscores the need to distinguish between adoption and sustained use and highlights opportunities for targeted organizational interventions in early-stage AI implementation.
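The analytical approach described in the abstract (logistic regression for adoption, linear regression for usage intensity among adopters, and moderation checks across role level, experience, and region) can be illustrated with a minimal, hypothetical sketch in Python using statsmodels. The data file and column names below (survey.csv, adopted, psych_safety, usage_freq, role_level, experience, region) are illustrative placeholders, not the authors' actual data or code.

```python
# Hypothetical sketch of the analysis described in the abstract.
# The data file and all column names are assumed placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed survey data: one row per employee.
df = pd.read_csv("survey.csv")

# Adoption (binary outcome): does psychological safety predict whether
# employees adopt AI tools at all?
adoption_model = smf.logit("adopted ~ psych_safety", data=df).fit()
print(adoption_model.summary())

# Moderation check: interactions of psychological safety with role level,
# experience, and region (the abstract reports no moderation effects).
moderation_model = smf.logit(
    "adopted ~ psych_safety * (C(role_level) + C(experience) + C(region))",
    data=df,
).fit()
print(moderation_model.summary())

# Usage intensity among adopters (continuous outcome): the abstract reports
# that psychological safety does not predict frequency or duration of use.
usage_model = smf.ols("usage_freq ~ psych_safety", data=df[df["adopted"] == 1]).fit()
print(usage_model.summary())
```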
Related papers
- Work Design and Multidimensional AI Threat as Predictors of Workplace AI Adoption and Depth of Use [0.36944296923226316]
This research examines whether motivational job characteristics and multidimensional AI threat perceptions jointly predict workplace AI adoption and depth of use. Using cross-sectional survey data from 2,257 employees, we tested group differences across role level, years of experience, and region, along with multivariable predictors of AI adoption and use depth.
arXiv Detail & Related papers (2026-02-26T17:52:29Z)
- Understanding Risk and Dependency in AI Chatbot Use from User Discourse [4.1957094635667875]
We present a large-scale computational thematic analysis of posts collected between 2023 and 2025 from two communities, r/AIDangers and r/ChatbotAddiction. We identify 14 recurring thematic categories and synthesize them into five higher-order experiential dimensions. Our findings reveal five empirically derived experiential dimensions of AI-related psychological risk grounded in real-world user discourse.
arXiv Detail & Related papers (2026-02-10T02:16:57Z)
- Revisiting UTAUT for the Age of AI: Understanding Employees AI Adoption and Usage Patterns Through an Extended UTAUT Framework [0.0]
This study investigates whether demographic factors shape employees' adoption of, and attitudes toward, artificial intelligence (AI) technologies at work. We surveyed 2,257 professionals across global regions and organizational levels within a multinational consulting firm.
arXiv Detail & Related papers (2025-10-16T21:01:41Z)
- Is It Safe To Learn And Share? On Psychological Safety and Social Learning in (Agile) Communities of Practice [4.662447283600293]
Communities of Practice (CoPs) are increasingly adopted to support social learning beyond formal education. Psychological safety, a prerequisite for effective learning, remains insufficiently understood in these settings. This study investigates psychological safety within Agile CoPs through survey data from 143 participants.
arXiv Detail & Related papers (2025-06-30T19:06:43Z)
- Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z)
- AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z)
- Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- A risk model and analysis method for the psychological safety of human and autonomous vehicles interaction [4.627842277006583]
The paper introduces a definition of psychological safety in the context of autonomous vehicles (AVs). It proposes a risk model for identifying and assessing AV psychological hazards and risks. The paper illustrates the application of the framework for assessing potential psychological hazards using a scenario involving a family's experience with an autonomous vehicle.
arXiv Detail & Related papers (2024-11-08T17:44:40Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)