Barriers to AI Adoption: Image Concerns at Work
- URL: http://arxiv.org/abs/2511.18582v1
- Date: Sun, 23 Nov 2025 18:50:34 GMT
- Title: Barriers to AI Adoption: Image Concerns at Work
- Authors: David Almog
- Abstract summary: I find that workers adopt AI recommendations at lower rates when their reliance on AI is visible to the evaluator. I introduce a novel incentive-compatible elicitation method showing that workers fear heavy reliance on AI signals a lack of confidence in their own judgment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concerns about how workers are perceived can deter effective collaboration with artificial intelligence (AI). In a field experiment on a large online labor market, I hired 450 U.S.-based remote workers to complete an image-categorization job assisted by AI recommendations. Workers were incentivized by the prospect of a contract extension based on an HR evaluator's feedback. I find that workers adopt AI recommendations at lower rates when their reliance on AI is visible to the evaluator, resulting in a measurable decline in task performance. The effects are present despite a conservative design in which workers know that the evaluator is explicitly instructed to assess expected accuracy on the same AI-assisted task. This reduction in AI reliance persists even when the evaluator is reassured about workers' strong performance history on the platform, underscoring how difficult these concerns are to alleviate. Leveraging the platform's public feedback feature, I introduce a novel incentive-compatible elicitation method showing that workers fear that heavy reliance on AI signals a lack of confidence in their own judgment, a trait they view as essential when collaborating with AI.
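The paper's headline comparison is a gap in AI-adoption rates between workers whose reliance is hidden from the evaluator and workers whose reliance is visible. A minimal difference-in-means sketch of that comparison follows; all data, variable names, and effect sizes below are invented for illustration and are not the paper's results.

```python
# Hypothetical illustration of the paper's main comparison:
# adoption of AI recommendations when reliance is hidden vs. visible.
# All numbers here are made up for illustration only.

def adoption_rate(decisions):
    """Fraction of trials where the worker followed the AI recommendation."""
    return sum(decisions) / len(decisions)

# 1 = adopted the AI recommendation, 0 = overrode it (simulated data)
hidden  = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # reliance not visible to evaluator
visible = [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]   # reliance visible to evaluator

# The treatment effect is the drop in adoption under visibility
effect = adoption_rate(hidden) - adoption_rate(visible)
print(f"hidden: {adoption_rate(hidden):.2f}, "
      f"visible: {adoption_rate(visible):.2f}, gap: {effect:.2f}")
```

In the real experiment this comparison is made across randomized conditions with incentives attached; the sketch only shows the shape of the statistic, not the design.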
Related papers
- AI Skills Improve Job Prospects: Causal Evidence from a Hiring Experiment [0.15293427903448023]
This study examines whether AI skills serve as a positive hiring signal and whether they can offset conventional disadvantages such as older age or lower formal education. We conduct an experimental survey with 1,700 recruiters from the United Kingdom and the United States. Across three occupations, AI skills significantly increase interview invitation probabilities by approximately 8 to 15 percentage points.
arXiv Detail & Related papers (2026-01-19T18:37:28Z) - AI Failure Loops in Devalued Work: The Confluence of Overconfidence in AI and Underconfidence in Worker Expertise [22.485163311963493]
We examine the case of feminized labor: a class of devalued occupations historically misnamed as "women's work." We show how misjudgments about the automatability of workers' skills can lead to AI deployments that fail to bring value to workers.
arXiv Detail & Related papers (2025-11-07T01:51:57Z) - Experimental Evidence That AI-Managed Workers Tolerate Lower Pay Without Demotivation [7.306174397662034]
Experimental evidence on worker responses to AI management remains mixed, partly due to limitations in experimental fidelity. We address these limitations with a customized workplace in the Minecraft platform, enabling high-resolution behavioral tracking of autonomous task execution. Workers completed repeated production tasks under either human, AI, or hybrid management. An AI manager trained on human-defined evaluation principles systematically assigned lower performance ratings and reduced wages by 40%, without adverse effects on worker motivation or sense of fairness.
arXiv Detail & Related papers (2025-05-27T20:40:18Z) - General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems. We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure. Our fully automated methodology builds on 18 newly crafted rubrics that place instance demands on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z) - Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes [0.0]
The study examines how AI shapes employee perceptions, job satisfaction, mental health, and retention. Transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes. The research introduces an AI-Employee Well-being Interaction Framework.
arXiv Detail & Related papers (2024-12-06T06:07:44Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results reveal complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Towards the Terminator Economy: Assessing Job Exposure to AI through LLMs [10.844598404826355]
One-third of U.S. employment is highly exposed to AI, primarily in high-skill jobs requiring a graduate or postgraduate level of education. Even in high-skill occupations, AI exhibits high variability in task substitution, suggesting that AI and humans complement each other within the same occupation. All results, models, and code are freely available online to allow the community to reproduce our results, compare outcomes, and use our work as a benchmark to monitor AI's progress over time.
arXiv Detail & Related papers (2024-07-27T08:14:18Z) - AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform [0.13124513975412255]
We investigate how AI influences freelancers across different online labor markets (OLMs).
To shed light on the underlying mechanisms, we developed a Cournot-type competition model.
We find that U.S. web developers tend to benefit more from the release of ChatGPT compared to their counterparts in other regions.
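The "Cournot-type competition model" the authors mention can be written in its generic textbook form; the sketch below is a standard asymmetric-cost Cournot duopoly, not the paper's exact specification, and all symbols ($a$, $b$, $c_i$, $q_i$) are generic placeholders.

```latex
% Generic Cournot duopoly sketch (not the paper's exact model).
% Two sellers choose quantities q_1, q_2; inverse demand P(Q) = a - bQ,
% with constant marginal costs c_1, c_2.
\begin{align*}
  \pi_i(q_i, q_{-i}) &= \bigl(a - b(q_i + q_{-i})\bigr)\,q_i - c_i q_i \\
  \frac{\partial \pi_i}{\partial q_i} &= a - 2b\,q_i - b\,q_{-i} - c_i = 0 \\
  q_i^{*} &= \frac{a - 2c_i + c_{-i}}{3b}
\end{align*}
```

Solving the two first-order conditions simultaneously yields the equilibrium quantities in the last line; a shift in one seller's marginal cost (e.g., from AI-driven productivity gains) then moves both equilibrium quantities, which is the kind of comparative static such a model supports.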
arXiv Detail & Related papers (2023-12-07T10:06:34Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.