AI Failure Loops in Devalued Work: The Confluence of Overconfidence in AI and Underconfidence in Worker Expertise
- URL: http://arxiv.org/abs/2511.04922v1
- Date: Fri, 07 Nov 2025 01:51:57 GMT
- Title: AI Failure Loops in Devalued Work: The Confluence of Overconfidence in AI and Underconfidence in Worker Expertise
- Authors: Anna Kawakami, Jordan Taylor, Sarah Fox, Haiyi Zhu, Kenneth Holstein
- Abstract summary: We examine the case of feminized labor: a class of devalued occupations historically misnomered as "women's work." We show how misjudgments on the automatability of workers' skills can lead to AI deployments that fail to bring value to workers.
- Score: 22.485163311963493
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A growing body of literature has focused on understanding and addressing workplace AI design failures. However, past work has largely overlooked the role of the devaluation of worker expertise in shaping the dynamics of AI development and deployment. In this paper, we examine the case of feminized labor: a class of devalued occupations historically misnomered as "women's work," such as social work, K-12 teaching, and home healthcare. Drawing on literature on AI deployments in feminized labor contexts, we conceptualize AI Failure Loops: a set of interwoven, socio-technical failure modes that help explain how the systemic devaluation of workers' expertise negatively impacts, and is impacted by, AI design, evaluation, and governance practices. These failures demonstrate how misjudgments on the automatability of workers' skills can lead to AI deployments that fail to bring value to workers and, instead, further diminish the visibility of workers' expertise. We discuss research and design implications for workplace AI, especially for devalued occupations.
Related papers
- Barriers to AI Adoption: Image Concerns at Work [0.0]
I find that workers adopt AI recommendations at lower rates when their reliance on AI is visible to the evaluator. I introduce a novel incentive-compatible elicitation method showing that workers fear heavy reliance on AI signals a lack of confidence in their own judgment.
arXiv Detail & Related papers (2025-11-23T18:50:34Z)
- Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce [45.348336032930845]
We introduce a novel framework to assess which occupational tasks workers want AI agents to automate or augment. Our framework features an audio-enhanced mini-interview to capture nuanced worker desires. We construct the WORKBank database to capture preferences from 1,500 domain workers and capability assessments from AI experts.
arXiv Detail & Related papers (2025-06-06T23:05:52Z)
- Assessing employment and labour issues implicated by using AI [0.0]
The chapter critiques the dominant reductionist approach in AI and work studies. It advocates for a systemic perspective that emphasizes the interdependence of tasks, roles, and workplace contexts.
arXiv Detail & Related papers (2025-04-08T10:14:19Z)
- De-skilling, Cognitive Offloading, and Misplaced Responsibilities: Potential Ironies of AI-Assisted Design [3.6284577335311563]
We analyzed over 120 articles and discussions from UX-focused subreddits. Our findings indicate that practitioners express optimism about AI reducing repetitive work and augmenting creativity. We argue that UX professionals should critically evaluate AI's role beyond immediate productivity gains.
arXiv Detail & Related papers (2025-03-05T21:47:16Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Towards the Terminator Economy: Assessing Job Exposure to AI through LLMs [10.844598404826355]
One-third of U.S. employment is highly exposed to AI, primarily in high-skill jobs requiring a graduate or postgraduate level of education. Even in high-skill occupations, AI exhibits high variability in task substitution, suggesting that AI and humans complement each other within the same occupation. All results, models, and code are freely available online to allow the community to reproduce our results, compare outcomes, and use our work as a benchmark to monitor AI's progress over time.
arXiv Detail & Related papers (2024-07-27T08:14:18Z)
- The Impact of AI on Perceived Job Decency and Meaningfulness: A Case Study [3.9134031118910264]
This paper explores the impact of AI on job decency and meaningfulness in workplaces.
Findings reveal that respondents visualize a workplace where humans continue to play a dominant role, even with the introduction of advanced AIs. Respondents believe that the introduction of AI will maintain or potentially increase overall job satisfaction.
arXiv Detail & Related papers (2024-06-20T12:52:57Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The Potential Impact of AI Innovations on U.S. Occupations [3.0829845709781725]
We employ deep learning natural language processing to identify AI patents that may impact various occupational tasks at scale. Our methodology relies on a comprehensive dataset of 17,879 task descriptions and quantifies AI's potential impact. Our results reveal that some occupations will potentially be impacted, and that the impact is intricately linked to specific skills.
arXiv Detail & Related papers (2023-12-07T21:44:07Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI), aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time. We discuss how biased models can lead to more negative real-world outcomes for certain groups. If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.