Enduring Disparities in the Workplace: A Pilot Study in the AI Community
- URL: http://arxiv.org/abs/2506.04305v2
- Date: Fri, 06 Jun 2025 11:01:19 GMT
- Title: Enduring Disparities in the Workplace: A Pilot Study in the AI Community
- Authors: Yunusa Simpa Abdulsalam, Siobhan Mackenzie Hall, Ana Quintero-Ossa, William Agnew, Carla Muntean, Sarah Tan, Ashley Heady, Savannah Thais, Jessica Schrouff
- Abstract summary: We conducted a pilot survey of 1260 AI/ML professionals both in industry and academia across different axes. Results indicate enduring disparities in workplace experiences for underrepresented and/or marginalized subgroups. We highlight that accessibility remains an important challenge for a positive work environment and that disabled employees have a worse workplace experience than their non-disabled colleagues.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In efforts toward achieving responsible artificial intelligence (AI), fostering a culture of workplace transparency, diversity, and inclusion can breed innovation, trust, and employee contentment. In AI and Machine Learning (ML), such environments correlate with higher standards of responsible development. Without transparency, disparities, microaggressions and misconduct will remain unaddressed, undermining the very structural inequities responsible AI aims to mitigate. While prior work investigates workplace transparency and disparities in broad domains (e.g. science and technology, law) for specific demographic subgroups, it lacks in-depth and intersectional conclusions and a focus on the AI/ML community. To address this, we conducted a pilot survey of 1260 AI/ML professionals both in industry and academia across different axes, probing aspects such as belonging, workplace Diversity, Equity and Inclusion (DEI) initiatives, accessibility, performance and compensation, microaggressions, misconduct, growth, and well-being. Results indicate enduring disparities in workplace experiences for underrepresented and/or marginalized subgroups. In particular, we highlight that accessibility remains an important challenge for a positive work environment and that disabled employees have a worse workplace experience than their non-disabled colleagues. We further surface disparities for intersectional groups and discuss how the implementation of DEI initiatives may differ from their perceived impact on the workplace. This study is a first step towards increasing transparency and informing AI/ML practitioners and organizations with empirical results. We aim to foster equitable decision-making in the design and evaluation of organizational policies and provide data that may empower professionals to make more informed choices of prospective workplaces.
Related papers
- Diversity and Inclusion in AI: Insights from a Survey of AI/ML Practitioners [4.761639988815896]
Growing awareness of social biases and inequalities embedded in Artificial Intelligence (AI) systems has brought increased attention to the integration of Diversity and Inclusion (D&I) principles throughout the AI lifecycle. Despite the rise of ethical AI guidelines, there is limited empirical evidence on how D&I is applied in real-world settings. This study explores how AI and Machine Learning (ML) practitioners perceive and implement D&I principles and identifies organisational challenges that hinder their effective adoption.
arXiv Detail & Related papers (2025-05-24T05:40:23Z)
- Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment. We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z)
- An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z)
- Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes [0.0]
The study examines how AI shapes employee perceptions, job satisfaction, mental health, and retention. Transparency in AI systems emerges as a critical factor in fostering trust and positive employee attitudes. The research introduces an AI-employee well-being Interaction Framework.
arXiv Detail & Related papers (2024-12-06T06:07:44Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- The Impossibility of Fair LLMs [17.812295963158714]
We analyze a variety of technical fairness frameworks and find inherent challenges in each that make the development of a fair language model intractable. We show that each framework either does not extend to the general-purpose AI context or is infeasible in practice. These inherent challenges would persist for general-purpose AI, including LLMs, even if empirical challenges, such as limited participatory input and limited measurement methods, were overcome.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support [18.148737010217953]
We conduct interviews and workshops with AI practitioners to identify practitioners' processes, challenges, and needs for support.
We find that practitioners face challenges when choosing performance metrics and when identifying the most relevant direct stakeholders and demographic groups.
We identify impacts on fairness work stemming from a lack of engagement with direct stakeholders, business imperatives that prioritize customers over marginalized groups, and the drive to deploy AI systems at scale.
arXiv Detail & Related papers (2021-12-10T17:14:34Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.