Investigating Practices and Opportunities for Cross-functional
Collaboration around AI Fairness in Industry Practice
- URL: http://arxiv.org/abs/2306.06542v1
- Date: Sat, 10 Jun 2023 23:42:26 GMT
- Title: Investigating Practices and Opportunities for Cross-functional
Collaboration around AI Fairness in Industry Practice
- Authors: Wesley Hanwen Deng, Nur Yildirim, Monica Chang, Motahhare Eslami, Ken
Holstein, Michael Madaio
- Abstract summary: An emerging body of research indicates that ineffective cross-functional collaboration represents a major barrier to addressing issues of fairness in AI design and development.
We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles from 17 companies.
We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An emerging body of research indicates that ineffective cross-functional
collaboration -- the interdisciplinary work done by industry practitioners
across roles -- represents a major barrier to addressing issues of fairness in
AI design and development. In this research, we sought to better understand
practitioners' current practices and tactics to enact cross-functional
collaboration for AI fairness, in order to identify opportunities to support
more effective collaboration. We conducted a series of interviews and design
workshops with 23 industry practitioners spanning various roles from 17
companies. We found that practitioners engaged in bridging work to overcome
frictions in understanding, contextualization, and evaluation around AI
fairness across roles. In addition, in organizational contexts with a lack of
resources and incentives for fairness work, practitioners often piggybacked on
existing requirements (e.g., for privacy assessments) and AI development norms
(e.g., the use of quantitative evaluation metrics), although they worry that
these tactics may be fundamentally compromised. Finally, we draw attention to
the invisible labor that practitioners take on as part of this bridging and
piggybacking work to enact interdisciplinary collaboration for fairness. We
close by discussing opportunities for both FAccT researchers and AI
practitioners to better support cross-functional collaboration for fairness in
the design and development of AI systems.
Related papers
- "It Might be Technically Impressive, But It's Practically Useless to Us": Practices, Challenges, and Opportunities for Cross-Functional Collaboration around AI within the News Industry [7.568817736131254]
An increasing number of news organizations have integrated artificial intelligence (AI) into their operations, initiating cross-functional collaboration between AI professionals and journalists.
This study investigates the current practices, challenges, and opportunities for cross-functional collaboration around AI in today's news industry.
arXiv Detail & Related papers (2024-09-18T14:12:01Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML [5.433040083728602]
The need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science.
We first contrast notions of compliance in the ethical, legal, and technical fields.
We then focus on the role of values in articulating the synergies between the fields.
arXiv Detail & Related papers (2023-05-09T15:35:31Z)
- The Road to a Successful HRI: AI, Trust and ethicS-TRAITS [64.77385130665128]
The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots.
We particularly focus on AI techniques required to implement autonomous and proactive interactions.
arXiv Detail & Related papers (2022-06-07T11:12:45Z)
- An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)
- Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support [18.148737010217953]
We conduct interviews and workshops with AI practitioners to identify practitioners' processes, challenges, and needs for support.
We find that practitioners face challenges in choosing performance metrics and in identifying the most relevant direct stakeholders and demographic groups.
We identify impacts on fairness work stemming from a lack of engagement with direct stakeholders, business imperatives that prioritize customers over marginalized groups, and the drive to deploy AI systems at scale.
arXiv Detail & Related papers (2021-12-10T17:14:34Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Progressing Towards Responsible AI [2.191505742658975]
The Observatory on Society and Artificial Intelligence (OSAI) grew out of the AI4EU project.
OSAI aims to stimulate reflection on a broad spectrum of AI issues (ethical, legal, social, economic, and cultural).
arXiv Detail & Related papers (2020-08-11T09:46:00Z)
- Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for shifting Organizational Practices [3.119859292303396]
This paper examines and seeks to offer a framework for analyzing how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice.
We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives.
arXiv Detail & Related papers (2020-06-22T15:57:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.