Using Case Studies to Teach Responsible AI to Industry Practitioners
- URL: http://arxiv.org/abs/2407.14686v2
- Date: Tue, 23 Jul 2024 23:59:20 GMT
- Title: Using Case Studies to Teach Responsible AI to Industry Practitioners
- Authors: Julia Stoyanovich, Rodrigo Kreis de Paula, Armanda Lewis, Chloe Zheng
- Abstract summary: We propose a novel stakeholder-first educational approach that uses interactive case studies to achieve organizational and practitioner-level engagement and advance learning of Responsible AI (RAI).
Our assessment results indicate that participants found the workshops engaging and reported a positive shift in understanding and motivation to apply RAI to their work.
- Score: 8.152080071643685
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Responsible AI (RAI) is the science and the practice of making the design, development, and use of AI socially sustainable: of reaping the benefits of innovation while controlling the risks. Naturally, industry practitioners play a decisive role in our collective ability to achieve the goals of RAI. Unfortunately, we do not yet have consolidated educational materials and effective methodologies for teaching RAI to practitioners. In this paper, we propose a novel stakeholder-first educational approach that uses interactive case studies to achieve organizational and practitioner-level engagement and advance learning of RAI. We discuss a partnership with Meta, an international technology company, to co-develop and deliver RAI workshops to a diverse audience within the company. Our assessment results indicate that participants found the workshops engaging and reported a positive shift in understanding and motivation to apply RAI to their work.
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
Particip-AI is a framework for gathering current and future AI use cases, along with their harms and benefits, from the non-expert public.
We gather responses from 295 demographically diverse participants.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z) - The Participatory Turn in AI Design: Theoretical Foundations and the
Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z) - `It is currently hodgepodge'': Examining AI/ML Practitioners' Challenges
during Co-production of Responsible AI Values [4.091593765662773]
We interviewed 23 individuals, across 10 organizations, tasked to ship AI/ML based products while upholding RAI norms.
Top-down and bottom-up institutional structures create burdens for different roles, preventing them from upholding RAI values.
We offer recommendations for inclusive and equitable RAI value-practices.
arXiv Detail & Related papers (2023-07-14T21:57:46Z) - Investigating Practices and Opportunities for Cross-functional
Collaboration around AI Fairness in Industry Practice [10.979734542685447]
An emerging body of research indicates that ineffective cross-functional collaboration represents a major barrier to addressing issues of fairness in AI design and development.
We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles from 17 companies.
We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles.
arXiv Detail & Related papers (2023-06-10T23:42:26Z) - The Equitable AI Research Roundtable (EARR): Towards Community-Based
Decision Making in Responsible AI Development [4.1986677342209004]
The paper reports on our initial evaluation of The Equitable AI Research Roundtable.
EARR was created in collaboration among a large tech firm, nonprofits, NGO research institutions, and universities.
We outline three principles in practice of how EARR has operated thus far that are especially relevant to the concerns of the FAccT community.
arXiv Detail & Related papers (2023-03-14T18:57:20Z) - Learning Action-Effect Dynamics for Hypothetical Vision-Language
Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z) - An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z) - Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and
Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z) - Rebuilding Trust in Active Learning with Actionable Metrics [77.99796068970569]
Active Learning (AL) is an active domain of research, but it is seldom used in industry despite pressing needs.
This is due in part to a misalignment of objectives: research strives to achieve the best results on selected datasets.
We present various actionable metrics to help rebuild industrial practitioners' trust in Active Learning.
arXiv Detail & Related papers (2020-12-18T09:34:59Z) - Where Responsible AI meets Reality: Practitioner Perspectives on
Enablers for shifting Organizational Practices [3.119859292303396]
This paper examines how organizational culture and structure impact the effectiveness of responsible AI initiatives in practice, and offers a framework for analyzing that impact.
We present the results of semi-structured qualitative interviews with practitioners working in industry, investigating common challenges, ethical tensions, and effective enablers for responsible AI initiatives.
arXiv Detail & Related papers (2020-06-22T15:57:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.