Assessing AI Impact Assessments: A Classroom Study
- URL: http://arxiv.org/abs/2311.11193v1
- Date: Sun, 19 Nov 2023 01:00:59 GMT
- Title: Assessing AI Impact Assessments: A Classroom Study
- Authors: Nari Johnson, Hoda Heidari
- Abstract summary: Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that provide structured processes to imagine the possible impacts of a proposed AI system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed many diverse instantiations of AIIAs, which take a variety of forms ranging from open-ended questionnaires to graded score-cards.
We conduct a classroom study at a large research-intensive university (R1) in an elective course focused on the societal and ethical implications of AI.
We find preliminary evidence that impact assessments can influence participants' perceptions of the potential risks of generative AI systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence Impact Assessments ("AIIAs"), a family of tools that
provide structured processes to imagine the possible impacts of a proposed AI
system, have become an increasingly popular proposal to govern AI systems.
Recent efforts from government or private-sector organizations have proposed
many diverse instantiations of AIIAs, which take a variety of forms ranging
from open-ended questionnaires to graded score-cards. However, to date there has
been limited evaluation of existing AIIA instruments. We conduct a classroom
study (N = 38) at a large research-intensive university (R1) in an elective
course focused on the societal and ethical implications of AI. We assign
students to different organizational roles (for example, an ML scientist or
product manager) and ask participant teams to complete one of three existing AI
impact assessments for one of two imagined generative AI systems. In our
thematic analysis of participants' responses to pre- and post-activity
questionnaires, we find preliminary evidence that impact assessments can
influence participants' perceptions of the potential risks of generative AI
systems, and the level of responsibility held by AI experts in addressing
potential harm. We also discover a consistent set of limitations shared by
several existing AIIA instruments, which we group into concerns about their
format and content, as well as the feasibility and effectiveness of the
activity in foreseeing and mitigating potential harms. Drawing on the findings
of this study, we provide recommendations for future work on developing and
validating AIIAs.
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
Particip-AI is a framework for gathering current and future AI use cases, along with their harms and benefits, from the non-expert public.
We gather responses from 295 demographically diverse participants.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs)
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements [14.393183391019292]
The AI Incident Database (AIID) is one of the few attempts at offering a relatively comprehensive database indexing prior instances of harms or near harms stemming from the deployment of AI technologies in the real world.
This study assesses the effectiveness of AIID as an educational tool to raise awareness regarding the prevalence and severity of AI harms in socially high-stakes domains.
arXiv Detail & Related papers (2023-10-10T02:55:09Z)
- Predictable Artificial Intelligence [67.79118050651908]
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
This paper aims to elucidate the questions, hypotheses and challenges relevant to Predictable AI.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Analyzing Character and Consciousness in AI-Generated Social Content: A Case Study of Chirper, the AI Social Network [0.0]
The study embarks on a comprehensive exploration of AI behavior, analyzing the effects of diverse settings on Chirper's responses.
Through a series of cognitive tests, the study gauges the self-awareness and pattern recognition prowess of Chirpers.
An intriguing aspect of the research is the exploration of the potential influence of a Chirper's handle or personality type on its performance.
arXiv Detail & Related papers (2023-08-30T15:40:18Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Progressing Towards Responsible AI [2.191505742658975]
The Observatory on Society and Artificial Intelligence (OSAI) grew out of the AI4EU project.
OSAI aims to stimulate reflection on a broad spectrum of AI issues (ethical, legal, social, economic, and cultural).
arXiv Detail & Related papers (2020-08-11T09:46:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.