"I'm Not Confident in Debiasing AI Systems Since I Know Too Little":
Teaching AI Creators About Gender Bias Through Hands-on Tutorials
- URL: http://arxiv.org/abs/2309.08121v1
- Date: Fri, 15 Sep 2023 03:09:36 GMT
- Authors: Kyrie Zhixuan Zhou, Jiaxun Cao, Xiaowen Yuan, Daniel E. Weissglass,
Zachary Kilhoffer, Madelyn Rose Sanfilippo, Xin Tong
- Abstract summary: Gender bias is rampant in AI systems, causing bad user experience,
injustices, and mental harm to women. School curricula fail to educate AI creators on this topic.
- Score: 11.823789408603908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gender bias is rampant in AI systems, causing bad user experience,
injustices, and mental harm to women. School curricula fail to educate AI
creators on this topic, leaving them unprepared to mitigate gender bias in AI.
In this paper, we designed hands-on tutorials to raise AI creators' awareness
of gender bias in AI and enhance their knowledge of sources of gender bias and
debiasing techniques. The tutorials were evaluated with 18 AI creators,
including AI researchers, AI industrial practitioners (i.e., developers and
product managers), and students who had studied AI. Their improved awareness
and knowledge demonstrated the effectiveness of our tutorials, which have the
potential to complement the insufficient AI gender bias education in CS/AI
courses. Based on the findings, we synthesize design implications and a rubric
to guide future research, education, and design efforts.
Related papers
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs [0.0]
The mainstream AI approaches are generative deep learning with large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z)
- AI Audit: A Card Game to Reflect on Everyday AI Systems [21.75299649772085]
An essential element of K-12 AI literacy is educating learners about the ethical and societal implications of AI systems.
There is little work in using game-based learning methods in AI literacy.
We developed a competitive card game for middle and high school students called "AI Audit".
arXiv Detail & Related papers (2023-05-29T06:41:47Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management [0.0]
Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate feminist, non-exploitative design principles.
arXiv Detail & Related papers (2021-09-21T21:22:58Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Bias: Friend or Foe? User Acceptance of Gender Stereotypes in Automated Career Recommendations [8.44485053836748]
We show that a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world.
Using career recommendation as a case study, we build a fair AI career recommender by employing gender debiasing machine learning techniques.
arXiv Detail & Related papers (2021-06-13T23:27:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.