Learning AI Auditing: A Case Study of Teenagers Auditing a Generative AI Model
- URL: http://arxiv.org/abs/2508.04902v1
- Date: Wed, 06 Aug 2025 21:57:25 GMT
- Title: Learning AI Auditing: A Case Study of Teenagers Auditing a Generative AI Model
- Authors: Luis Morales-Navarro, Michelle Gan, Evelyn Yu, Lauren Vogelstein, Yasmin B. Kafai, Danaé Metaxa
- Abstract summary: We conducted a two-week participatory design workshop with 14 teenagers (ages 14-15). They audited the generative AI model behind TikTok's Effect House, a tool for creating interactive TikTok filters. Our findings show that participants were engaged and creative throughout the activities, independently raising and exploring new considerations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study investigates how high school-aged youth engage in algorithm auditing to identify and understand biases in artificial intelligence and machine learning (AI/ML) tools they encounter daily. With AI/ML technologies being increasingly integrated into young people's lives, there is an urgent need to equip teenagers with AI literacies that build both technical knowledge and awareness of social impacts. Algorithm audits (also called AI audits) have traditionally been employed by experts to assess potential harmful biases, but recent research suggests that non-expert users can also participate productively in auditing. We conducted a two-week participatory design workshop with 14 teenagers (ages 14-15), where they audited the generative AI model behind TikTok's Effect House, a tool for creating interactive TikTok filters. We present a case study describing how teenagers approached the audit, from deciding what to audit to analyzing data using diverse strategies and communicating their results. Our findings show that participants were engaged and creative throughout the activities, independently raising and exploring new considerations, such as age-related biases, that are uncommon in professional audits. We drew on our expertise in algorithm auditing to triangulate their findings and examine whether the workshop supported participants in reaching coherent conclusions in their audit. Although the resulting numbers of changes in race, gender, and age representation uncovered by the teens were slightly different from ours, we reached similar conclusions. This study highlights the potential for auditing to inspire learning activities to foster AI literacies, empower teenagers to critically examine AI systems, and contribute fresh perspectives to the study of algorithmic harms.
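The audit described above turns on counting how often the filter changed the perceived race, gender, and age of input images, and on checking whether the teens' tallies and the researchers' tallies support the same conclusions. The sketch below is purely illustrative: the record format, category labels, and example data are assumptions made for demonstration, not the workshop's actual protocol or the authors' analysis code.

```python
# Illustrative sketch only: the observation format and labels below are
# assumptions for demonstration, not the paper's actual audit protocol.
from collections import Counter

ATTRIBUTES = ("race", "gender", "age")

def tally_changes(observations):
    """Count, per attribute, how many filter outputs changed that attribute."""
    counts = Counter()
    for obs in observations:
        for attr in ATTRIBUTES:
            if obs[f"{attr}_before"] != obs[f"{attr}_after"]:
                counts[attr] += 1
    return counts

# Hypothetical example: one image annotated by a teen auditor and by a researcher.
teen_observations = [
    {"race_before": "Black", "race_after": "white",
     "gender_before": "woman", "gender_after": "woman",
     "age_before": "teen", "age_after": "adult"},
]
researcher_observations = [
    {"race_before": "Black", "race_after": "white",
     "gender_before": "woman", "gender_after": "woman",
     "age_before": "teen", "age_after": "teen"},
]

# The raw counts can differ (here, on age) while both tallies still support
# the same overall conclusion about which attributes the filter tends to alter.
print("Teens:      ", tally_changes(teen_observations))
print("Researchers:", tally_changes(researcher_observations))
```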
Related papers
- ChatGPT produces more "lazy" thinkers: Evidence of cognitive engagement decline [0.0]
This study investigates the impact of generative artificial intelligence (AI) tools on the cognitive engagement of students during academic writing tasks. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading.
arXiv Detail & Related papers (2025-06-30T18:41:50Z) - Investigating Middle School Students' Question-Asking and Answer-Evaluation Skills When Using ChatGPT for Science Investigation [18.913112043551045]
Generative AI (GenAI) tools such as ChatGPT allow users to explore and address a wide range of tasks. This study examines middle school students' ability to ask effective questions and critically evaluate ChatGPT responses.
arXiv Detail & Related papers (2025-05-02T08:38:17Z) - Investigating Youth AI Auditing [19.255894775715817]
This study explores the potential of youth (teens under the age of 18) to engage meaningfully in responsible AI (RAI). We investigated how youth can actively identify problematic behaviors in youth-relevant ubiquitous AI. We found that youth can contribute quality insights, shaped by their expertise, lived experiences, and age-related knowledge.
arXiv Detail & Related papers (2025-02-25T19:02:26Z) - Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications [0.41942958779358674]
Algorithm auditing is a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in. This paper proposes five steps that can support young people in auditing algorithms.
arXiv Detail & Related papers (2024-12-09T20:55:54Z) - Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z) - ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning [78.42927884000673]
ExACT is an approach to combine test-time search and self-learning to build o1-like models for agentic applications. We first introduce Reflective Monte Carlo Tree Search (R-MCTS), a novel test-time algorithm designed to enhance AI agents' ability to explore decision space on the fly. Next, we introduce Exploratory Learning, a novel learning strategy to teach agents to search at inference time without relying on any external search algorithms.
arXiv Detail & Related papers (2024-10-02T21:42:35Z) - Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications [0.44998333629984877]
This paper positions youth as auditors of their peers' machine learning (ML)-powered applications.
In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications.
arXiv Detail & Related papers (2024-04-08T21:15:26Z) - Social network analysis for personalized characterization and risk assessment of alcohol use disorders in adolescents using semantic technologies [42.29248343585333]
Alcohol Use Disorder (AUD) is a major concern for public health organizations worldwide.
This paper shows how a knowledge model is constructed, and compares the results obtained using the traditional method with those obtained using this fully automated model.
arXiv Detail & Related papers (2024-02-14T16:09:05Z) - Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI [46.40919004160953]
We investigate a co-design method, Matchmaking for AI, to enable fact-checkers, designers, and NLP researchers to collaboratively identify what fact-checker needs should be addressed by technology.
Co-design sessions we conducted with 22 professional fact-checkers yielded a set of 11 design ideas that offer a "north star."
Our work provides new insights into both human-centered fact-checking research and practice and AI co-design research.
arXiv Detail & Related papers (2023-08-14T15:31:32Z) - An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.