Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
- URL: http://arxiv.org/abs/2412.06989v3
- Date: Fri, 10 Jan 2025 19:05:56 GMT
- Title: Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
- Authors: Luis Morales-Navarro, Yasmin B. Kafai, Lauren Vogelstein, Evelyn Yu, Danaë Metaxa
- Abstract summary: Algorithm auditing is a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in. This paper proposes five steps that can support young people in auditing algorithms.
- Score: 0.41942958779358674
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing: a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in. In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms. We present a case study of a team of teenagers engaging with each step during an out-of-school workshop in which they audited peer-designed generative AI TikTok filters. We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities. This paper contributes: (a) a conceptualization of five steps to scaffold algorithm auditing learning activities, and (b) examples of how youth engaged with each step during our pilot study.
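To make "auditing from the outside in" concrete, below is a minimal sketch (not the authors' method, and not the TikTok-filter setting from the workshop) of what a systematic audit loop can look like in Python. The `classify` function is a hypothetical stand-in for any black-boxed model that auditors can only query; the harness probes it with paired inputs that differ in a single term and compares the output distributions.

```python
# Illustrative sketch of an outside-in algorithm audit; `classify` is a
# hypothetical placeholder for an opaque, query-only system.
from collections import Counter

def classify(text: str) -> str:
    # Placeholder: a real audit would call the black-boxed application here.
    return "positive" if "great" in text.lower() else "negative"

def audit(template: str, group_a: list[str], group_b: list[str]) -> dict:
    """Probe the system with inputs that differ only in one term and
    tally how the outputs are distributed across the two groups."""
    results = {"group_a": Counter(), "group_b": Counter()}
    for term in group_a:
        results["group_a"][classify(template.format(term))] += 1
    for term in group_b:
        results["group_b"][classify(template.format(term))] += 1
    return results

if __name__ == "__main__":
    outcomes = audit(
        "My {} teacher said the project was great.",
        group_a=["chemistry", "history", "art"],
        group_b=["coding", "math", "physics"],
    )
    print(outcomes)  # systematically diverging distributions would flag behavior worth interrogating
```

A harness like this only reflects the general audit workflow of generating inputs systematically, recording outputs, and looking for patterns; the paper's five steps themselves are pedagogical scaffolds rather than code.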
Related papers
- Providing Information About Implemented Algorithms Improves Program Comprehension: A Controlled Experiment [46.198289193451146]
Annotating source code with algorithm labels significantly improves program comprehension.
A majority of participants perceived the labels as helpful, especially for recognizing the code's intent.
Reasons for self-implementing algorithms included library inadequacies, performance needs, and avoiding dependencies or licensing costs.
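For illustration, an "algorithm label" can be as lightweight as a comment naming the algorithm a function implements; the snippet below is a hypothetical Python example, not taken from the study's materials.

```python
def find_index(sorted_values, target):
    # Algorithm: binary search (iterative)
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```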
arXiv Detail & Related papers (2025-04-27T13:08:30Z) - Youth as Advisors in Participatory Design: Situating Teens' Expertise in Everyday Algorithm Auditing with Teachers and Researchers [0.0]
We situate youth as advisors to a group of high school computer science teacher- and researcher-designers creating learning activities.
Specifically, we explore algorithm auditing as a potential entry point for youth and adults to critically evaluate generative AI algorithmic systems.
arXiv Detail & Related papers (2025-04-09T18:27:17Z) - Open-Book Neural Algorithmic Reasoning [5.057669848157507]
We propose a novel open-book learning framework for neural networks.
In this framework, the network can access and utilize all instances in the training dataset when reasoning for a given instance.
We show that this open-book attention mechanism offers insights into the inherent relationships among various tasks in the benchmark.
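The summary gives no architectural detail; as a loose sketch under that caveat, "accessing all training instances when reasoning about a new one" can be pictured as ordinary dot-product attention whose keys and values come from a stored memory of training-set embeddings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical "open book": embeddings of every training instance.
rng = np.random.default_rng(0)
train_memory = rng.normal(size=(100, 16))   # 100 instances, 16-dim embeddings

def open_book_attend(query_embedding, memory):
    """Attend from one query instance over the whole training memory."""
    scores = memory @ query_embedding / np.sqrt(memory.shape[1])
    weights = softmax(scores)                # which training instances are consulted
    return weights @ memory, weights         # aggregated context, attention weights

context, weights = open_book_attend(rng.normal(size=16), train_memory)
print(weights.argmax())  # the stored instance this query leans on most
```

Inspecting which stored instances receive high attention weight is the kind of signal such a mechanism can expose.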
arXiv Detail & Related papers (2024-12-30T02:14:58Z) - Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications [0.44998333629984877]
This paper positions youth as auditors of their peers' machine learning (ML)-powered applications.
In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications.
arXiv Detail & Related papers (2024-04-08T21:15:26Z) - Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control with Action Constraints [9.293472255463454]
This study presents a benchmark for evaluating action-constrained reinforcement learning (RL) algorithms.
We evaluate existing algorithms and their novel variants across multiple robotics control environments.
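The summary does not say how the constraints are handled; one simple approach sometimes used in action-constrained RL (assumed here purely for illustration, not necessarily what the benchmark's algorithms do) is to map each proposed action into the feasible set before it reaches the robot, e.g., by clipping to per-joint bounds and rescaling to a norm limit.

```python
import numpy as np

def constrain_action(action, low, high, max_norm):
    """Clip a proposed action to box bounds, then rescale to an L2-norm limit."""
    clipped = np.clip(action, low, high)     # per-joint position/torque limits
    norm = np.linalg.norm(clipped)
    if norm > max_norm:                      # overall effort or power budget
        clipped = clipped * (max_norm / norm)
    return clipped

raw_action = np.array([1.4, -2.0, 0.3])
print(constrain_action(raw_action, low=-1.0, high=1.0, max_norm=1.2))
```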
arXiv Detail & Related papers (2023-04-18T05:45:09Z) - Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
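As a very rough sketch of the "learn from advice, then act without it" idea (the advice format and learner here are assumptions, not the paper's algorithm): the agent first logs (observation, advised action) pairs while following a teacher, then distills that advice into a standalone policy by supervised learning.

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher_advice(obs):
    # Hypothetical structured advice: "move toward the larger coordinate".
    return 1 if obs[0] < obs[1] else 0

# Phase 1: act with the teacher and log the advice.
observations = rng.uniform(size=(500, 2))
advised_actions = np.array([teacher_advice(o) for o in observations])

# Phase 2: distill the advice into a small policy (logistic regression
# fit by plain gradient descent) that no longer needs the teacher.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(observations @ w + b)))
    w -= 0.1 * observations.T @ (p - advised_actions) / len(advised_actions)
    b -= 0.1 * (p - advised_actions).mean()

student = lambda obs: int(obs @ w + b > 0)
print(student(np.array([0.2, 0.9])), student(np.array([0.9, 0.2])))  # expect 1 0
```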
arXiv Detail & Related papers (2022-03-19T03:22:57Z) - Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising avenue for monitoring the performance of algorithms across long periods of time.
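A minimal sketch of what one such virtual agent might do (the query set, engine name, and `fetch_results` function are placeholders, not the paper's infrastructure): issue the same queries on a schedule from an assigned region and archive the ranked results so that collections from many agents can later be compared.

```python
import json
import time
from datetime import datetime, timezone

QUERIES = ["election results", "climate change", "minimum wage"]

def fetch_results(engine, query, region):
    # Placeholder: a real agent would load and parse the results page for
    # this engine/region pair, e.g., via browser automation.
    return [{"rank": i + 1, "url": f"https://example.org/{query}/{i}"} for i in range(10)]

def run_collection(engine="engine-A", region="DE", rounds=3, pause_s=1):
    log = []
    for _ in range(rounds):
        for query in QUERIES:
            log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "engine": engine, "region": region, "query": query,
                "results": fetch_results(engine, query, region),
            })
        time.sleep(pause_s)   # real collections run for weeks or months
    return log

print(json.dumps(run_collection()[0], indent=2)[:300])
```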
arXiv Detail & Related papers (2021-06-10T15:49:58Z) - Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [8.360589318502816]
We propose and explore the concept of everyday algorithm auditing, a process in which users detect, understand, and interrogate problematic machine behaviors.
We argue that everyday users are powerful in surfacing problematic machine behaviors that may elude detection via more centrally-organized forms of auditing.
arXiv Detail & Related papers (2021-05-06T21:50:47Z) - Mastering Rate based Curriculum Learning [78.45222238426246]
We argue that the notion of learning progress itself has several shortcomings that lead to low sample efficiency for the learner.
We propose a new algorithm, based on the notion of mastering rate, that significantly outperforms learning progress-based algorithms.
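The summary does not define the mastering rate precisely, so the following is an assumption-heavy toy sketch rather than the paper's algorithm: estimate per-task mastery from recent success rates and preferentially sample tasks with intermediate mastery, i.e., tasks that are being learned but are not yet mastered.

```python
import random
from collections import defaultdict, deque

class MasteryCurriculum:
    """Toy curriculum: track a windowed success rate per task and sample
    tasks whose estimated mastery is neither near 0 nor near 1."""

    def __init__(self, tasks, window=20):
        self.tasks = tasks
        self.history = defaultdict(lambda: deque(maxlen=window))

    def mastery(self, task):
        h = self.history[task]
        return sum(h) / len(h) if h else 0.0

    def sample_task(self):
        # m * (1 - m) peaks at intermediate mastery; the constant keeps
        # unexplored tasks from being starved.
        weights = [self.mastery(t) * (1 - self.mastery(t)) + 0.05 for t in self.tasks]
        return random.choices(self.tasks, weights=weights, k=1)[0]

    def record(self, task, success):
        self.history[task].append(1.0 if success else 0.0)

curriculum = MasteryCurriculum(["easy", "medium", "hard"])
for _ in range(200):
    task = curriculum.sample_task()
    success = random.random() < {"easy": 0.9, "medium": 0.5, "hard": 0.1}[task]
    curriculum.record(task, success)
print({t: round(curriculum.mastery(t), 2) for t in curriculum.tasks})
```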
arXiv Detail & Related papers (2020-08-14T16:34:01Z) - A Brief Look at Generalization in Visual Meta-Reinforcement Learning [56.50123642237106]
We evaluate the generalization performance of meta-reinforcement learning algorithms.
We find that these algorithms can display strong overfitting when they are evaluated on challenging tasks.
arXiv Detail & Related papers (2020-06-12T15:17:17Z) - Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
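As a crude sketch of that two-component framework (with simplifications: k-means stands in for the unsupervised learning component, and plain Q-learning stands in for the no-regret tabular RL component), the unsupervised step maps raw observations to a small discrete state space, and the tabular algorithm then learns over those states.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy chain environment: the true position (0..4) is hidden; the agent only
# sees a noisy 2-D observation of it. Reaching position 4 gives reward 1.
def step(pos, action):                       # actions: 0 = left, 1 = right
    pos = min(max(pos + (1 if action == 1 else -1), 0), 4)
    obs = np.array([float(pos), 0.0]) + rng.normal(scale=0.1, size=2)
    return pos, obs, (1.0 if pos == 4 else 0.0)

# Component 1: unsupervised learning builds a discrete state abstraction.
warmup_obs = np.array([step(rng.integers(5), rng.integers(2))[1] for _ in range(500)])
encoder = KMeans(n_clusters=5, n_init=10, random_state=0).fit(warmup_obs)

# Component 2: tabular Q-learning over the learned states.
Q = np.zeros((5, 2))
for _ in range(200):
    pos, obs = 0, np.array([0.0, 0.0])
    for _ in range(20):
        s = int(encoder.predict(obs.reshape(1, -1))[0])
        a = int(rng.integers(2)) if rng.random() < 0.2 else int(Q[s].argmax())
        pos, obs, r = step(pos, a)
        s_next = int(encoder.predict(obs.reshape(1, -1))[0])
        Q[s, a] += 0.1 * (r + 0.95 * Q[s_next].max() - Q[s, a])
print(Q.round(2))
```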
arXiv Detail & Related papers (2020-03-15T19:23:59Z) - Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.