Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
- URL: http://arxiv.org/abs/2412.06989v3
- Date: Fri, 10 Jan 2025 19:05:56 GMT
- Title: Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications
- Authors: Luis Morales-Navarro, Yasmin B. Kafai, Lauren Vogelstein, Evelyn Yu, Danaë Metaxa
- Abstract summary: Algorithm auditing is a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in.
This paper proposes five steps that can support young people in auditing algorithms.
- Abstract: While there is widespread interest in supporting young people to critically evaluate machine learning-powered systems, there is little research on how we can support them in inquiring about how these systems work and what their limitations and implications may be. Outside of K-12 education, an effective strategy in evaluating black-boxed systems is algorithm auditing-a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in. In this paper, we review how expert researchers conduct algorithm audits and how end users engage in auditing practices to propose five steps that, when incorporated into learning activities, can support young people in auditing algorithms. We present a case study of a team of teenagers engaging with each step during an out-of-school workshop in which they audited peer-designed generative AI TikTok filters. We discuss the kind of scaffolds we provided to support youth in algorithm auditing and directions and challenges for integrating algorithm auditing into classroom activities. This paper contributes: (a) a conceptualization of five steps to scaffold algorithm auditing learning activities, and (b) examples of how youth engaged with each step during our pilot study.
Related papers
- Online inductive learning from answer sets for efficient reinforcement learning exploration [52.03682298194168]
We exploit inductive learning of answer set programs to learn a set of logical rules representing an explainable approximation of the agent policy.
We then perform answer set reasoning on the learned rules to guide the exploration of the learning agent at the next batch.
Our methodology produces a significant boost in the discounted return achieved by the agent, even in the first batches of training.
arXiv Detail & Related papers (2025-01-13T16:13:22Z)
- Open-Book Neural Algorithmic Reasoning [5.057669848157507]
We propose a novel open-book learning framework for neural networks.
In this framework, the network can access and utilize all instances in the training dataset when reasoning for a given instance.
We show that this open-book attention mechanism offers insights into the inherent relationships among various tasks in the benchmark.
arXiv Detail & Related papers (2024-12-30T02:14:58Z)
- Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications [0.44998333629984877]
This paper positions youth as auditors of their peers' machine learning (ML)-powered applications.
In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications.
arXiv Detail & Related papers (2024-04-08T21:15:26Z)
- Benchmarking Actor-Critic Deep Reinforcement Learning Algorithms for Robotics Control with Action Constraints [9.293472255463454]
This study presents a benchmark for evaluating action-constrained reinforcement learning (RL) algorithms.
We evaluate existing algorithms and their novel variants across multiple robotics control environments.
arXiv Detail & Related papers (2023-04-18T05:45:09Z)
- Teachable Reinforcement Learning via Advice Distillation [161.43457947665073]
We propose a new supervision paradigm for interactive learning based on "teachable" decision-making systems that learn from structured advice provided by an external teacher.
We show that agents that learn from advice can acquire new skills with significantly less human supervision than standard reinforcement learning algorithms.
arXiv Detail & Related papers (2022-03-19T03:22:57Z)
- Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising avenue for monitoring the performance of algorithms over long periods of time.
arXiv Detail & Related papers (2021-06-10T15:49:58Z)
- Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [8.360589318502816]
We propose and explore the concept of everyday algorithm auditing, a process in which users detect, understand, and interrogate problematic machine behaviors.
We argue that everyday users are powerful in surfacing problematic machine behaviors that may elude detection via more centrally-organized forms of auditing.
arXiv Detail & Related papers (2021-05-06T21:50:47Z)
- Mastering Rate based Curriculum Learning [78.45222238426246]
We argue that the notion of learning progress itself has several shortcomings that lead to a low sample efficiency for the learner.
We propose a new algorithm, based on the notion of mastering rate, that significantly outperforms learning progress-based algorithms.
arXiv Detail & Related papers (2020-08-14T16:34:01Z)
- Designing for Critical Algorithmic Literacies [11.6402289139761]
Children's ability to interrogate computational algorithms has become crucially important.
A growing body of work has attempted to articulate a set of "literacies" to describe the intellectual tools that children can use to understand, interrogate, and critique the algorithmic systems that shape their lives.
We present four design principles that we argue can help children develop literacies that allow them not only to understand how algorithms work, but also to critique and question them.
arXiv Detail & Related papers (2020-08-04T17:51:02Z)
- A Brief Look at Generalization in Visual Meta-Reinforcement Learning [56.50123642237106]
We evaluate the generalization performance of meta-reinforcement learning algorithms.
We find that these algorithms can display strong overfitting when they are evaluated on challenging tasks.
arXiv Detail & Related papers (2020-06-12T15:17:17Z)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.