Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications
- URL: http://arxiv.org/abs/2404.05874v3
- Date: Tue, 16 Apr 2024 14:57:09 GMT
- Title: Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications
- Authors: Luis Morales-Navarro, Yasmin B. Kafai, Vedya Konda, Danaë Metaxa
- Abstract summary: This paper positions youth as auditors of their peers' machine learning (ML)-powered applications.
In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications.
- Score: 0.44998333629984877
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As artificial intelligence/machine learning (AI/ML) applications become more pervasive in youth lives, supporting them to interact, design, and evaluate applications is crucial. This paper positions youth as auditors of their peers' ML-powered applications to better understand algorithmic systems' opaque inner workings and external impacts. In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications. We analyzed pre/post clinical interviews in which youth were presented with auditing tasks. The analyses show that after the workshop all youth identified algorithmic biases and inferred dataset and model design issues. Youth also discussed algorithmic justice issues and ML model improvements. Furthermore, youth reflected that auditing provided them new perspectives on model functionality and ideas to improve their own models. This work contributes (1) a conceptualization of algorithm auditing for youth; and (2) empirical evidence of the potential benefits of auditing. We discuss potential uses of algorithm auditing in learning and child-computer interaction research.
Related papers
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents [51.9387884953294]
We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing large language models on AI research tasks.
This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents.
We evaluate a number of frontier large language models (LLMs) on our benchmarks such as Claude-3.5-Sonnet, Llama-3.1 405B, GPT-4o, o1-preview, and Gemini-1.5 Pro.
arXiv Detail & Related papers (2025-02-20T12:28:23Z) - Is ChatGPT Massively Used by Students Nowadays? A Survey on the Use of Large Language Models such as ChatGPT in Educational Settings [0.25782420501870296]
This study investigates how 395 students aged 13 to 25 years old in France and Italy integrate Large Language Models (LLMs) into their educational routines.
Key findings include the widespread use of these tools across all age groups and disciplines.
Results also show gender disparities, raising concerns about an emerging AI literacy and technological gender gap.
arXiv Detail & Related papers (2024-12-23T11:29:44Z) - Learning About Algorithm Auditing in Five Steps: Scaffolding How High School Youth Can Systematically and Critically Evaluate Machine Learning Applications [0.41942958779358674]
Algorithm auditing is a method for understanding algorithmic systems' opaque inner workings and external impacts from the outside in.
This paper proposes five steps that can support young people in auditing algorithms.
arXiv Detail & Related papers (2024-12-09T20:55:54Z) - ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning [78.42927884000673]
ExACT is an approach to combine test-time search and self-learning to build o1-like models for agentic applications.
We first introduce Reflective Monte Carlo Tree Search (R-MCTS), a novel test time algorithm designed to enhance AI agents' ability to explore decision space on the fly.
Next, we introduce Exploratory Learning, a novel learning strategy to teach agents to search at inference time without relying on any external search algorithms.
arXiv Detail & Related papers (2024-10-02T21:42:35Z) - Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of their future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z) - Investigating Youths' Everyday Understanding of Machine Learning Applications: a Knowledge-in-Pieces Perspective [0.0]
Despite recent calls for including artificial intelligence in K-12 education, not enough attention has been paid to studying youths' everyday knowledge about machine learning (ML).
We investigate teens' everyday understanding of ML through a knowledge-in-pieces perspective.
Our analyses reveal that youths showed some understanding that ML applications learn from training data, recognize patterns in input data, and provide different outputs depending on those patterns.
arXiv Detail & Related papers (2024-03-31T16:11:33Z) - Not Just Training, Also Testing: High School Youths' Perspective-Taking through Peer Testing Machine Learning-Powered Applications [0.0]
Testing machine learning applications can help creators of applications identify and address failure and edge cases.
We analyzed testing worksheets and audio and video recordings collected during a two-week workshop in which 11 high school youths created physical computing projects.
We found that through peer testing, youths reflected on the size of their training datasets, the diversity of their training data, the design of their classes, and the contexts in which they produced training data.
arXiv Detail & Related papers (2023-11-21T17:15:43Z) - Zero-shot Item-based Recommendation via Multi-task Product Knowledge Graph Pre-Training [106.85813323510783]
This paper presents a novel paradigm for the Zero-Shot Item-based Recommendation (ZSIR) task.
It pre-trains a model on a product knowledge graph (PKG) to refine item features from pre-trained language models (PLMs).
We identify three challenges for pre-training on the PKG: multi-type relations in the PKG, semantic divergence between generic item information and relations, and domain discrepancy between the PKG and the downstream ZSIR task.
arXiv Detail & Related papers (2023-05-12T17:38:24Z) - What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z) - Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.