Not Just Training, Also Testing: High School Youths' Perspective-Taking
through Peer Testing Machine Learning-Powered Applications
- URL: http://arxiv.org/abs/2311.12733v2
- Date: Thu, 14 Dec 2023 14:06:21 GMT
- Title: Not Just Training, Also Testing: High School Youths' Perspective-Taking
through Peer Testing Machine Learning-Powered Applications
- Authors: L. Morales-Navarro, M. Shah, Y. B. Kafai
- Abstract summary: Testing machine learning applications can help creators of applications identify and address failure and edge cases.
- We analyzed testing worksheets and audio and video recordings collected during a two-week workshop in which 11 high school youths created physical computing projects.
- We found that through peer-testing, youths reflected on the size of their training datasets, the diversity of their training data, the design of their classes, and the contexts in which they produced training data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most attention in K-12 artificial intelligence and machine learning (AI/ML)
education has been given to having youths train models, with much less
attention to the equally important testing of models when creating machine
learning applications. Testing ML applications allows for the evaluation of
models against predictions and can help creators of applications identify and
address failure and edge cases that could negatively impact user experiences.
We investigate how testing each other's projects supported youths in taking
perspective on the functionality, performance, and potential issues in their own
projects. We analyzed testing worksheets and audio and video recordings collected
during a two-week workshop in which 11 high school youths created physical
computing projects that included (audio, pose, and image) ML classifiers. We
found that through peer-testing, youths reflected on the size of their training
datasets, the diversity of their training data, the design of their classes, and
the contexts in which they produced training data. We discuss future directions
for research on peer-testing in AI/ML education and current limitations for
these kinds of activities.
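To make the peer-testing activity concrete, the following is a minimal sketch, not the workshop's actual tooling: the `classify` stub and the test-case fields are hypothetical stand-ins for any trained audio, pose, or image classifier. It compares a model's predictions against the labels peers expected and groups failures by class, the kind of evidence that can prompt reflection on dataset size, data diversity, and class design.

```python
from collections import defaultdict

def classify(example):
    # Hypothetical stand-in: a real project would call its trained
    # audio, pose, or image model here.
    return example.get("predicted_label", "unknown")

def peer_test(model, test_cases):
    """Compare predictions with the labels peers expected; collect failures per class."""
    failures = defaultdict(list)
    correct = 0
    for case in test_cases:
        predicted = model(case)
        if predicted == case["expected_label"]:
            correct += 1
        else:
            failures[case["expected_label"]].append(
                {"case": case["description"], "predicted": predicted}
            )
    accuracy = correct / len(test_cases) if test_cases else 0.0
    return accuracy, dict(failures)

# Peers probe inputs the creator may not have covered in training data.
cases = [
    {"description": "clap recorded in a noisy hallway",
     "expected_label": "clap", "predicted_label": "snap"},
    {"description": "wave gesture performed while seated",
     "expected_label": "wave", "predicted_label": "wave"},
]
accuracy, failures = peer_test(classify, cases)
print(f"accuracy: {accuracy:.2f}")
print("failure cases by class:", failures)
```

Grouping failures by expected class, rather than reporting a single accuracy number, mirrors how testers can trace an error back to a specific class and the contexts in which its training data were produced.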
Related papers
- LLMs Integration in Software Engineering Team Projects: Roles, Impact, and a Pedagogical Design Space for AI Tools in Computing Education [7.058964784190549]
This work takes a pedagogical lens to explore the implications of generative AI (GenAI) models and tools, such as ChatGPT and GitHub Copilot.
Our results address a particular gap in understanding the role and implications of GenAI on teamwork, team-efficacy, and team dynamics.
arXiv Detail & Related papers (2024-10-30T14:43:33Z)
- Detecting Unsuccessful Students in Cybersecurity Exercises in Two Different Learning Environments [0.37729165787434493]
This paper develops automated tools to predict when a student is having difficulty.
In a potential application, such models can aid instructors in detecting struggling students and providing targeted help.
arXiv Detail & Related papers (2024-08-16T04:57:54Z)
- Youth as Peer Auditors: Engaging Teenagers with Algorithm Auditing of Machine Learning Applications [0.44998333629984877]
This paper positions youth as auditors of their peers' machine learning (ML)-powered applications.
In a two-week workshop, 13 youth (ages 14-15) designed and audited ML-powered applications.
arXiv Detail & Related papers (2024-04-08T21:15:26Z)
- Investigating Youths' Everyday Understanding of Machine Learning Applications: a Knowledge-in-Pieces Perspective [0.0]
Despite recent calls for including artificial intelligence in K-12 education, not enough attention has been paid to studying youths' everyday knowledge about machine learning (ML).
We investigate teens' everyday understanding of ML through a knowledge-in-pieces perspective.
Our analyses reveal that youths showed some understanding that ML applications learn from training data, recognize patterns in input data, and produce different outputs depending on those patterns.
arXiv Detail & Related papers (2024-03-31T16:11:33Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Exploring Machine Teaching with Children [9.212643929029403]
Iteratively building and testing machine learning models can help children develop creativity, flexibility, and comfort with machine learning and artificial intelligence.
We explore how children use machine teaching interfaces with a team of 14 children (aged 7-13 years) and adult co-designers.
arXiv Detail & Related papers (2021-09-23T15:18:53Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.