Unpacking Approaches to Learning and Teaching Machine Learning in K-12 Education: Transparency, Ethics, and Design Activities
- URL: http://arxiv.org/abs/2406.03480v3
- Date: Tue, 3 Sep 2024 14:21:42 GMT
- Title: Unpacking Approaches to Learning and Teaching Machine Learning in K-12 Education: Transparency, Ethics, and Design Activities
- Authors: Luis Morales-Navarro, Yasmin B. Kafai
- Abstract summary: We identify three approaches to how learning and teaching machine learning could be conceptualized.
One of them, a data-driven approach, emphasizes providing young people with opportunities to create data sets, train, and test models.
A second approach, learning algorithm-driven, prioritizes learning about how the learning algorithms or engines behind ML models work.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this conceptual paper, we review existing literature on artificial intelligence/machine learning (AI/ML) education to identify three approaches to how learning and teaching ML could be conceptualized. One of them, a data-driven approach, emphasizes providing young people with opportunities to create data sets, train, and test models. A second approach, learning algorithm-driven, prioritizes learning about how the learning algorithms or engines behind ML models work. In addition, we identify efforts within a third approach that integrates the previous two. In our review, we focus on how the approaches: (1) glassbox and blackbox different aspects of ML, (2) build on learner interests and provide opportunities for designing applications, and (3) integrate ethics and justice. In the discussion, we address the challenges and opportunities of current approaches and suggest future directions for the design of learning activities.
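As one hedged illustration of the data-driven approach described above (the toy data, feature names, and use of scikit-learn are assumptions for this sketch, not an activity from the paper), learners might assemble a small labeled data set, hold out part of it, train a simple classifier, and test how well it predicts the held-out examples:

```python
# Minimal sketch of a data-driven learning activity: learners collect a small
# labeled data set, split it, train a classifier, and inspect test accuracy.
# The examples below are invented placeholders, not material from the paper.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical learner-collected examples: [hours_of_sleep, hours_of_screen_time]
features = [[8, 2], [7, 3], [6, 6], [5, 7], [9, 1], [4, 8], [7, 2], [5, 6]]
labels = ["rested", "rested", "tired", "tired", "rested", "tired", "rested", "tired"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)          # train on the learners' own data set
predictions = model.predict(X_test)  # test on held-out examples
print("test accuracy:", accuracy_score(y_test, predictions))
```

A learning-algorithm-driven variant of the same activity would instead open up the classifier itself, for example by tracing how the nearest-neighbor rule reaches each prediction.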
Related papers
- Let Students Take the Wheel: Introducing Post-Quantum Cryptography with Active Learning [4.804847392457553]
Post-quantum cryptography (PQC) has been identified as the solution for securing existing software systems.
This research proposes a novel active learning approach and assesses the best practices for teaching PQC to undergraduate and graduate students.
arXiv Detail & Related papers (2024-10-17T01:52:03Z) - Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z) - Social Learning: Towards Collaborative Learning with Large Language Models [10.24107243529341]
We introduce the framework of "social learning" in the context of large language models (LLMs).
We present and evaluate two approaches for knowledge transfer between LLMs.
We show that performance using these methods is comparable to results obtained with the original labels and prompts.
arXiv Detail & Related papers (2023-12-18T18:44:10Z) - Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
A long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques for achieving this goal, but each captures the advantages only partially.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z) - Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning is not as intensive as other research directions.
By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
A theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review [4.462334751640166]
This article reviews research studies that combine (two of) these learning algorithms.
Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network.
arXiv Detail & Related papers (2021-11-23T20:41:06Z) - Learning Data Teaching Strategies Via Knowledge Tracing [5.648636668261282]
We propose a novel method, called Knowledge Augmented Data Teaching (KADT), to optimize a data teaching strategy for a student model.
The KADT method incorporates a knowledge tracing model to dynamically capture the knowledge progress of a student model in terms of latent learning concepts.
We have evaluated the performance of the KADT method on four machine learning tasks: knowledge tracing, sentiment analysis, movie recommendation, and image classification.
arXiv Detail & Related papers (2021-11-13T10:10:48Z) - Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z) - Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning, as illustrated by the sketch after this entry.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
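The reduction described in the last entry can be made concrete with a short sketch. Everything below is a hedged toy construction (synthetic linear tasks, hand-picked data-set summary features, numpy and scikit-learn), not the authors' method: each (task data set, fitted model) pair becomes one (feature, label) example, and an ordinary supervised regressor learns to map data-set summaries to model parameters.

```python
# Hedged sketch (not the paper's construction) of "meta-learning as supervised
# learning": each (task data set, fitted model) pair becomes one (feature, label)
# training example for an ordinary supervised learner.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_task():
    """Invented toy task: y = w * x + noise, with a task-specific slope w."""
    w = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    y = w * x + 0.05 * rng.normal(size=20)
    return x, y

def dataset_features(x, y):
    """Summarize a task's data set as a fixed-length feature vector."""
    return np.array([x.mean(), y.mean(), x.std(), y.std(), np.mean(x * y)])

def fit_task_model(x, y):
    """The 'label': parameters of a model fit on this task (here, just the slope)."""
    return np.array([np.polyfit(x, y, 1)[0]])

# Treat (data-set summary, fitted-model parameters) pairs as (feature, label) samples.
tasks = [make_task() for _ in range(200)]
F = np.stack([dataset_features(x, y) for x, y in tasks])
W = np.stack([fit_task_model(x, y) for x, y in tasks])
meta_model = LinearRegression().fit(F, W)  # ordinary supervised learning

# At meta-test time, predict a new task's model parameters from its data set alone.
x_new, y_new = make_task()
predicted_w = meta_model.predict(dataset_features(x_new, y_new)[None, :])
print("predicted slope:", float(predicted_w.ravel()[0]),
      "directly fitted slope:", float(fit_task_model(x_new, y_new)[0]))
```

In this toy setting the meta-model predicts a new task's slope directly from summary statistics of its data set, which is the sense in which meta-learning can be treated as supervised learning over (data set, model) pairs.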