ML-Based Teaching Systems: A Conceptual Framework
- URL: http://arxiv.org/abs/2305.07681v1
- Date: Fri, 12 May 2023 09:55:34 GMT
- Title: ML-Based Teaching Systems: A Conceptual Framework
- Authors: Philipp Spitzer, Niklas Kühl, Daniel Heinz, Gerhard Satzger
- Abstract summary: We investigate the potential of machine learning (ML) models to facilitate knowledge transfer in an organizational context.
We examine key concepts, themes, and dimensions to better understand and design ML-based teaching systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As the shortage of skilled workers continues to be a pressing issue,
exacerbated by demographic change, it is becoming a critical challenge for
organizations to preserve the knowledge of retiring experts and to pass it on
to novices. While this knowledge transfer has traditionally taken place through
personal interaction, it lacks scalability and requires significant resources
and time. IT-based teaching systems have addressed this scalability issue, but
their development is still tedious and time-consuming. In this work, we
investigate the potential of machine learning (ML) models to facilitate
knowledge transfer in an organizational context, leading to more cost-effective
IT-based teaching systems. Through a systematic literature review, we examine
key concepts, themes, and dimensions to better understand and design ML-based
teaching systems. To do so, we capture and consolidate the capabilities of ML
models in IT-based teaching systems, inductively analyze relevant concepts in
this context, and determine their interrelationships. We present our findings
in the form of a review of the key concepts, themes, and dimensions to
understand and inform on ML-based teaching systems. Building on these results,
our work contributes to research on computer-supported cooperative work by
conceptualizing how ML-based teaching systems can preserve expert knowledge and
facilitate its transfer from subject matter experts (SMEs) to human novices. In this way, we shed light
on this emerging subfield of human-computer interaction and serve to build an
interdisciplinary research agenda.
Related papers
- How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training [92.88889953768455] (2025-02-16)
  Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge.
  We identify computational subgraphs that facilitate knowledge storage and processing.
- Education in the Era of Neurosymbolic AI [0.6468510459310326] (2024-11-16)
  We propose a system that leverages the unique affordances of pedagogical agents as critical components of a hybrid NAI architecture.
  We conclude that education in the era of NAI will make learning more accessible, equitable, and aligned with real-world skills.
- Knowledge Tagging with Large Language Model based Multi-Agent System [17.53518487546791] (2024-09-12)
  This paper investigates the use of a multi-agent system to address the limitations of previous algorithms.
  We highlight the significant potential of an LLM-based multi-agent system in overcoming the challenges that previous methods have encountered.
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679] (2024-07-22)
  This paper reviews knowledge mechanism analysis through a novel taxonomy covering knowledge utilization and evolution.
  We discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (hypothesis) that will be challenging to address.
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745] (2024-06-19)
  Large Language Models (LLMs) are used to automate the knowledge tagging task.
  We show strong zero- and few-shot performance on math question knowledge tagging tasks.
  By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the potential of different-sized LLMs.
- Towards Automated Knowledge Integration From Human-Interpretable Representations [55.2480439325792] (2024-02-25)
  We introduce and theoretically motivate the principles of informed meta-learning, enabling automated and controllable inductive bias selection.
  We empirically demonstrate the potential benefits and limitations of informed meta-learning in improving data efficiency and generalisation.
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582] (2024-02-13)
  We explore machine unlearning in the domain of large language models (LLMs).
  This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
- Toward enriched Cognitive Learning with XAI [44.99833362998488] (2023-12-19)
  We introduce an intelligent system (CL-XAI) for Cognitive Learning supported by artificial intelligence (AI) tools.
  The use of CL-XAI is illustrated with a game-inspired virtual use case in which learners tackle problems to enhance their problem-solving skills.
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133] (2020-03-18)
  We propose a novel design philosophy called democratized learning (Dem-AI).
  Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system self-organize into a hierarchical structure to perform learning tasks collectively and more efficiently.
  We present a reference design, inspired by various interdisciplinary fields, as a guideline for realizing future Dem-AI systems.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.