Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books
- URL: http://arxiv.org/abs/2410.01396v1
- Date: Wed, 2 Oct 2024 10:16:54 GMT
- Title: Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books
- Authors: Yeonsun Yang, Ahyeon Shin, Mincheol Kang, Jiheon Kang, Jean Young Song,
- Abstract summary: The transition from traditional resources like textbooks and web searches raises concerns among educators.
In this paper, we systematically uncover three main concerns from educators' perspectives.
Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited.
- Score: 0.6776894728701932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning is a key motivator behind information search behavior. With the emergence of LLM-based chatbots, students are increasingly turning to these tools as their primary resource for acquiring knowledge. However, the transition from traditional resources like textbooks and web searches raises concerns among educators. They worry that these fully-automated LLMs might lead students to delegate critical steps of search as learning. In this paper, we systematically uncover three main concerns from educators' perspectives. In response to these concerns, we conducted a mixed-methods study with 92 university students to compare three learning sources with different automation levels. Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited. Additionally, we found that academic performance impacted both learning outcomes and search patterns. Notably, higher-competence learners engaged more deeply with content through reading-intensive behaviors rather than relying on search activities.
Related papers
- Enhancing Exploratory Learning through Exploratory Search with the Emergence of Large Language Models [3.1997856595607024]
This study attempts to unpack this complexity by combining exploratory search strategies with the theories of exploratory learning.
Our work adapts Kolb's learning model by incorporating high-frequency exploration and feedback loops, aiming to promote deep cognitive and higher-order cognitive skill development in students.
arXiv Detail & Related papers (2024-08-09T04:30:16Z) - When Search Engine Services meet Large Language Models: Visions and Challenges [53.32948540004658]
This paper conducts an in-depth examination of how integrating Large Language Models with search engines can mutually benefit both technologies.
We focus on two main areas: using search engines to improve LLMs (Search4LLM) and enhancing search engine functions using LLMs (LLM4Search).
arXiv Detail & Related papers (2024-06-28T03:52:13Z) - Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching [67.11497198002165]
Large language models (LLMs) often struggle to provide up-to-date information due to their one-time training.
Motivated by the remarkable success of the Feynman Technique in efficient human learning, we introduce Self-Tuning.
arXiv Detail & Related papers (2024-06-10T14:42:20Z) - Automate Knowledge Concept Tagging on Math Questions with LLMs [48.5585921817745]
Knowledge concept tagging for questions plays a crucial role in contemporary intelligent educational applications.
Traditionally, these annotations have been conducted manually with help from pedagogical experts.
In this paper, we explore automating the tagging task using Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-26T00:09:38Z) - Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of LLMs as reasoning teachers by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z) - Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
A long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques for achieving this goal, but each captures only part of the advantages.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z) - Searching to Learn with Instructional Scaffolding [7.159235937301605]
This paper investigates incorporating scaffolding into a search system through three strategies: AQE_SC, the automatic expansion of user queries with relevant subtopics; CURATED_SC, the presentation of a manually curated static list of relevant subtopics on the search engine result page; and FEEDBACK_SC, which projects real-time feedback about a user's exploration of the topic space on top of the CURATED_SC visualization.
arXiv Detail & Related papers (2021-11-29T15:15:02Z) - Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review [4.462334751640166]
This article reviews research studies that combine two of these learning algorithms.
Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network.
arXiv Detail & Related papers (2021-11-23T20:41:06Z) - Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML, which introduces an additional representation for each sentence learned from an extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z) - Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.