Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books
- URL: http://arxiv.org/abs/2410.01396v1
- Date: Wed, 2 Oct 2024 10:16:54 GMT
- Title: Can We Delegate Learning to Automation?: A Comparative Study of LLM Chatbots, Search Engines, and Books
- Authors: Yeonsun Yang, Ahyeon Shin, Mincheol Kang, Jiheon Kang, Jean Young Song
- Abstract summary: The transition away from traditional resources such as textbooks and web searches raises concerns among educators.
In this paper, we systematically uncover three main concerns from educators' perspectives.
Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited.
- Score: 0.6776894728701932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning is a key motivator behind information search behavior. With the emergence of LLM-based chatbots, students are increasingly turning to these tools as their primary resource for acquiring knowledge. However, the transition away from traditional resources like textbooks and web searches raises concerns among educators, who worry that fully automated LLMs might lead students to delegate critical steps of search as learning. In this paper, we systematically uncover three main concerns from educators' perspectives. In response, we conducted a mixed-methods study with 92 university students to compare three learning sources with different levels of automation. Our results show that LLMs support comprehensive understanding of key concepts without promoting passive learning, though their effectiveness in knowledge retention was limited. Additionally, we found that academic performance affected both learning outcomes and search patterns. Notably, higher-competence learners engaged more deeply with content through reading-intensive behaviors rather than relying on search activities.
Related papers
- Position: LLMs Can be Good Tutors in Foreign Language Education [87.88557755407815]
We argue that large language models (LLMs) have the potential to serve as effective tutors in foreign language education (FLE).
Specifically, LLMs can play three critical roles: (1) as data enhancers, improving the creation of learning materials or serving as student simulations; (2) as task predictors, supporting learner assessment or optimizing learning pathways; and (3) as agents, enabling personalized and inclusive education.
arXiv Detail & Related papers (2025-02-08T06:48:49Z)
- Web vs. LLMs: An Empirical Study of Learning Behaviors of CS2 Students [2.0624236247076406]
ChatGPT has been widely adopted by students in higher education as a tool for learning programming and related concepts.
It remains unclear how effectively students learn with LLMs and what strategies they use while doing so.
arXiv Detail & Related papers (2025-01-21T07:16:18Z)
- Embracing AI in Education: Understanding the Surge in Large Language Model Use by Secondary Students [53.20318273452059]
Large language models (LLMs) like OpenAI's ChatGPT have opened up new avenues in education.
Despite school restrictions, our survey of over 300 middle and high school students revealed that a remarkable 70% of students have utilized LLMs.
We propose several ideas to address the issues raised, including subject-specific models, personalized learning, and AI classrooms.
arXiv Detail & Related papers (2024-11-27T19:19:34Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can support open-ended dialogue tutoring.
We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
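The abstract does not describe how LLMKT works internally, so as a point of reference, here is a minimal sketch of Bayesian Knowledge Tracing (BKT), a classic baseline of the kind such KT methods are compared against; the parameter names and values are illustrative, not taken from the paper.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: a classic KT baseline.
# The parameter values below are illustrative, not from the paper.

def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.3) -> float:
    """Update the probability that a student has mastered a skill
    after observing one response (correct/incorrect)."""
    if correct:
        # P(known | correct response), via Bayes' rule
        num = p_known * (1 - p_slip)
        den = num + (1 - p_known) * p_guess
    else:
        # P(known | incorrect response)
        num = p_known * p_slip
        den = num + (1 - p_known) * (1 - p_guess)
    posterior = num / den
    # Account for the chance the student learned the skill on this step.
    return posterior + (1 - posterior) * p_learn

# Track knowledge over a dialogue: True = correct student response.
p = 0.2  # prior probability of mastery
for resp in [False, True, True, True]:
    p = bkt_update(p, resp)
    print(f"P(mastery) = {p:.3f}")
```

Each observed response shifts the mastery estimate via Bayes' rule, then a learning transition nudges it upward; an LLM-based method like LLMKT instead reasons over the dialogue text itself.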
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Enhancing Exploratory Learning through Exploratory Search with the Emergence of Large Language Models [3.1997856595607024]
This study attempts to unpack this complexity by combining exploratory search strategies with theories of exploratory learning.
Our work adapts Kolb's learning model by incorporating high-frequency exploration and feedback loops, aiming to promote deep cognitive and higher-order cognitive skill development in students.
arXiv Detail & Related papers (2024-08-09T04:30:16Z)
- When Search Engine Services meet Large Language Models: Visions and Challenges [53.32948540004658]
This paper conducts an in-depth examination of how integrating Large Language Models with search engines can mutually benefit both technologies.
We focus on two main areas: using search engines to improve LLMs (Search4LLM) and enhancing search engine functions using LLMs (LLM4Search).
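The abstract names the two directions without implementation detail; the following is a minimal, hypothetical sketch of the Search4LLM direction (grounding an LLM answer in retrieved results), where `search` and `llm_complete` are placeholder stand-ins for a real search API and LLM client.

```python
# Minimal sketch of the Search4LLM direction: ground an LLM answer in
# search results. `search` and `llm_complete` are hypothetical stand-ins,
# not APIs from the paper.

def search(query: str, k: int = 3) -> list[str]:
    # Placeholder: a real implementation would call a search engine API.
    return [f"(snippet {i} for '{query}')" for i in range(1, k + 1)]

def llm_complete(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API.
    return f"(answer grounded in: {prompt[:60]}...)"

def answer_with_search(question: str) -> str:
    """Retrieve snippets, then constrain the LLM to answer from them."""
    snippets = search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the search results below.\n"
        f"Search results:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)

print(answer_with_search("How do transformers handle long contexts?"))
```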
arXiv Detail & Related papers (2024-06-28T03:52:13Z)
- Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching [67.11497198002165]
Large language models (LLMs) often struggle to provide up-to-date information.
Existing approaches typically involve continued pre-training on new documents.
Motivated by the success of the Feynman Technique in efficient human learning, we introduce Self-Tuning.
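The abstract does not spell out Self-Tuning's training tasks, so the following is only a hedged sketch of the general Feynman-style idea: have the model turn a new document into its own study questions and answers, which then serve as fine-tuning data. The `llm` helper is a hypothetical stand-in.

```python
# Hedged sketch of Feynman-style self-teaching on a new document: the model
# generates its own study questions and answers, which become training data.
# `llm` is a hypothetical stand-in; the actual Self-Tuning tasks are not
# specified in the abstract.

def llm(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"(model output for: {prompt[:50]}...)"

def self_teach(document: str, n_questions: int = 3) -> list[dict]:
    """Turn a document into self-generated QA training examples."""
    examples = []
    for i in range(n_questions):
        q = llm(f"Read this document and write study question #{i+1}:\n{document}")
        a = llm(f"Document:\n{document}\n\nAnswer the question:\n{q}")
        examples.append({"question": q, "answer": a})
    return examples

# These examples would then be used for fine-tuning, so the model
# internalizes the new knowledge rather than just re-reading it.
for ex in self_teach("(text of a new document)"):
    print(ex)
```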
arXiv Detail & Related papers (2024-06-10T14:42:20Z)
- Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of the LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To draw out the reasoning potential of the smaller LM, we propose self-reflection learning, which motivates the student to learn from its own mistakes.
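The abstract outlines the paradigm but not its training recipe; below is a hedged sketch of how such a multi-round loop might look, with `teacher_rationale`, `student_answer`, and `finetune_student` as hypothetical stubs: the teacher tailors rationales to the student's own mistakes, and the student fine-tunes on them round by round.

```python
# Hedged sketch of multi-round reasoning distillation with self-reflection.
# All three helpers are hypothetical stand-ins; the paper's exact training
# recipe is not described in the abstract.

def teacher_rationale(question: str, student_mistake: str | None) -> str:
    # Placeholder: a large LLM explains, tailored to the student's error.
    hint = f" (addressing mistake: {student_mistake})" if student_mistake else ""
    return f"(step-by-step rationale for '{question}'{hint})"

def student_answer(question: str) -> str:
    # Placeholder: the smaller student LM attempts the question.
    return f"(student attempt at '{question}')"

def finetune_student(examples: list[tuple[str, str]]) -> None:
    # Placeholder: fine-tune the student on (question, rationale) pairs.
    print(f"fine-tuning on {len(examples)} examples")

def distill(questions: list[str], answers: dict[str, str], rounds: int = 2):
    for _ in range(rounds):
        batch = []
        for q in questions:
            attempt = student_answer(q)
            mistake = attempt if attempt != answers.get(q) else None
            # Teacher tailors its rationale to the student's own mistake,
            # so the student also learns from where it went wrong.
            batch.append((q, teacher_rationale(q, mistake)))
        finetune_student(batch)

distill(["What is 17 * 23?"], {"What is 17 * 23?": "391"})
```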
arXiv Detail & Related papers (2023-10-20T07:50:10Z)
- Searching to Learn with Instructional Scaffolding [7.159235937301605]
This paper investigates the incorporation of scaffolding into a search system through three strategies: AQE_SC, the automatic expansion of user queries with relevant subtopics; CURATED_SC, the presentation of a manually curated static list of relevant subtopics on the search engine result page; and FEEDBACK_SC, which projects real-time feedback about a user's exploration of the topic space on top of the CURATED_SC visualization. A minimal sketch of the first strategy follows.
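Of the three strategies, AQE_SC is the most directly algorithmic. Here is a minimal, hypothetical sketch of that kind of subtopic-based query expansion; the subtopic table is illustrative, not from the paper.

```python
# Minimal sketch of AQE_SC-style automatic query expansion: append relevant
# subtopics to the user's query before it reaches the search engine.
# The subtopic table below is illustrative only.

SUBTOPICS = {
    "photosynthesis": ["light reactions", "calvin cycle", "chlorophyll"],
}

def expand_query(query: str, max_terms: int = 2) -> str:
    """Expand a query with up to `max_terms` relevant subtopics."""
    terms = []
    for topic, subs in SUBTOPICS.items():
        if topic in query.lower():
            terms.extend(subs[:max_terms])
    return f"{query} {' '.join(terms)}" if terms else query

print(expand_query("How does photosynthesis work?"))
# -> "How does photosynthesis work? light reactions calvin cycle"
```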
arXiv Detail & Related papers (2021-11-29T15:15:02Z)
- Sharing to learn and learning to share; Fitting together Meta-Learning, Multi-Task Learning, and Transfer Learning: A meta review [4.462334751640166]
This article reviews research studies that combine pairs of these learning algorithms.
Based on the knowledge accumulated from the literature, we hypothesize a generic task-agnostic and model-agnostic learning network.
arXiv Detail & Related papers (2021-11-23T20:41:06Z)