InqEduAgent: Adaptive AI Learning Partners with Gaussian Process Augmentation
- URL: http://arxiv.org/abs/2508.03174v2
- Date: Wed, 06 Aug 2025 15:28:04 GMT
- Title: InqEduAgent: Adaptive AI Learning Partners with Gaussian Process Augmentation
- Authors: Tian-Fang Zhao, Wen-Xi Yang, Guan Liu, Liang Yang
- Abstract summary: This paper proposes an LLM-empowered agent model for simulating and selecting learning partners tailored to inquiry-oriented learning. Generative agents are designed to capture the cognitive and evaluative features of learners in real-world scenarios. The experimental results show that InqEduAgent performs best in most knowledge-learning scenarios and LLM environments.
- Score: 4.96669107440958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative partnership matters in inquiry-oriented education. However, most study partners are selected either through experience-based assignments with little scientific planning or by rule-based machine assistants, which struggle to expand their knowledge and lack flexibility. This paper proposes an LLM-empowered agent model, named InqEduAgent, for simulating and selecting learning partners tailored to inquiry-oriented learning. Generative agents are designed to capture the cognitive and evaluative features of learners in real-world scenarios. An adaptive matching algorithm with Gaussian process augmentation is then formulated to identify patterns within prior knowledge, providing optimal learning-partner matches for learners facing different exercises. The experimental results show that InqEduAgent performs best in most knowledge-learning scenarios and across LLM environments with different capability levels. This study promotes the intelligent allocation of human learning partners and the design of AI-based learning partners. The code, data, and appendix are publicly available at https://github.com/InqEduAgent/InqEduAgent.
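The abstract does not specify the form of the Gaussian process augmentation, so the following is only a minimal illustrative sketch of GP-based partner scoring: a GP regressor is fit on historical (learner, candidate-partner) feature pairs with observed collaboration outcomes, and its posterior mean ranks new candidates. All feature dimensions, data, and function names here are hypothetical, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean and variance at the test points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    # diag of K_s K^{-1} K_s^T subtracted from the prior variance (= 1 for RBF)
    var = 1.0 - np.einsum('ij,ji->i', K_s, np.linalg.solve(K, K_s.T))
    return mean, var

# Hypothetical data: each row encodes a (learner, candidate-partner) pair
# built from prior-knowledge profiles; y is an observed collaboration score.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(20, 4))
y_hist = X_hist.sum(axis=1) + 0.1 * rng.normal(size=20)
X_candidates = rng.normal(size=(5, 4))

scores, uncert = gp_posterior(X_hist, y_hist, X_candidates)
best = int(np.argmax(scores))  # index of the recommended learning partner
```

The posterior variance (`uncert`) is what makes a GP attractive for this kind of matching: a selection policy can trade off the predicted score against uncertainty when a candidate pairing has little prior data.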
Related papers
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS. It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset. GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z) - TutorLLM: Customizing Learning Recommendations with Knowledge Tracing and Retrieval-Augmented Generation [44.18659233932457]
TutorLLM is a personalized learning recommender system based on Knowledge Tracing (KT) and Retrieval-Augmented Generation (RAG). The novelty of TutorLLM lies in its unique combination of KT and RAG techniques with LLMs, which enables dynamic retrieval of context-specific knowledge. The evaluation includes user assessment questionnaires and performance metrics, demonstrating a 10% improvement in user satisfaction.
arXiv Detail & Related papers (2025-01-20T21:18:43Z) - Agent4Edu: Generating Learner Response Data by Generative Agents for Intelligent Education Systems [27.161576657380646]
Agent4Edu is a novel personalized learning simulator leveraging recent advancements in human intelligence simulation through large language models (LLMs). The learner profiles are constructed using real-world response data, capturing practice styles and cognitive factors. Each agent can interact with personalized learning algorithms, such as computerized adaptive testing.
arXiv Detail & Related papers (2025-01-17T18:05:04Z) - KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [73.34893326181046]
We present KBAlign, a self-supervised framework that enhances RAG systems through efficient model adaptation. Our key insight is to leverage the model's intrinsic capabilities for knowledge alignment through two innovative mechanisms. Experiments demonstrate that KBAlign can achieve 90% of the performance gain obtained through GPT-4-supervised adaptation.
arXiv Detail & Related papers (2024-11-22T08:21:03Z) - From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process. We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z) - EVOLvE: Evaluating and Optimizing LLMs For In-Context Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty. We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications. Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
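The state-less bandit setting referenced in this summary can be made concrete with a standard exploration algorithm such as UCB1; the arm probabilities, horizon, and seed below are illustrative assumptions, not values from the paper.

```python
import math
import random

def ucb1(arm_probs, horizon=5000, seed=0):
    """UCB1 on Bernoulli bandits: pull each arm once, then pick the arm
    maximizing empirical mean + sqrt(2 ln t / n_a) exploration bonus."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms   # pulls per arm
    sums = [0.0] * n_arms   # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initial round-robin so every count is nonzero
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Three arms with distinct payoff rates; UCB1 should concentrate on the best.
counts = ucb1([0.2, 0.5, 0.8])
```

This is the kind of baseline against which an LLM's in-context exploration can be measured: the regret gap between the LLM's choices and such an algorithm quantifies its (in)ability to explore optimally.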
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems [80.69865295743149]
This work studies the use of LLM-based agents to design collaborative AI systems autonomously. Based on ComfyBench, we develop ComfyAgent, a framework that empowers agents to autonomously design collaborative AI systems by generating workflows. While ComfyAgent achieves a resolve rate comparable to o1-preview and significantly surpasses other agents on ComfyBench, it has resolved only 15% of creative tasks.
arXiv Detail & Related papers (2024-09-02T17:44:10Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching [70.28786574064694]
A reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions.
The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams.
arXiv Detail & Related papers (2023-04-08T14:32:12Z) - Hybrid Learning for Orchestrating Deep Learning Inference in Multi-user Edge-cloud Networks [3.7630209350186807]
Collaborative end-edge-cloud computing for deep learning offers a range of performance and efficiency trade-offs.
The deep learning inference orchestration strategy employs reinforcement learning to find the optimal orchestration policy.
We demonstrate the efficacy of our hybrid learning (HL) strategy through experimental comparison with state-of-the-art RL-based inference orchestration.
arXiv Detail & Related papers (2022-02-21T21:50:50Z) - SWAG: A Wrapper Method for Sparse Learning [0.13854111346209866]
We propose a procedure to find a library of sparse learners with consequent low data collection and storage costs.
This new method delivers a low-dimensional network of attributes that can be easily interpreted.
We call this algorithm the "Sparse Wrapper AlGorithm" (SWAG).
arXiv Detail & Related papers (2020-06-23T08:53:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.