Improved Performances and Motivation in Intelligent Tutoring Systems:
Combining Machine Learning and Learner Choice
- URL: http://arxiv.org/abs/2402.01669v1
- Date: Tue, 16 Jan 2024 13:41:00 GMT
- Title: Improved Performances and Motivation in Intelligent Tutoring Systems:
Combining Machine Learning and Learner Choice
- Authors: Benjamin Clément (1 and 3), Hélène Sauzéon (1 and 2), Didier
Roy (1), Pierre-Yves Oudeyer (1) ((1) Inria FLOWERS team, Talence, France, (2)
Université de Bordeaux, BPH lab, Bordeaux, France, (3) EvidenceB, Paris, France)
- Abstract summary: We show that the addition of choice triggers intrinsic motivation and reinforces the learning effectiveness of the LP-based personalization.
We show that the intrinsic motivation elicited by a playful feature is beneficial only if the curriculum personalization is effective for the learner.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large class sizes pose challenges to personalized learning in schools, which
educational technologies, especially intelligent tutoring systems (ITS), aim to
address. In this context, the ZPDES algorithm, based on the Learning Progress
Hypothesis (LPH) and multi-armed bandit machine learning techniques, sequences
exercises that maximize learning progress (LP). This algorithm was previously
shown in field studies to boost learning performance for a wider diversity of
students compared to a hand-designed curriculum. However, its motivational
impact was not assessed. Moreover, ZPDES did not allow students to express choices.
This limitation in agency is at odds with the LPH theory, which is concerned with
modeling curiosity-driven learning. Here we study how the introduction of such
choice possibilities impacts both learning efficiency and motivation. The given
choice concerns dimensions that are orthogonal to exercise difficulty, acting
as a playful feature.
In an extensive field study (265 children aged 7-8 years, RCT design), we
compare systems based either on ZPDES or a hand-designed curriculum, both with
and without self-choice. We first show that ZPDES improves learning performance
and produces a positive and motivating learning experience. We then show that
the addition of choice triggers intrinsic motivation and reinforces the
learning effectiveness of the LP-based personalization. In doing so, it
strengthens the links between intrinsic motivation and performance progress
during the serious game. Conversely, deleterious effects of the playful feature
are observed for hand-designed linear paths. Thus, the intrinsic motivation
elicited by a playful feature is beneficial only if the curriculum
personalization is effective for the learner. This result deserves particular
attention given the increasing use of playful features in non-adaptive
educational technologies.
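
To make the LP-based sequencing concrete, here is a minimal illustrative sketch of a learning-progress bandit in the spirit of ZPDES. It is not the authors' implementation: the window size, the epsilon-greedy selection rule, the exercise names, and the binary success encoding are assumptions made for the example.

```python
import random
from collections import defaultdict, deque


class LPBandit:
    """Epsilon-greedy bandit that favors exercises with high learning progress."""

    def __init__(self, exercises, window=10, epsilon=0.2):
        self.exercises = list(exercises)
        self.epsilon = epsilon  # exploration rate (assumption)
        # Per-exercise record of the most recent successes/failures.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def learning_progress(self, exercise):
        """Estimate LP as the change in success rate over the recent window."""
        results = list(self.history[exercise])
        if len(results) < 4:
            return 1.0  # optimistic default so unexplored exercises get tried
        half = len(results) // 2
        older = sum(results[:half]) / half
        recent = sum(results[half:]) / (len(results) - half)
        return abs(recent - older)

    def choose(self):
        """Select the next exercise, mostly the one with the highest LP."""
        if random.random() < self.epsilon:
            return random.choice(self.exercises)
        return max(self.exercises, key=self.learning_progress)

    def update(self, exercise, success):
        """Record whether the student solved the exercise (1) or not (0)."""
        self.history[exercise].append(1 if success else 0)


# Example usage with placeholder exercise names and a simulated student.
bandit = LPBandit(["addition_1digit", "addition_2digit", "subtraction_1digit"])
for _ in range(30):
    exercise = bandit.choose()
    solved = random.random() < 0.6  # stand-in for the real student's answer
    bandit.update(exercise, solved)
```

The point of the sketch, as in the Learning Progress Hypothesis, is that exercises are chosen where the student's success rate is changing fastest, not where it is currently highest.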
Related papers
- SELFI: Autonomous Self-Improvement with Reinforcement Learning for Social Navigation [54.97931304488993]
Self-improving robots that interact and improve with experience are key to the real-world deployment of robotic systems.
We propose an online learning method, SELFI, that leverages online robot experience to rapidly fine-tune pre-trained control policies.
We report improvements in terms of collision avoidance, as well as more socially compliant behavior, measured by a human user study.
arXiv Detail & Related papers (2024-03-01T21:27:03Z) - RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over the potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - Towards Scalable Adaptive Learning with Graph Neural Networks and
Reinforcement Learning [0.0]
We introduce a flexible and scalable approach towards the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z) - Reinforcement Learning Tutor Better Supported Lower Performers in a Math
Task [32.6507926764587]
Reinforcement learning could be a key tool to reduce the development cost and improve the effectiveness of intelligent tutoring software.
We show that deep reinforcement learning can be used to provide adaptive pedagogical support to students learning about the concept of volume.
arXiv Detail & Related papers (2023-04-11T02:11:24Z) - Basis for Intentions: Efficient Inverse Reinforcement Learning using
Past Experience [89.30876995059168]
This paper addresses the problem of inverse reinforcement learning (IRL) -- inferring the reward function of an agent from observing its behavior.
arXiv Detail & Related papers (2022-08-09T17:29:49Z) - RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward network is competitive.
These results also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z) - Student sentiment Analysis Using Classification With Feature Extraction
Techniques [0.0]
This paper examines web-based learning and its effectiveness for students.
We study how machine learning techniques such as Logistic Regression (LR), Support Vector Machine (SVM), Naive Bayes (NB), and Decision Tree (DT) can be applied to classify student sentiment; a minimal illustrative pipeline is sketched after this list.
arXiv Detail & Related papers (2021-02-01T18:48:06Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z) - Selective Particle Attention: Visual Feature-Based Attention in Deep
Reinforcement Learning [0.0]
We focus on one particular form of visual attention known as feature-based attention.
Visual feature-based attention has been proposed to improve the efficiency of Reinforcement Learning.
We propose a novel algorithm, termed Selective Particle Attention (SPA), which imbues a Deep RL agent with the ability to perform selective feature-based attention.
arXiv Detail & Related papers (2020-08-26T11:07:50Z) - Choose Your Own Question: Encouraging Self-Personalization in Learning
Path Construction [1.6505359493498744]
We introduce Rocket, a Tinder-like User Interface for a general class of Interactive Educational System (IES)s.
Rocket provides a visual representation of Artificial Intelligence (AI)-extracted features of learning materials, allowing the student to quickly decide whether the material meets their needs.
Rocket enables self-personalization of the learning experience by leveraging the students' knowledge of their own abilities and needs.
arXiv Detail & Related papers (2020-05-08T01:53:04Z) - Emergent Real-World Robotic Skills via Unsupervised Off-Policy
Reinforcement Learning [81.12201426668894]
We develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks.
We show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible.
We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.
arXiv Detail & Related papers (2020-04-27T17:38:53Z)
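
As referenced in the sentiment-analysis entry above, the following is a minimal sketch of sentiment classification with feature extraction using the listed models (LR, SVM, NB, DT). The toy feedback data, the TF-IDF features, and the scikit-learn usage are illustrative assumptions and do not come from that paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Toy student feedback: 1 = positive, 0 = negative (placeholder data).
texts = [
    "the online lectures were clear and engaging",
    "I could not follow the web-based course at all",
    "great platform, easy to use",
    "confusing interface and slow videos",
]
labels = [1, 0, 1, 0]

classifiers = {
    "LR": LogisticRegression(),
    "SVM": LinearSVC(),
    "NB": MultinomialNB(),
    "DT": DecisionTreeClassifier(),
}

for name, clf in classifiers.items():
    # TF-IDF turns raw text into numeric features for each classifier.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(name, model.predict(["the course was very helpful"]))
```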