A Framework to Counteract Suboptimal User-Behaviors in Exploratory
Learning Environments: an Application to MOOCs
- URL: http://arxiv.org/abs/2106.07555v1
- Date: Mon, 14 Jun 2021 16:16:33 GMT
- Authors: Sébastien Lallé and Cristina Conati
- Abstract summary: We focus on a data-driven user-modeling framework that uses logged interaction data to learn which behavioral or activity patterns should trigger help.
We present a novel application of this framework to Massive Open Online Courses (MOOCs), a form of exploratory environment.
- Score: 1.1421942894219896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there is evidence that user-adaptive support can greatly enhance the
effectiveness of educational systems, designing such support for exploratory
learning environments (e.g., simulations) is still challenging due to the
open-ended nature of their interaction. In particular, there is little a priori
knowledge of which student behaviors can be detrimental to learning in such
environments. To address this problem, we focus on a data-driven user-modeling
framework that uses logged interaction data to learn which behavioral or
activity patterns should trigger help during interaction with a specific
learning environment. This framework has been successfully used to provide
adaptive support in interactive learning simulations. Here we present a novel
application of this framework that we are working on, namely to Massive Open
Online Courses (MOOCs), a form of exploratory environment that could greatly
benefit from adaptive support due to the large diversity of its users but
typically lacks such adaptation. We describe an experiment aimed at
investigating the value of our framework for identifying student behaviors
that warrant adaptive support, and report some preliminary results.
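To make the framework concrete, here is a minimal sketch of the kind of pipeline the abstract describes, assuming k-means clustering over logged interaction features and outcome-based cluster labeling; the feature names, data, and labeling rule are illustrative, not taken from the paper.

```python
# Minimal sketch of a data-driven user-modeling pipeline, assuming k-means
# behavior discovery over logged interaction features. Feature names and
# data are illustrative, not from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-student features logged by a MOOC platform:
# [video_pauses, quiz_attempts, forum_posts, time_on_task_hours]
logs = rng.random((200, 4))
post_test_scores = rng.random(200)  # stand-in for learning outcomes

# 1. Behavior discovery: cluster students by their interaction patterns.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(logs)

# 2. Label clusters by the mean outcome of their members; the cluster with
#    the lower mean score is treated as the "suboptimal" behavior group.
means = [post_test_scores[km.labels_ == c].mean() for c in range(2)]
suboptimal = int(np.argmin(means))

# 3. Online classification: a new student whose running feature vector falls
#    into the suboptimal cluster would trigger adaptive help.
def should_trigger_help(features):
    return km.predict(features.reshape(1, -1))[0] == suboptimal

print(should_trigger_help(rng.random(4)))
```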
Related papers
- Demonstrating the Continual Learning Capabilities and Practical Application of Discrete-Time Active Inference [0.0]
Active inference is a mathematical framework for understanding how agents interact with their environments.
In this paper, we present a continual learning framework for agents operating in discrete time environments.
We demonstrate the agent's ability to relearn and refine its models efficiently, making it suitable for complex domains like finance and healthcare.
arXiv Detail & Related papers (2024-09-30T21:18:46Z)
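As a rough illustration of discrete-time active inference, the sketch below runs a two-state, two-action agent that updates Bayesian beliefs from observations and picks actions by minimizing the risk term of expected free energy; the matrices, preferences, and observation stream are invented for the example and this is not the paper's implementation.

```python
# Toy discrete-time active inference loop under simplified assumptions
# (known likelihood/transition matrices, two states, two actions).
import numpy as np

A = np.array([[0.9, 0.2],   # p(observation | hidden state)
              [0.1, 0.8]])
B = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # transitions for action 0 (stay)
     np.array([[0.0, 1.0], [1.0, 0.0]])]   # transitions for action 1 (switch)
C = np.array([0.9, 0.1])    # preferred distribution over observations
q = np.array([0.5, 0.5])    # belief over hidden states

def update_belief(q, obs):
    """Bayesian belief update given an observation index."""
    post = A[obs] * q
    return post / post.sum()

def expected_free_energy(q, action):
    """Risk term of expected free energy: KL(predicted obs || preferences)."""
    q_next = B[action] @ q
    o_pred = A @ q_next
    return np.sum(o_pred * (np.log(o_pred + 1e-12) - np.log(C + 1e-12)))

for step in range(5):
    action = min(range(2), key=lambda a: expected_free_energy(q, a))
    obs = 0  # stand-in for an observation returned by the environment
    q = update_belief(B[action] @ q, obs)
    print(step, action, q.round(3))
```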
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Learning from Interaction: User Interface Adaptation using Reinforcement Learning [0.0]
This thesis proposes an RL-based UI adaptation framework that uses physiological data.
The framework aims to learn from user interactions and make informed adaptations to improve user experience (UX).
arXiv Detail & Related papers (2023-12-12T12:29:18Z)
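A hedged sketch of what such an RL-driven UI adaptation loop could look like: tabular Q-learning where states discretize a physiological signal and the reward is a hypothetical UX proxy. The state, action, and reward definitions are assumptions, not the thesis's actual design.

```python
# Tabular Q-learning sketch for UI adaptation from a physiological signal.
# All names and the reward model are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 3, 2        # low/medium/high arousal; keep UI / simplify UI
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def ux_reward(state, action):
    # Stand-in UX proxy: simplifying the UI helps most when arousal is high.
    return 1.0 if (state == 2 and action == 1) else 0.1

state = 0
for _ in range(2000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    reward = ux_reward(state, action)
    next_state = rng.integers(n_states)   # stand-in for the next sensor reading
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))  # the learned policy should prefer action 1 in state 2
```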
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over the potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
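The reward construction RLIF describes is easy to show in miniature: a transition earns -1 whenever the human intervenes and 0 otherwise, and the resulting tuples feed any off-policy learner. The intervention rule below is invented for illustration.

```python
# Sketch of the RLIF reward idea: negative reward on human intervention,
# zero otherwise, collected into a replay buffer for any off-policy learner.
import numpy as np

rng = np.random.default_rng(2)

def human_intervenes(state, action):
    # Hypothetical expert rule: intervene when the action strays from a
    # "safe" action for this state.
    return action != state % 2

replay_buffer = []
for _ in range(100):
    state = rng.integers(4)
    action = rng.integers(2)
    # RLIF-style reward: -1 on intervention, 0 otherwise.
    reward = -1.0 if human_intervenes(state, action) else 0.0
    next_state = rng.integers(4)
    replay_buffer.append((state, action, reward, next_state))

# An off-policy learner trained on this buffer minimizes the intervention
# rate without assuming the expert is optimal.
print(sum(r for _, _, r, _ in replay_buffer))
```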
- Reinforcement Learning in Education: A Multi-Armed Bandit Approach [12.358921226358133]
Reinforcement learning solves unsupervised problems where agents move through a state-action-reward loop to maximize the overall reward.
The aim of this study was to contextualise and simulate the cumulative reward within an environment for an intervention recommendation problem in the education context.
arXiv Detail & Related papers (2022-11-01T22:47:17Z)
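A minimal epsilon-greedy bandit in this spirit, where each arm stands for a candidate educational intervention; the success probabilities are made up.

```python
# Epsilon-greedy multi-armed bandit for intervention recommendation.
# Arms and reward probabilities are illustrative.
import numpy as np

rng = np.random.default_rng(3)
# Each arm is a candidate intervention (e.g., hint, video, worked example).
true_success = np.array([0.3, 0.5, 0.7])   # unknown to the agent
counts = np.zeros(3)
values = np.zeros(3)                        # running mean reward per arm
eps = 0.1

for t in range(5000):
    arm = rng.integers(3) if rng.random() < eps else int(values.argmax())
    reward = float(rng.random() < true_success[arm])  # did the student improve?
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(values.round(2))  # estimates should approach [0.3, 0.5, 0.7]
```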
- Environment Design for Inverse Reinforcement Learning [3.085995273374333]
Current inverse reinforcement learning methods that focus on learning from a single environment can fail to handle slight changes in the environment dynamics.
In our framework, the learner repeatedly interacts with the expert, with the former selecting environments to identify the reward function.
This results in improvements in both sample-efficiency and robustness, as we show experimentally, for both exact and approximate inference.
arXiv Detail & Related papers (2022-10-26T18:31:17Z)
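A toy rendering of the environment-selection idea: keep a set of candidate reward hypotheses and repeatedly query the expert in the environment where the surviving hypotheses disagree most. The environments, hypotheses, and elimination rule are all invented for illustration.

```python
# Toy environment design for IRL: query the expert where reward hypotheses
# disagree most about the optimal action, then eliminate inconsistent ones.
import numpy as np

# Candidate reward functions: reward per (environment, action), 3 envs x 2 actions.
hypotheses = [np.array([[1, 0], [0, 1], [1, 0]]),
              np.array([[1, 0], [1, 0], [0, 1]]),
              np.array([[0, 1], [0, 1], [1, 0]])]
true_reward = hypotheses[1]                # hidden from the learner
alive = list(range(len(hypotheses)))

def best_action(h, env):
    return int(hypotheses[h][env].argmax())

while len(alive) > 1:
    # Pick the environment where surviving hypotheses disagree the most.
    disagreement = [len({best_action(h, e) for h in alive}) for e in range(3)]
    env = int(np.argmax(disagreement))
    expert_action = int(true_reward[env].argmax())   # expert demonstration
    alive = [h for h in alive if best_action(h, env) == expert_action]

print("identified hypothesis:", alive)   # -> [1]
```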
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
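Loosely in this spirit, the sketch below modulates standard dot-product attention scores with a time-decay term so the weighting of past interactions depends on elapsed time; this is a simplified stand-in, not the paper's exact self-modulating formulation.

```python
# Time-aware attention sketch: dot-product scores modulated by elapsed time.
# The decay term stands in for a learned modulation; values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
d = 8
keys = rng.standard_normal((5, d))      # 5 past user interactions
query = rng.standard_normal(d)          # current interaction
time_gaps = np.array([40.0, 30.0, 20.0, 10.0, 1.0])  # seconds since each event
decay = 0.05                            # would be learned in the real model

scores = keys @ query / np.sqrt(d)
modulation = np.exp(-decay * time_gaps)  # recent events are up-weighted
weights = np.exp(scores) * modulation
weights /= weights.sum()

context = weights @ keys                 # attention-pooled user representation
print(weights.round(3))
```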
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
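The flavor of backprop-free learning can be shown with a generic predictive-coding loop, where hidden activities settle via local errors and every weight update uses only local quantities; note this is vanilla predictive coding as a stand-in, not the paper's Active Neural Generative Coding agent.

```python
# Backprop-free learning in the predictive-coding style: activities settle
# first, then weights update from purely local error signals.
import numpy as np

rng = np.random.default_rng(5)
W1 = rng.standard_normal((4, 3)) * 0.5   # 3 inputs -> 4 hidden units
W2 = rng.standard_normal((2, 4)) * 0.5   # 4 hidden -> 2 outputs
lr = 0.05
x = rng.standard_normal(3)
target = np.array([1.0, 0.0])

for _ in range(300):
    pred_h = np.tanh(W1 @ x)
    h = pred_h.copy()                    # hidden activity, free to settle
    for _ in range(20):
        e1 = h - pred_h                  # error between activity and prediction
        e2 = target - W2 @ h             # output-layer error
        h += 0.1 * (W2.T @ e2 - e1)      # local activity update
    W2 += lr * np.outer(e2, h)           # local Hebbian-style weight updates
    W1 += lr * np.outer((h - pred_h) * (1 - pred_h**2), x)

print((W2 @ np.tanh(W1 @ x)).round(2))   # output moves toward the target
```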
- A Unified Cognitive Learning Framework for Adapting to Dynamic Environment and Tasks [19.459770316922437]
We propose a unified cognitive learning (CL) framework for the dynamic wireless environment and tasks.
We show that our proposed CL framework has three advantages, namely, the capability of adapting to the dynamic environment and tasks, the self-learning capability, and the capability of 'good money driving out bad money', which we illustrate by taking modulation recognition as an example.
arXiv Detail & Related papers (2021-06-01T14:08:20Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model can automatically learn the rewards from users' actions based on a discriminative actor-critic network and Wasserstein GAN.
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
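In miniature, the adversarial reward-learning idea reduces to training a critic to score expert behavior above imitator behavior under the Wasserstein objective, then reusing the critic's score as a reward. The linear critic and synthetic features below are assumptions for illustration.

```python
# Toy Wasserstein-style reward learning with a linear critic.
# Data and critic form are invented; real models use deep networks.
import numpy as np

rng = np.random.default_rng(6)
dim = 5
expert = rng.standard_normal((100, dim)) + 1.0   # expert state-action features
policy = rng.standard_normal((100, dim))         # imitator's features
w = np.zeros(dim)                                # linear critic weights

for _ in range(100):
    # WGAN critic objective: maximize E[critic(expert)] - E[critic(policy)].
    grad = expert.mean(axis=0) - policy.mean(axis=0)
    w += 0.1 * grad
    w = np.clip(w, -1.0, 1.0)          # weight clipping for the Lipschitz bound

reward = lambda features: features @ w  # learned reward for the RL step
print(reward(expert).mean() > reward(policy).mean())  # True
```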
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
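The trade-off at the heart of behavior priors has a convenient closed form worth sketching: maximizing expected reward minus an alpha-weighted KL to the prior yields a policy proportional to prior(a) * exp(r(a)/alpha). The numbers below are invented.

```python
# KL-regularized policy against a behavior prior, in closed form for a
# single state with discrete actions. Rewards and prior are illustrative.
import numpy as np

rewards = np.array([1.0, 0.5, 0.0])       # per-action reward in some state
prior = np.array([0.2, 0.6, 0.2])         # behavior prior over actions
alpha = 0.5                                # strength of the KL regularizer

# Closed form for max_pi E_pi[r] - alpha * KL(pi || prior):
# pi(a) proportional to prior(a) * exp(r(a) / alpha).
pi = prior * np.exp(rewards / alpha)
pi /= pi.sum()

kl = np.sum(pi * np.log(pi / prior))
objective = pi @ rewards - alpha * kl
print(pi.round(3), objective.round(3))
```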
This list is automatically generated from the titles and abstracts of the papers on this site.