Exploring Navigation Styles in a FutureLearn MOOC
- URL: http://arxiv.org/abs/2008.04373v1
- Date: Mon, 10 Aug 2020 19:12:21 GMT
- Title: Exploring Navigation Styles in a FutureLearn MOOC
- Authors: Lei Shi, Alexandra I. Cristea, Armando M. Toda, Wilk Oliveira
- Abstract summary: This paper presents for the first time a detailed analysis of fine-grained navigation style identification in MOOCs backed by a large number of active learners.
It provides insight into online learners' temporal engagement, as well as a tool to identify vulnerable learners.
- Score: 61.58283466715385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents for the first time a detailed analysis of fine-grained navigation style identification in MOOCs, backed by a large number of active learners. The results show that 1) whilst the sequential style is clearly in evidence, the global style is less prominent; 2) the majority of learners do not belong to either category; 3) navigation styles are not as stable as the literature suggests; and 4) learners can, and do, swap between navigation styles, with detrimental effects. The approach is promising, as it provides insight into online learners' temporal engagement, as well as a tool to identify vulnerable learners, which can potentially benefit personalised interventions (from teachers or automatic help) in Intelligent Tutoring Systems (ITS).
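The sequential/global distinction can be made concrete with a simple transition-counting heuristic: a learner who mostly moves to the next step in order is "sequential", one who mostly jumps around is "global", and, as the paper finds, many learners fall in between. The following Python sketch is a minimal illustration under assumed thresholds and an assumed `step_sequence` input; it is not the paper's actual fine-grained identification method.

```python
from typing import List

def classify_navigation_style(step_sequence: List[int],
                              sequential_threshold: float = 0.8,
                              global_threshold: float = 0.5) -> str:
    """Classify a learner's navigation style from their visited step indices.

    A transition counts as 'sequential' when the learner moves to the
    immediately following step; anything else (skipping ahead, jumping back)
    is treated as a 'global' jump. Both thresholds are illustrative
    assumptions, not values from the paper.
    """
    if len(step_sequence) < 2:
        return "undetermined"
    transitions = list(zip(step_sequence, step_sequence[1:]))
    sequential = sum(1 for a, b in transitions if b == a + 1)
    ratio = sequential / len(transitions)
    if ratio >= sequential_threshold:
        return "sequential"
    if ratio <= global_threshold:
        return "global"
    return "neither"  # the paper finds most learners fall outside both categories

# Example: a learner who mostly proceeds in order, with one backward jump.
print(classify_navigation_style([1, 2, 3, 4, 2, 3, 4, 5, 6]))  # -> "sequential"
```

Because the paper finds styles to be unstable, a fuller analysis would apply such a classifier over sliding time windows and look for swaps between windows, rather than assigning a single label per learner.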
Related papers
- A Role of Environmental Complexity on Representation Learning in Deep Reinforcement Learning Agents [3.7314353481448337]
We developed a simulated navigation environment to train deep reinforcement learning agents.
We modulated the frequency of exposure to a shortcut and navigation cue, leading to the development of artificial agents with differing abilities.
We examined the encoded representations in artificial neural networks driving these agents, revealing intricate dynamics in representation learning.
arXiv Detail & Related papers (2024-07-03T18:27:26Z)
- Two-Stage Depth Enhanced Learning with Obstacle Map For Object Navigation [11.667940255053582]
This paper uses the RGB and depth information of the training scene to pretrain the feature extractor, which improves navigation efficiency.
We evaluated our method on AI2-Thor and RoboTHOR and demonstrated that it significantly outperforms state-of-the-art (SOTA) methods on success rate and navigation efficiency.
arXiv Detail & Related papers (2024-06-20T08:35:10Z)
- Learning Navigational Visual Representations with Semantic Map Supervision [85.91625020847358]
We propose a navigational-specific visual representation learning method by contrasting the agent's egocentric views and semantic maps.
Ego²-Map learning transfers the compact and rich information from a map, such as objects, structure and transition, to the agent's egocentric representations for navigation.
arXiv Detail & Related papers (2023-07-23T14:01:05Z)
- Learning to Predict Navigational Patterns from Partial Observations [63.04492958425066]
This paper presents the first self-supervised learning (SSL) method for learning to infer navigational patterns in real-world environments from partial observations only.
We demonstrate how to infer global navigational patterns by fitting a maximum likelihood graph to the directional soft lane probability (DSLP) field.
Experiments show that our SSL model outperforms two SOTA supervised lane graph prediction models on the nuScenes dataset.
arXiv Detail & Related papers (2023-04-26T02:08:46Z)
- Predicting students' learning styles using regression techniques [0.4125187280299248]
Online learning requires a personalization method because the interaction between learners and instructors is minimal.
One such personalization method is detecting the learner's learning style.
Current detection models become ineffective when learners have no dominant style or a mix of learning styles.
arXiv Detail & Related papers (2022-09-12T16:04:51Z)
- Online No-regret Model-Based Meta RL for Personalized Navigation [37.82017324353145]
We propose an online no-regret model-based RL method that quickly conforms to the dynamics of the current user.
Our theoretical analysis shows that our method is a no-regret algorithm and we provide the convergence rate in the agnostic setting.
Our empirical analysis with 60+ hours of real-world user data shows that our method can reduce the number of collisions by more than 60%.
arXiv Detail & Related papers (2022-04-05T01:28:06Z)
- Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation [145.84123197129298]
Language instructions play an essential role in natural language grounded navigation tasks.
We aim to train a more robust navigator that is capable of dynamically extracting crucial factors from long instructions.
Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator to move to the wrong target.
arXiv Detail & Related papers (2021-07-23T14:11:31Z)
- Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z)
- Active Visual Information Gathering for Vision-Language Navigation [115.40768457718325]
Vision-language navigation (VLN) is the task in which an agent carries out navigational instructions inside photo-realistic environments.
One of the key challenges in VLN is how to conduct a robust navigation by mitigating the uncertainty caused by ambiguous instructions and insufficient observation of the environment.
This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent VLN policy.
arXiv Detail & Related papers (2020-07-15T23:54:20Z)
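The "gather more information when uncertain" idea behind active VLN can be illustrated with a small decision rule: if the policy's action distribution is too uncertain (high entropy), acquire extra observations before committing to a move. The Python sketch below is a hedged toy illustration; the entropy threshold, the `gather_information` callback, and the query budget are assumptions for illustration, not the paper's actual agent architecture.

```python
import numpy as np

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a discrete action distribution."""
    probs = probs[probs > 0]
    return float(-(probs * np.log(probs)).sum())

def act_with_active_gathering(action_probs: np.ndarray,
                              gather_information,
                              entropy_threshold: float = 1.0,
                              max_queries: int = 3) -> int:
    """Pick an action, gathering extra observations while still uncertain.

    `gather_information` is a hypothetical callback that returns an updated
    action distribution after acquiring more observations (e.g. looking
    around). The threshold and query budget are illustrative assumptions.
    """
    for _ in range(max_queries):
        if entropy(action_probs) <= entropy_threshold:
            break
        action_probs = gather_information(action_probs)
    return int(np.argmax(action_probs))

# Toy usage: each "gathering" step sharpens the distribution toward action 2.
def fake_gather(p: np.ndarray) -> np.ndarray:
    sharpened = p ** 2
    sharpened[2] += 0.5
    return sharpened / sharpened.sum()

print(act_with_active_gathering(np.array([0.3, 0.25, 0.25, 0.2]), fake_gather))  # -> 2
```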
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.