Large-scale randomized experiment reveals machine learning helps people
learn and remember more effectively
- URL: http://arxiv.org/abs/2010.04430v1
- Date: Fri, 9 Oct 2020 08:30:15 GMT
- Title: Large-scale randomized experiment reveals machine learning helps people
learn and remember more effectively
- Authors: Utkarsh Upadhyay and Graham Lancashire and Christoph Moser and Manuel
Gomez-Rodriguez
- Abstract summary: We focus on unveiling the potential of machine learning to improve how people learn and remember factual material.
We perform a large-scale randomized controlled trial with thousands of learners from a popular learning app in the area of mobility.
- Score: 14.305363222114543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has typically focused on developing models and algorithms
that would ultimately replace humans at tasks where intelligence is required.
In this work, rather than replacing humans, we focus on unveiling the potential
of machine learning to improve how people learn and remember factual material.
To this end, we perform a large-scale randomized controlled trial with
thousands of learners from a popular learning app in the area of mobility.
After controlling for the length and frequency of study, we find that learners
whose study sessions are optimized using machine learning remember the content
over $\sim$67% longer than those whose study sessions are generated using two
alternative heuristics. Our randomized controlled trial also reveals that the
learners whose study sessions are optimized using machine learning are
$\sim$50% more likely to return to the app within 4-7 days.
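The paper does not disclose its scheduler's internals here, but the general idea behind ML-optimized study sessions can be sketched under a common assumption: recall of an item decays roughly exponentially with time since the last review, so the scheduler surfaces the item whose predicted recall probability is lowest. The function names and parameter values below are illustrative, not the authors' system.

```python
import math

def recall_probability(elapsed_days: float, memory_strength: float) -> float:
    """Exponential forgetting curve: P(recall) = exp(-elapsed / strength)."""
    return math.exp(-elapsed_days / memory_strength)

def next_item(items: dict, now: float) -> str:
    """Pick the item with the lowest predicted recall probability.

    `items` maps item name -> (last_review_day, memory_strength).
    """
    return min(
        items,
        key=lambda k: recall_probability(now - items[k][0], items[k][1]),
    )

# Hypothetical learner state for two flashcards in a mobility-learning app.
items = {
    "traffic-sign": (0.0, 2.0),   # reviewed on day 0, weak memory
    "right-of-way": (3.0, 5.0),   # reviewed on day 3, stronger memory
}
print(next_item(items, now=4.0))  # prints "traffic-sign"
```

On day 4, "traffic-sign" has predicted recall exp(-4/2) ≈ 0.14 versus exp(-1/5) ≈ 0.82 for "right-of-way", so the weaker, staler item is reviewed first; a learned scheduler would additionally fit each item's memory strength from the learner's past answers.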
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and second combines imitation and reinforcement learning to maximize their strengths.
We find that spire outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for
Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - How Do Human Users Teach a Continual Learning Robot in Repeated
Interactions? [7.193217430660011]
We take a human-centered approach to continual learning, to understand how humans teach continual learning robots over the long term.
We conducted an in-person study with 40 participants that interacted with a continual learning robot in 200 sessions.
An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users.
arXiv Detail & Related papers (2023-06-30T20:29:48Z) - Don't Start From Scratch: Leveraging Prior Data to Automate Robotic
Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z) - Classifying Human Activities using Machine Learning and Deep Learning
Techniques [0.0]
Human Activity Recognition (HAR) describes a machine's ability to recognize human actions.
The challenge in HAR is to overcome the difficulty of separating human activities based on the given data.
Deep learning techniques such as Long Short-Term Memory (LSTM), Bi-Directional LSTM, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) classifiers are trained.
Experimental results showed that the Linear Support Vector Machine in machine learning and the Gated Recurrent Unit in deep learning provided better accuracy for human activity recognition.
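Whatever the classifier, HAR pipelines of this kind typically segment the raw sensor stream into fixed-length, overlapping windows and summarize each window with simple statistics (for a linear SVM) or feed the windows directly to a recurrent model. A minimal, hypothetical sketch of that windowing step:

```python
import statistics

def windowed_features(signal, window=4, step=2):
    """Slide a fixed-length window over a 1-D sensor stream and
    summarize each window with (mean, stdev) features."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        chunk = signal[start:start + window]
        feats.append((statistics.mean(chunk), statistics.stdev(chunk)))
    return feats

# Hypothetical accelerometer magnitudes: activity changes mid-stream.
accel = [0.1, 0.2, 0.1, 0.3, 1.2, 1.1, 1.3, 1.0]
print(windowed_features(accel))  # 3 (mean, stdev) feature pairs
```

Each (mean, stdev) pair becomes one training example; the jump in window means between the first and last windows is the kind of signal that lets a linear model separate, say, standing from walking.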
arXiv Detail & Related papers (2022-05-19T05:20:04Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - Intuitiveness in Active Teaching [7.8029610421817654]
We analyze intuitiveness of certain algorithms when they are actively taught by users.
We offer a systematic method to judge the efficacy of human-machine interactions.
arXiv Detail & Related papers (2020-12-25T09:31:56Z) - A Study on Efficiency in Continual Learning Inspired by Human Learning [24.78879710005838]
We study the efficiency of continual learning systems, taking inspiration from human learning.
In particular, inspired by the mechanisms of sleep, we evaluate popular pruning-based continual learning algorithms.
arXiv Detail & Related papers (2020-10-28T19:11:01Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of varying difficulty.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic
Reinforcement Learning [109.77163932886413]
We show how to adapt vision-based robotic manipulation policies to new variations by fine-tuning via off-policy reinforcement learning.
This adaptation uses less than 0.2% of the data necessary to learn the task from scratch.
We find that our approach of adapting pre-trained policies leads to substantial performance gains over the course of fine-tuning.
arXiv Detail & Related papers (2020-04-21T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.