Using Adaptive Experiments to Rapidly Help Students
- URL: http://arxiv.org/abs/2208.05092v1
- Date: Wed, 10 Aug 2022 00:43:05 GMT
- Title: Using Adaptive Experiments to Rapidly Help Students
- Authors: Angela Zavaleta-Bernuy, Qi Yin Zheng, Hammad Shaikh, Jacob Nogas, Anna
Rafferty, Andrew Petersen, Joseph Jay Williams
- Abstract summary: We evaluate the effect of homework email reminders on students by conducting an adaptive experiment using the Thompson Sampling algorithm.
We raise a range of open questions about the conditions under which adaptive randomized experiments may be more or less useful.
- Score: 5.446351709118483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive experiments can increase the chance that current students obtain
better outcomes from a field experiment of an instructional intervention. In
such experiments, the probability of assigning students to conditions changes
while more data is being collected, so students can be assigned to
interventions that are likely to perform better. Digital educational
environments lower the barrier to conducting such adaptive experiments, but
they are rarely applied in education. One reason might be that researchers have
access to few real-world case studies that illustrate the advantages and
disadvantages of these experiments in a specific context. We evaluate the
effect of homework email reminders on students by conducting an adaptive
experiment using the Thompson Sampling algorithm and compare it to a
traditional uniform random experiment. We present this as a case study on how
to conduct such experiments, and we raise a range of open questions about the
conditions under which adaptive randomized experiments may be more or less
useful.
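As a concrete illustration of the mechanism described above, the following minimal Python sketch shows a two-arm Thompson Sampling loop with Beta-Bernoulli posteriors, in the spirit of assigning students to a reminder-email condition or a no-reminder control with a binary outcome such as homework submission. The arm names, priors, true rates, and sample size are illustrative assumptions, not details taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Beta(1, 1) priors over each arm's success probability
    # (success = e.g. the student submits the homework).
    successes = {"reminder": 1, "no_reminder": 1}
    failures = {"reminder": 1, "no_reminder": 1}

    def assign_condition():
        """Thompson Sampling: sample once from each arm's posterior
        and assign the student to the arm with the highest draw."""
        draws = {arm: rng.beta(successes[arm], failures[arm])
                 for arm in successes}
        return max(draws, key=draws.get)

    def update(arm, outcome):
        """Update the chosen arm's Beta posterior with a 0/1 outcome."""
        if outcome:
            successes[arm] += 1
        else:
            failures[arm] += 1

    # Hypothetical true success rates, used only to simulate outcomes.
    true_rate = {"reminder": 0.6, "no_reminder": 0.5}
    for _ in range(500):
        arm = assign_condition()
        update(arm, rng.random() < true_rate[arm])

    print(successes, failures)  # counts drift toward the better-performing arm

Because each posterior draw is more often highest for the arm with more observed successes, the assignment probabilities shift toward the better-performing condition as data accumulates, which is what distinguishes this design from a uniform random experiment.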
Related papers
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification [9.030753181146176]
We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes.
We show that substantial reductions in experiment duration can often be achieved with minimal impact on both within-experiment and post-experiment regret.
arXiv Detail & Related papers (2024-02-16T11:27:48Z)
- Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z)
- Opportunities for Adaptive Experiments to Enable Continuous Improvement in Computer Science Education [7.50867730317249]
In adaptive experiments, data is analyzed and utilized as different conditions are deployed to students.
The algorithms driving these experiments can then dynamically deploy the most effective conditions in subsequent interactions with students.
This work paves the way for exploring the importance of adaptive experiments in bridging research and practice to achieve continuous improvement.
arXiv Detail & Related papers (2023-10-18T20:54:59Z)
- Optimal tests following sequential experiments [0.0]
The purpose of this paper is to aid in the development of optimal tests for sequential experiments by analyzing their properties.
Our key finding is that the power function of any test can be matched by a test in a limit experiment.
This result has important implications, including a powerful sufficiency result.
arXiv Detail & Related papers (2023-04-30T06:09:49Z)
- GFlowNets for AI-Driven Scientific Discovery [74.27219800878304]
We present a new probabilistic machine learning framework called GFlowNets.
GFlowNets can be applied in the modeling, hypotheses generation and experimental design stages of the experimental science loop.
We argue that GFlowNets can become a valuable tool for AI-driven scientific discovery.
arXiv Detail & Related papers (2023-02-01T17:29:43Z)
- PyExperimenter: Easily distribute experiments and track results [63.871474825689134]
PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms.
It is intended to be used by researchers in the field of artificial intelligence, but is not limited to them.
arXiv Detail & Related papers (2023-01-16T10:43:02Z)
- Assign Experiment Variants at Scale in Online Controlled Experiments [1.9205538784019935]
Online controlled experiments (A/B tests) have become the gold standard for learning the impact of new product features in technology companies.
Technology companies run A/B tests at scale -- hundreds if not thousands of A/B tests concurrently, each with millions of users.
We present a novel assignment algorithm and statistical tests to validate the randomized assignments.
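The paper's novel assignment algorithm itself is not described in this summary; as general background on what large-scale variant assignment typically looks like, the sketch below uses the common hash-based bucketing approach, in which a user ID is hashed together with a per-experiment salt. The function names, bucket count, and traffic split are illustrative assumptions, not the paper's method.

    import hashlib

    NUM_BUCKETS = 1000  # illustrative granularity of the traffic split

    def assign_variant(user_id: str, experiment_salt: str,
                       variants: list[str]) -> str:
        """Deterministically map a user to a variant by hashing the user
        ID with a per-experiment salt, so assignments are stable across
        sessions and independent across concurrent experiments."""
        digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % NUM_BUCKETS
        # Split buckets evenly across the experiment's variants.
        return variants[bucket * len(variants) // NUM_BUCKETS]

    # The same user always lands in the same variant of a given experiment.
    print(assign_variant("user-42", "exp-123", ["control", "treatment"]))

Salting per experiment is what lets thousands of concurrent tests share the same user population without their assignments being correlated across experiments.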
arXiv Detail & Related papers (2022-12-17T00:45:12Z)
- Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite perfect randomization between groups, simultaneous experiments can interact with each other and negatively affect average population outcomes.
arXiv Detail & Related papers (2022-10-15T17:15:51Z)
- Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits [60.4933541247257]
This paper presents a real-world adaptive experiment on how students engage with instructors' weekly email reminders to build their time management habits.
Using Multi-Armed Bandits (MAB) algorithms in adaptive experiments can increase students' chances of obtaining better outcomes.
We highlight problems with these adaptive algorithms, such as possible exploitation of an arm even when there is no significant difference between arms.
arXiv Detail & Related papers (2022-08-10T00:30:52Z)
- Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments [11.464963616709671]
Multi-armed bandit algorithms have been argued for decades as useful for adaptively randomized experiments.
We applied the bandit algorithm Thompson Sampling (TS) to run adaptive experiments in three university classes.
We show that collecting data with TS can as much as double the False Positive Rate (FPR) and the False Negative Rate (FNR).
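A rough, self-contained simulation in the spirit of that finding is sketched below: both arms share the same true success rate (so the null hypothesis is true), data is collected either uniformly or with Thompson Sampling, and a naive two-sample z-test is applied afterwards. The sample sizes, priors, and test are illustrative assumptions rather than the paper's exact setup.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def collect(n, p=0.5, adaptive=True):
        """Assign n students to two arms with the identical true rate p,
        either uniformly at random or via Thompson Sampling."""
        s, f = np.ones(2), np.ones(2)  # Beta(1, 1) posteriors per arm
        xs = [[], []]
        for _ in range(n):
            arm = int(np.argmax(rng.beta(s, f))) if adaptive else int(rng.integers(2))
            y = float(rng.random() < p)
            xs[arm].append(y)
            s[arm] += y
            f[arm] += 1 - y
        return xs

    def rejects(xs, alpha=0.05):
        """Naive two-sample z-test for a difference in arm means."""
        a, b = (np.asarray(x) for x in xs)
        if min(len(a), len(b)) < 2:
            return False
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        if se == 0:
            return False
        z = (a.mean() - b.mean()) / se
        return 2 * stats.norm.sf(abs(z)) < alpha

    for adaptive in (False, True):
        fpr = np.mean([rejects(collect(200, adaptive=adaptive))
                       for _ in range(500)])
        print("TS" if adaptive else "uniform", "false positive rate ~", fpr)

Because Thompson Sampling feeds outcome data back into the assignment rule, the collected samples are no longer i.i.d. within each arm, which biases the naive test and is the mechanism behind the inflated error rates the paper reports.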
arXiv Detail & Related papers (2021-03-22T22:05:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.