Causal Estimation of User Learning in Personalized Systems
- URL: http://arxiv.org/abs/2306.00485v1
- Date: Thu, 1 Jun 2023 09:37:43 GMT
- Title: Causal Estimation of User Learning in Personalized Systems
- Authors: Evan Munro, David Jones, Jennifer Brennan, Roland Nelet, Vahab
Mirrokni, Jean Pouget-Abadie
- Abstract summary: We introduce a non-parametric causal model of user actions in a personalized system.
We show that the Cookie-Cookie-Day experiment, designed for the measurement of the user learning effect, is biased when there is personalization.
We derive new experimental designs that intervene in the personalization system to generate the variation necessary to separately identify the causal effect mediated through user learning and personalization.
- Score: 5.016998307223021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In online platforms, the impact of a treatment on an observed outcome may
change over time as 1) users learn about the intervention, and 2) system
personalization, such as individualized recommendations, changes over time. We
introduce a non-parametric causal model of user actions in a personalized
system. We show that the Cookie-Cookie-Day (CCD) experiment, designed for the
measurement of the user learning effect, is biased when there is
personalization. We derive new experimental designs that intervene in the
personalization system to generate the variation necessary to separately
identify the causal effect mediated through user learning and personalization.
Making parametric assumptions allows for the estimation of long-term causal
effects based on medium-term experiments. In simulations, we show that our new
designs successfully recover the dynamic causal effects of interest.
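The dynamics the abstract describes can be illustrated with a toy simulation. Everything below (parameter values, state-update rules, effect sizes) is an illustrative assumption, not the paper's model: a user-side learning state and a system-side personalization state both adapt to treatment exposure, so the naive treatment-control difference drifts over the course of the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dynamics: the observed effect of a treatment changes over
# time as (1) users accumulate experience with the intervention and
# (2) a personalization state adapts to observed behavior.
n_users, n_days = 10_000, 30
treated = rng.random(n_users) < 0.5

learning = np.zeros(n_users)         # user-side learning/habit state
personalization = np.zeros(n_users)  # system-side personalization state

daily_effect = []
for day in range(n_days):
    # Outcome: direct effect plus effects mediated through both states.
    outcome = (0.5 * treated + 0.3 * learning + 0.2 * personalization
               + rng.normal(0, 1, n_users))
    # States update from exposure and from observed behavior.
    learning += 0.1 * (treated - learning)
    personalization += 0.05 * (outcome - personalization)
    daily_effect.append(outcome[treated].mean() - outcome[~treated].mean())

print(f"day-1 effect:  {daily_effect[0]:.2f}")
print(f"day-30 effect: {daily_effect[-1]:.2f}")
```

In this toy setup the short-term treatment-control contrast understates the long-term effect, which is exactly why designs that separately identify the learning- and personalization-mediated pathways are needed.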
Related papers
- Accounting for Sycophancy in Language Model Uncertainty Estimation [28.08509288774144]
We study the relationship between sycophancy and uncertainty estimation for the first time.
We show that user confidence plays a critical role in modulating the effects of sycophancy.
We argue that externalizing both model and user uncertainty can help to mitigate the impacts of sycophancy bias.
arXiv Detail & Related papers (2024-10-17T18:00:25Z) - Evaluating Alternative Training Interventions Using Personalized Computational Models of Learning [0.0]
Evaluating different training interventions to determine which produce the best learning outcomes is one of the main challenges faced by instructional designers.
We present an approach for automatically tuning models to specific individuals and show that personalized models make better predictions of students' behavior than generic ones.
Our approach makes predictions that align with previous human findings, as well as testable predictions that might be evaluated with future human experiments.
arXiv Detail & Related papers (2024-08-24T22:51:57Z) - Dual Test-time Training for Out-of-distribution Recommender System [91.15209066874694]
We propose a novel Dual Test-Time-Training framework for OOD Recommendation, termed DT3OR.
In DT3OR, we incorporate a model adaptation mechanism during the test-time phase to carefully update the recommendation model.
To the best of our knowledge, this paper is the first work to address OOD recommendation via a test-time-training strategy.
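The test-time-training idea can be sketched generically. DT3OR's actual adaptation mechanism and losses are not specified in this summary, so the following is only an illustrative stand-in: a linear scorer is nudged at test time with a self-supervised objective computed on the incoming (possibly out-of-distribution) batch before producing recommendations.

```python
import numpy as np

# Illustrative test-time adaptation only; not DT3OR's actual mechanism.
rng = np.random.default_rng(0)
d = 8
w = rng.normal(0, 1, d)                    # "trained" model weights

test_items = rng.normal(2.0, 1, (100, d))  # OOD test batch (shifted mean)

# Self-supervised proxy objective: re-center scores on the test batch by
# adapting a bias term via gradient descent on 0.5 * mean(score)^2.
bias = 0.0
lr = 0.1
for _ in range(50):
    scores = test_items @ w + bias
    grad = scores.mean()                   # d/d(bias) of the proxy loss
    bias -= lr * grad

adapted_scores = test_items @ w + bias
print(f"mean score after adaptation: {adapted_scores.mean():.3f}")
```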
arXiv Detail & Related papers (2024-07-22T13:27:51Z) - POV Learning: Individual Alignment of Multimodal Models using Human Perception [1.4796543791607086]
We argue that alignment on an individual level can boost the subjective predictive performance for the individual user interacting with the system.
We test this, by integrating perception information into machine learning systems and measuring their predictive performance.
Our findings suggest that exploiting individual perception signals for the machine learning of subjective human assessments provides a valuable cue for individual alignment.
arXiv Detail & Related papers (2024-05-07T16:07:29Z) - DOMINO: Visual Causal Reasoning with Time-Dependent Phenomena [59.291745595756346]
We propose a set of visual analytics methods that allow humans to participate in the discovery of causal relations associated with windows of time delay.
Specifically, we leverage a well-established method, logic-based causality, to enable analysts to test the significance of potential causes.
Since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram.
arXiv Detail & Related papers (2023-03-12T03:40:21Z) - Zero-shot causal learning [64.9368337542558]
CaML is a causal meta-learning framework which formulates the personalized prediction of each intervention's effect as a task.
We show that CaML is able to predict the personalized effects of novel interventions that do not exist at the time of training.
arXiv Detail & Related papers (2023-01-28T20:14:11Z) - Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite perfect randomization between groups, simultaneous experiments can interact with each other and negatively impact average population outcomes.
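A minimal sketch of how interactions between parallel experiments bias naive readouts (illustrative numbers, not the paper's method): two A/B tests run on independently randomized splits of the same traffic, and the outcome model includes an interaction term, so the individually measured effects do not add up to the effect of launching both changes.

```python
import numpy as np

# Two simultaneous A/B tests on independently randomized traffic splits.
rng = np.random.default_rng(1)
n = 100_000
a = rng.random(n) < 0.5   # assignment in experiment A (50% treated)
b = rng.random(n) < 0.1   # assignment in experiment B (10% treated)

# Assumed true model: each treatment helps alone, but they clash together.
outcome = 1.0 * a + 1.0 * b - 1.5 * (a & b) + rng.normal(0, 1, n)

effect_a = outcome[a].mean() - outcome[~a].mean()        # marginal A
effect_b = outcome[b].mean() - outcome[~b].mean()        # marginal B
both = outcome[a & b].mean() - outcome[~a & ~b].mean()   # launching both

print(f"measured effect of A alone: {effect_a:.2f}")
print(f"measured effect of B alone: {effect_b:.2f}")
print(f"effect of launching both:   {both:.2f}")
```

Here the sum of the two marginal estimates overstates the joint effect, because each experiment's readout absorbs part of the negative interaction according to the other experiment's treatment share.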
arXiv Detail & Related papers (2022-10-15T17:15:51Z) - Personalizing Intervened Network for Long-tailed Sequential User
Behavior Modeling [66.02953670238647]
Tail users suffer from significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z) - Learning Transferrable Parameters for Long-tailed Sequential User
Behavior Modeling [70.64257515361972]
We argue that focusing on tail users could bring more benefits and address the long tails issue.
Specifically, we propose a gradient alignment and adopt an adversarial training scheme to facilitate knowledge transfer from the head to the tail.
arXiv Detail & Related papers (2020-10-22T03:12:02Z) - Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy [8.11839312231511]
Mixed-initiative systems allow users to interactively provide feedback to improve system performance.
Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy.
arXiv Detail & Related papers (2020-08-28T16:46:41Z) - Differentially Private ERM Based on Data Perturbation [41.37436071802578]
We measure the contributions of various training data instances on the final machine learning model.
Considering that the key to our method is to measure each data instance separately, we propose a new "data perturbation"-based (DB) paradigm for DP-ERM.
arXiv Detail & Related papers (2020-02-20T06:05:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.