Plinko: A Theory-Free Behavioral Measure of Priors for Statistical
Learning and Mental Model Updating
- URL: http://arxiv.org/abs/2107.11477v1
- Date: Fri, 23 Jul 2021 22:27:30 GMT
- Title: Plinko: A Theory-Free Behavioral Measure of Priors for Statistical
Learning and Mental Model Updating
- Authors: Peter A. V. DiBerardino, Alexandre L. S. Filipowicz, James Danckert,
Britt Anderson
- Abstract summary: We present three experiments using "Plinko", a behavioral task in which participants estimate distributions of ball drops over all available outcomes.
We show that participant priors cluster around prototypical probability distributions and that prior cluster membership may indicate learning ability.
We verify that individual participant priors are reliable representations and that learning is not impeded when faced with a physically implausible ball drop distribution.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probability distributions are central to Bayesian accounts of cognition, but
behavioral assessments do not directly measure them. Posterior distributions
are typically computed from collections of individual participant actions, yet
are used to draw conclusions about the internal structure of participant
beliefs. Also not explicitly measured are the prior distributions that
distinguish Bayesian models from others by representing initial states of
belief. Instead, priors are usually derived from experimenters' intuitions or
model assumptions and applied equally to all participants. Here we present
three experiments using "Plinko", a behavioral task in which participants
estimate distributions of ball drops over all available outcomes and where
distributions are explicitly measured before any observations. In Experiment 1,
we show that participant priors cluster around prototypical probability
distributions (Gaussian, bimodal, etc.), and that prior cluster membership may
indicate learning ability. In Experiment 2, we highlight participants' ability
to update to unannounced changes of presented distributions and how this
ability is affected by environmental manipulation. Finally, in Experiment 3, we
verify that individual participant priors are reliable representations and that
learning is not impeded when faced with a physically implausible ball drop
distribution that is dynamically defined according to individual participant
input. This task will prove useful in more closely examining mechanisms of
statistical learning and mental model updating without requiring many of the
assumptions made by more traditional computational modeling methodologies.
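To make the measurement concrete, here is a minimal sketch of how elicited Plinko priors could be scored and clustered. The slot count, prototype shapes, and KL-based cluster assignment are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def normalize(hist):
    """Turn raw slot-by-slot estimates into a probability distribution."""
    h = np.asarray(hist, dtype=float) + 1e-12  # avoid zero bins
    return h / h.sum()

def kl(p, q):
    """KL divergence D(p || q) between two distributions over slots."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical 40-slot Plinko board (the paper's exact layout may differ).
slots = np.arange(40)

# Prototype priors participants might cluster around (cf. Experiment 1).
gaussian = normalize(np.exp(-0.5 * ((slots - 19.5) / 5.0) ** 2))
bimodal = normalize(np.exp(-0.5 * ((slots - 10) / 3.0) ** 2)
                    + np.exp(-0.5 * ((slots - 29) / 3.0) ** 2))
uniform = normalize(np.ones_like(slots, dtype=float))
prototypes = {"gaussian": gaussian, "bimodal": bimodal, "uniform": uniform}

# A participant's elicited prior (raw height settings, before any drops).
elicited = normalize(np.random.default_rng(0).gamma(2.0, size=40))

# Assign the prior to the nearest prototype cluster by KL divergence.
cluster = min(prototypes, key=lambda k: kl(elicited, prototypes[k]))
print("prior cluster:", cluster)

# Score learning after observation: divergence from the true drop distribution.
true_dist = gaussian
print("KL(prior || true):", round(kl(elicited, true_dist), 3))
```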
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
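A toy sketch of the idea: estimate a per-class distribution in feature space and sample virtual minority-class features from it. The Gaussian choice and all names here are illustrative stand-ins, not ProCo's actual parameterization or loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy long-tailed features: class 0 has many samples, class 1 very few.
feats = {0: rng.normal(0.0, 1.0, size=(500, 8)),
         1: rng.normal(2.0, 1.0, size=(5, 8))}

# Estimate a per-class feature distribution (mean + diagonal variance).
params = {c: (x.mean(axis=0), x.var(axis=0) + 1e-6) for c, x in feats.items()}

def sample_virtual(c, n):
    """Sample virtual features for class c from its estimated distribution,
    so contrastive pairs are no longer starved of minority positives."""
    mu, var = params[c]
    return rng.normal(mu, np.sqrt(var), size=(n, mu.size))

virtual_minority = sample_virtual(1, 495)  # rebalance class 1
print(virtual_minority.shape)  # (495, 8)
```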
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Intervention Generalization: A View from Factor Graph Models [7.117681268784223]
We take a close look at how to warrant a leap from past experiments to novel conditions based on minimal assumptions about the factorization of the distribution of the manipulated system.
A postulated $\textit{interventional factor model}$ (IFM) may not always be informative, but it conveniently abstracts away a need for explicitly modeling unmeasured confounding and feedback mechanisms.
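A minimal numerical illustration of why such a factorization licenses the leap to novel conditions. The additive two-factor setup is a toy assumption, not the paper's estimator.

```python
# Toy interventional factor model: the outcome decomposes as
# E[Y | a, b] = f(a) + g(b), where a and b index two independent
# intervention "dials". If f and g are identified from the regimes we
# actually ran, the unseen combination (a=1, b=1) is predicted for free.
observed = {  # mean outcomes under the experiments actually performed
    (0, 0): 1.0,
    (1, 0): 3.0,   # identifies f(1) - f(0) = 2
    (0, 1): 1.5,   # identifies g(1) - g(0) = 0.5
}

# Under the additive factorization, the novel regime follows by recombination.
pred_11 = observed[(1, 0)] + observed[(0, 1)] - observed[(0, 0)]
print("predicted E[Y | a=1, b=1] =", pred_11)  # 3.5
```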
arXiv Detail & Related papers (2023-06-06T21:44:23Z)
- Learning and Predicting Multimodal Vehicle Action Distributions in a Unified Probabilistic Model Without Labels [26.303522885475406]
We present a unified probabilistic model that learns a representative set of discrete vehicle actions and predicts the probability of each action given a particular scenario.
Our model also enables us to estimate the distribution over continuous trajectories conditioned on a scenario, representing what each discrete action would look like if executed in that scenario.
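A rough sketch of the flavor using an off-the-shelf mixture model, where components play the role of discrete actions; the paper's unified, label-free model is considerably richer than this stand-in, and all data below is invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Toy unlabeled trajectories summarized by (lateral offset, speed change):
# three latent maneuvers -- keep lane, turn, brake -- none of them labeled.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.2, size=(200, 2)),   # keep lane
    rng.normal([3.0, -1.0], 0.3, size=(100, 2)),  # turn
    rng.normal([0.0, -3.0], 0.3, size=(100, 2)),  # brake
])

# Mixture components act as discrete "actions"; each component's density is
# the continuous trajectory distribution conditioned on that action.
gmm = GaussianMixture(n_components=3, random_state=0).fit(data)
print("p(action | scenario summary):", gmm.predict_proba([[2.8, -0.9]]).round(2))
print("action means:", gmm.means_.round(1))
```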
arXiv Detail & Related papers (2022-12-14T04:01:19Z)
- Fairness Transferability Subject to Bounded Distribution Shift [5.62716254065607]
Given an algorithmic predictor that is "fair" on some source distribution, will it still be fair on an unknown target distribution that differs from the source within some bound?
We study the transferability of statistical group fairness for machine learning predictors subject to bounded distribution shifts.
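A back-of-the-envelope instance of this kind of bound, assuming the target differs from the source by at most eps in total variation within each group; the paper's actual shift model and bounds may differ.

```python
# Source-distribution positive rates of a fixed classifier, by group.
rate_a, rate_b = 0.62, 0.55
source_gap = abs(rate_a - rate_b)  # demographic-parity gap = 0.07

# If each group's distribution shifts by at most eps in total variation,
# each group's positive rate can move by at most eps (the rate is the
# expectation of a 0/1 function), so the gap can grow by at most 2 * eps.
eps = 0.03
worst_case_target_gap = source_gap + 2 * eps
print(f"fair on source (gap={source_gap:.2f}); "
      f"target gap <= {worst_case_target_gap:.2f} under eps={eps} TV shift")
```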
arXiv Detail & Related papers (2022-05-31T22:16:44Z)
- Characterizing the robustness of Bayesian adaptive experimental designs to active learning bias [3.1351527202068445]
We show that active learning bias can afflict Bayesian adaptive experimental design, depending on model misspecification.
We develop an information-theoretic measure of misspecification, and show that worse misspecification implies more severe active learning bias.
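An illustrative toy, not the paper's formal measure: a linear design model fit to a quadratic truth, where the mean squared discrepancy stands in for an information-theoretic misspecification measure and the adaptive design concentrates queries exactly where the model errs most.

```python
import numpy as np

# True response is quadratic, but the experimenter's model family is linear.
# For Gaussian noise, KL from the truth to the best model in the family is
# proportional to the mean squared discrepancy, computed below.
x = np.linspace(-1, 1, 201)
true_mean = x ** 2
coef = np.polyfit(x, true_mean, deg=1)          # best linear approximation
resid = true_mean - np.polyval(coef, x)
misspec = float(np.mean(resid ** 2))
print("misspecification measure:", round(misspec, 4))

# A design optimized for a *linear* model puts all queries at the extremes.
# Under the quadratic truth, that is where the linear fit errs the most, so
# the adaptively collected data over-represents the model's worst regions:
# the more misspecified the model, the larger this sampling bias.
design = np.array([-1.0, 1.0])
err_at_design = np.mean((design ** 2 - np.polyval(coef, design)) ** 2)
print("squared error at the adaptive design points:", round(err_at_design, 4),
      "vs. average:", round(misspec, 4))
```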
arXiv Detail & Related papers (2022-05-27T01:23:11Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but encourages disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
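A minimal sketch of a D-BAT-style objective for training a second model; the inner-product agreement penalty and all inputs below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def xent(p, y):
    """Cross-entropy of predicted probabilities p against labels y."""
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def dbat_style_loss(p_new_train, y_train, p_new_ood, p_old_ood, alpha=1.0):
    """Fit the training labels, but *disagree* with an earlier model on
    out-of-distribution inputs (a stand-in for the paper's objective)."""
    fit = xent(p_new_train, y_train)
    # Agreement is high when both models put mass on the same class;
    # penalizing it pushes the new model toward a different hypothesis.
    agreement = np.mean(np.sum(p_new_ood * p_old_ood, axis=1))
    return fit + alpha * agreement

# Tiny smoke test with 2 classes.
p_tr = np.array([[0.9, 0.1], [0.2, 0.8]])
y_tr = np.array([0, 1])
p_ood_new = np.array([[0.7, 0.3]])
p_ood_old = np.array([[0.8, 0.2]])
print(round(dbat_style_loss(p_tr, y_tr, p_ood_new, p_ood_old), 3))
```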
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
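A small sketch of that setup: a linear bag-of-words classifier whose coefficients act as explanations, letting an informed participant delete the most incriminating word. The reviews, labels, and words are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the deception-detection setup.
reviews = ["lovely clean room great staff", "amazing amazing perfect stay",
           "room was dirty and staff rude", "terrible noisy room bad service"]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = genuine (illustrative labels)

vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

# A participant who sees the coefficients can target the word that most
# raises p(fake); deleting it should cut the model's confidence the most.
coefs = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
test = "amazing perfect room"
top_word = max(test.split(), key=lambda w: coefs.get(w, 0.0))
edited = test.replace(top_word, "")
for text in (test, edited):
    p_fake = clf.predict_proba(vec.transform([text]))[0, 1]
    print(f"{text!r}: p(fake) = {p_fake:.2f}")
```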
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Test-time Collective Prediction [73.74982509510961]
In many machine learning settings, multiple parties want to make joint predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
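A minimal sketch of the setting, in which agents exchange only predictive distributions; the entropy-based confidence weighting below is a naive stand-in for the paper's mechanism.

```python
import numpy as np

# Each agent shares only its predictive distribution for the test point,
# never its data or parameters.
preds = np.array([
    [0.70, 0.20, 0.10],   # agent 1
    [0.55, 0.35, 0.10],   # agent 2
    [0.10, 0.80, 0.10],   # agent 3 (trained on different data)
])

# Weight each agent by its confidence (low predictive entropy), then pool.
entropy = -np.sum(preds * np.log(preds), axis=1)
weights = np.exp(-entropy) / np.exp(-entropy).sum()
collective = weights @ preds
print("collective prediction:", collective.round(3))
```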
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- A Statistical Test for Probabilistic Fairness [11.95891442664266]
We propose a statistical hypothesis test for detecting unfair classifiers.
We show, both theoretically and empirically, that the proposed test is correct.
In addition, the proposed framework offers interpretability by identifying the most favorable perturbation of the data.
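For flavor, a generic permutation test on a parity-gap statistic; the paper's test is a specific procedure with correctness guarantees and an interpretable worst-case perturbation, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(4)

def parity_gap(pred, group):
    """Absolute difference in positive-decision rates between two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

# Classifier decisions and protected-group labels for an audit sample.
pred = rng.integers(0, 2, size=400)
group = rng.integers(0, 2, size=400)

# Null hypothesis: the classifier is fair, i.e. the group label carries no
# information about the decision, so shuffling groups leaves the gap's
# distribution unchanged.
observed = parity_gap(pred, group)
null = [parity_gap(pred, rng.permutation(group)) for _ in range(2000)]
p_value = float(np.mean(np.asarray(null) >= observed))
print(f"gap={observed:.3f}, p={p_value:.3f}")
```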
arXiv Detail & Related papers (2020-12-09T00:20:02Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
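A sketch of the core loss under the common choice of a Gaussian kernel: the squared MMD between predicted return particles and their Bellman targets. Particle values, reward, and bandwidth are illustrative; the full method trains a network to output the particles.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Squared maximum mean discrepancy with a Gaussian kernel. Matching
    under this kernel implicitly matches all orders of moments."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2
                            / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Particles for the predicted return distribution at a state, and the
# Bellman target particles r + gamma * Z(s') at the successor state.
predicted = np.array([1.0, 2.0, 2.5, 4.0])
reward, gamma = 0.5, 0.99
next_particles = np.array([1.2, 1.9, 2.6, 3.8])
target = reward + gamma * next_particles

print("MMD^2 loss:", round(mmd2(predicted, target), 4))
```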
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.