Learning to Represent Individual Differences for Choice Decision Making
- URL: http://arxiv.org/abs/2503.21704v1
- Date: Thu, 27 Mar 2025 17:10:05 GMT
- Title: Learning to Represent Individual Differences for Choice Decision Making
- Authors: Yan-Ying Chen, Yue Weng, Alexandre Filipowicz, Rumen Iliev, Francine Chen, Shabnam Hakimi, Yanxia Zhang, Matthew Lee, Kent Lyons, Charlene Wu
- Abstract summary: We use representation learning to characterize individual differences in human performance on an economic decision-making task. We demonstrate that models using representation learning to capture individual differences consistently improve decision predictions. Our results suggest that representation learning offers a useful and flexible tool to capture individual differences.
- Score: 37.97312716637515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human decision making can be challenging to predict because decisions are affected by a number of complex factors. Adding to this complexity, decision-making processes can differ considerably between individuals, and methods aimed at predicting human decisions need to take individual differences into account. Behavioral science offers methods by which to measure individual differences (e.g., questionnaires, behavioral models), but these are often limited to low dimensions and not tailored to specific prediction tasks. This paper investigates the use of representation learning to measure individual differences from behavioral experiment data. Representation learning offers a flexible approach to create individual embeddings from data that are both structured (e.g., demographic information) and unstructured (e.g., free text); this flexibility provides more options for measuring individual differences for personalization. For example, free-text responses allow for open-ended questions that are less privacy-sensitive. In the current paper, we use representation learning to characterize individual differences in human performance on an economic decision-making task. We demonstrate that models using representation learning to capture individual differences consistently improve decision predictions over models without representation learning, and even outperform well-known theory-based behavioral models used in these environments. Our results suggest that representation learning offers a useful and flexible tool for capturing individual differences.
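To make the approach concrete, here is a minimal sketch of this kind of model, assuming PyTorch; the module names (IndividualEncoder, ChoiceModel), layer sizes, and the bag-of-words treatment of free text are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch: learn an individual-difference embedding from structured
# (demographics) and unstructured (free-text) inputs, then use it alongside
# trial features to predict a binary choice. All names/sizes are hypothetical.
import torch
import torch.nn as nn

class IndividualEncoder(nn.Module):
    """Maps demographics and a bag-of-words text vector to one embedding."""
    def __init__(self, n_demo=6, vocab_size=500, embed_dim=16):
        super().__init__()
        self.demo_net = nn.Linear(n_demo, embed_dim)
        self.text_net = nn.Linear(vocab_size, embed_dim)  # toy text encoder

    def forward(self, demo, text_bow):
        return torch.relu(self.demo_net(demo) + self.text_net(text_bow))

class ChoiceModel(nn.Module):
    """Predicts a choice from trial features plus the individual embedding."""
    def __init__(self, encoder, n_trial=4, embed_dim=16):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.Linear(embed_dim + n_trial, 32),
                                  nn.ReLU(), nn.Linear(32, 1))

    def forward(self, demo, text_bow, trial):
        z = self.encoder(demo, text_bow)  # individual-difference representation
        return self.head(torch.cat([z, trial], dim=-1)).squeeze(-1)

# Smoke test on synthetic data: 32 trials from various participants.
model = ChoiceModel(IndividualEncoder())
demo, bow, trial = torch.randn(32, 6), torch.rand(32, 500), torch.randn(32, 4)
choice = torch.randint(0, 2, (32,)).float()
loss = nn.functional.binary_cross_entropy_with_logits(
    model(demo, bow, trial), choice)
loss.backward()  # the embedding is trained end-to-end on the prediction task
```

Removing the encoder and feeding the head only the trial features would yield the kind of no-representation baseline the abstract compares against.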
Related papers
- Learning under Imitative Strategic Behavior with Unforeseeable Outcomes [14.80947863438795]
We propose a Stackelberg game to model the interplay between individuals and the decision-maker.
We show that the objective difference between the two can be decomposed into three interpretable terms.
arXiv Detail & Related papers (2024-05-03T00:53:58Z)
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z)
- Quantifying Consistency and Information Loss for Causal Abstraction Learning [16.17846886492361]
We introduce a family of interventional measures that an agent may use to evaluate such a trade-off.
We consider four measures suited for different tasks, analyze their properties, and propose algorithms to evaluate and learn causal abstractions.
arXiv Detail & Related papers (2023-05-07T19:10:28Z)
- Learning Human-Compatible Representations for Case-Based Decision Support [36.01560961898229]
Algorithmic case-based decision support provides examples to help humans make sense of predicted labels.
Human-compatible representations identify nearest neighbors that are perceived as more similar by humans.
arXiv Detail & Related papers (2023-03-06T19:04:26Z)
- Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
arXiv Detail & Related papers (2023-02-21T21:41:53Z)
- Explainability's Gain is Optimality's Loss? -- How Explanations Bias Decision-making [0.0]
Explanations help to facilitate communication between the algorithm and the human decision-maker.
The causal-model semantics of feature-based explanations induce leakage from the decision-maker's prior beliefs.
Such differences can lead to sub-optimal and biased decision outcomes.
arXiv Detail & Related papers (2022-06-17T11:43:42Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Model-agnostic Fits for Understanding Information Seeking Patterns in Humans [0.0]
In decision making tasks under uncertainty, humans display characteristic biases in seeking, integrating, and acting upon information relevant to the task.
Here, we reexamine data from previous carefully designed experiments, collected at scale, that measured and catalogued these biases in aggregate form.
We design deep learning models that replicate these biases in aggregate, while also capturing individual variation in behavior.
arXiv Detail & Related papers (2020-12-09T04:34:58Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a minimal sketch of the dual-ascent idea appears after this list.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
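For the "Differentially Private and Fair Deep Learning" entry above, the following is a minimal sketch of the Lagrangian-dual idea under stated assumptions: PyTorch, synthetic data, a demographic-parity gap as the fairness constraint, and differential privacy omitted for brevity. It is not that paper's implementation.

```python
# Sketch of fairness-constrained training via Lagrangian duality.
# All data and hyperparameters are synthetic/hypothetical; the DP component
# (per-example clipping + noise) is intentionally left out to stay short.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 5)                    # hypothetical features
group = (torch.rand(256) < 0.5).long()     # protected attribute (0 or 1)
y = (X[:, 0] + 0.1 * torch.randn(256) > 0).float()

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = torch.tensor(0.0)                    # Lagrange multiplier (dual variable)
eta, eps = 0.1, 0.02                       # dual step size, tolerated fairness gap

for step in range(200):
    logits = model(X).squeeze(-1)
    bce = nn.functional.binary_cross_entropy_with_logits(logits, y)
    p = torch.sigmoid(logits)
    # Demographic-parity gap: difference in mean predicted rate across groups.
    gap = (p[group == 0].mean() - p[group == 1].mean()).abs()
    loss = bce + lam * (gap - eps)         # primal step: minimize the Lagrangian
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                  # dual step: projected gradient ascent
        lam = torch.clamp(lam + eta * (gap - eps), min=0.0)
```

Differential privacy would typically be layered on via per-example gradient clipping and noise addition (as in DP-SGD); the dual step raises lam whenever the fairness gap exceeds the tolerance eps, tightening the constraint over training.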
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.