Learning signatures of decision making from many individuals playing the same game
- URL: http://arxiv.org/abs/2302.11023v1
- Date: Tue, 21 Feb 2023 21:41:53 GMT
- Title: Learning signatures of decision making from many individuals playing the same game
- Authors: Michael J Mendelson, Mehdi Azabou, Suma Jacob, Nicola Grissom, David Darrow, Becket Ebitz, Alexander Herman, Eva L. Dyer
- Abstract summary: We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
- Score: 54.33783158658077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human behavior is incredibly complex and the factors that drive decision
making--from instinct, to strategy, to biases between individuals--often vary
over multiple timescales. In this paper, we design a predictive framework that
learns representations to encode an individual's 'behavioral style', i.e.,
long-term behavioral trends, while simultaneously predicting future actions and
choices. The model explicitly separates representations into three latent
spaces: the recent-past space, the short-term space, and the long-term space,
where we hope to capture individual differences. To simultaneously extract both
global and local variables from complex human behavior, our method combines a
multi-scale temporal convolutional network with latent prediction tasks, where
we encourage embeddings across the entire sequence, as well as subsets of the
sequence, to be mapped to similar points in the latent space. We develop and
apply our method to a large-scale behavioral dataset from 1,000 humans playing
a 3-armed bandit task, and analyze what our model's resulting embeddings reveal
about the human decision making process. In addition to predicting future
choices, we show that our model can learn rich representations of human
behavior over multiple timescales and provide signatures of differences in
individuals.
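The abstract describes the architecture only at a high level. Below is a minimal sketch, assuming PyTorch, of how a dilated temporal convolutional encoder could be split into recent-past, short-term, and long-term latent spaces with a next-choice readout for the 3-armed bandit setting. The class name, layer sizes, and the mean-pooling used to hold the long-term latent constant over a session are illustrative assumptions, not the authors' released code, and the latent prediction losses are omitted.

```python
# Minimal sketch (assumes PyTorch). Names, sizes, and pooling choices are
# illustrative guesses at the architecture the abstract describes, not the
# authors' released implementation.
import torch
import torch.nn as nn


class MultiScaleBehaviorEncoder(nn.Module):
    def __init__(self, n_actions=3, d_recent=16, d_short=16, d_long=16):
        super().__init__()
        d_total = d_recent + d_short + d_long
        # Dilated 1-D convolutions give progressively larger receptive fields,
        # so deeper layers summarize longer stretches of the choice history.
        self.tcn = nn.Sequential(
            nn.Conv1d(n_actions, 64, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(64, d_total, kernel_size=3, padding=4, dilation=4),
        )
        self.dims = [d_recent, d_short, d_long]
        self.readout = nn.Linear(d_total, n_actions)  # next-choice predictor

    def forward(self, choices_onehot):
        # choices_onehot: (batch, trials, n_actions) one-hot choice history.
        h = self.tcn(choices_onehot.transpose(1, 2)).transpose(1, 2)
        z_recent, z_short, z_long = torch.split(h, self.dims, dim=-1)
        # Pool the long-term ("behavioral style") latent over the whole session
        # so it stays constant for an individual.
        z_long = z_long.mean(dim=1, keepdim=True).expand_as(z_long)
        z = torch.cat([z_recent, z_short, z_long], dim=-1)
        return self.readout(z), (z_recent, z_short, z_long)


# Toy usage: 8 simulated players, 200 trials of a 3-armed bandit task.
x = torch.nn.functional.one_hot(torch.randint(0, 3, (8, 200)), 3).float()
logits, latents = MultiScaleBehaviorEncoder()(x)
print(logits.shape)  # torch.Size([8, 200, 3]): per-trial choice predictions
```

Mean-pooling the long-term channel here is a simple stand-in for the paper's latent prediction objective, which instead encourages embeddings of the whole sequence and of its subsets to map to similar points in latent space.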
Related papers
- Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z)
- Purposer: Putting Human Motion Generation in Context [30.706219830149504]
We present a novel method to generate human motion to populate 3D indoor scenes.
It can be controlled with various combinations of conditioning signals such as a path in a scene, target poses, past motions, and scenes represented as 3D point clouds.
arXiv Detail & Related papers (2024-04-19T15:16:04Z)
- Social-Transmotion: Promptable Human Trajectory Prediction [65.80068316170613]
Social-Transmotion is a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior.
Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY.
arXiv Detail & Related papers (2023-12-26T18:56:49Z)
- AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model [69.12623428463573]
AlignDiff is a novel framework that quantifies human preferences, covering their abstractness, and uses them to guide diffusion planning.
It can accurately match user-customized behaviors and efficiently switch from one to another.
We demonstrate its superior performance on preference matching, switching, and covering compared to other baselines.
arXiv Detail & Related papers (2023-10-03T13:53:08Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Learning Behavior Representations Through Multi-Timescale Bootstrapping [8.543808476554695]
We introduce Bootstrap Across Multiple Scales (BAMS), a multi-scale representation learning model for behavior.
We first apply our method on a dataset of quadrupeds navigating in different terrain types, and show that our model captures the temporal complexity of behavior.
arXiv Detail & Related papers (2022-06-14T17:57:55Z)
- Differentially Private Multivariate Time Series Forecasting of Aggregated Human Mobility With Deep Learning: Input or Gradient Perturbation? [14.66445694852729]
This paper investigates the problem of forecasting multivariate aggregated human mobility while preserving the privacy of the individuals concerned.
Differential privacy, a state-of-the-art formal notion, has been used as the privacy guarantee in two different and independent steps when training deep learning models.
As shown in the results, differentially private deep learning models trained under gradient or input perturbation achieve nearly the same performance as non-private deep learning models.
arXiv Detail & Related papers (2022-05-01T10:11:04Z)
- Multi-Person Extreme Motion Prediction with Cross-Interaction Attention [44.35977105396732]
Human motion prediction aims to forecast future human poses given a sequence of past 3D skeletons.
We assume that the inputs to our system are two sequences of past skeletons for two interacting persons.
We devise a novel cross-interaction attention mechanism that learns to predict cross dependencies between self poses and the poses of the other person.
arXiv Detail & Related papers (2021-05-18T20:52:05Z)
- CharacterGAN: Few-Shot Keypoint Character Animation and Reposing [64.19520387536741]
We introduce CharacterGAN, a generative model that can be trained on only a few samples of a given character.
Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback.
We show that our approach outperforms recent baselines and creates realistic animations for diverse characters.
arXiv Detail & Related papers (2021-02-05T12:38:15Z)
- 3D Human motion anticipation and classification [8.069283749930594]
We propose a novel sequence-to-sequence model for human motion prediction and feature learning.
Our model learns to predict multiple future sequences of human poses from the same input sequence.
We show that it takes less than half the number of epochs to train an activity recognition network when using the features learned by the discriminator.
arXiv Detail & Related papers (2020-12-31T00:19:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.