Response Time Improves Choice Prediction and Function Estimation for
Gaussian Process Models of Perception and Preferences
- URL: http://arxiv.org/abs/2306.06296v1
- Date: Fri, 9 Jun 2023 23:22:49 GMT
- Title: Response Time Improves Choice Prediction and Function Estimation for
Gaussian Process Models of Perception and Preferences
- Authors: Michael Shvartsman, Benjamin Letham, Stephen Keeley
- Abstract summary: Models for human choice prediction in preference learning and psychophysics often consider only binary response data.
We propose a novel differentiable approximation to the diffusion decision model (DDM) likelihood.
We then use this new likelihood to incorporate RTs into GP models for binary choices.
- Score: 4.6584146134061095
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Models for human choice prediction in preference learning and psychophysics
often consider only binary response data, requiring many samples to accurately
learn preferences or perceptual detection thresholds. The response time (RT) to
make each choice captures additional information about the decision process;
however, existing models incorporating RTs for choice prediction do so in fully
parametric settings or over discrete stimulus sets. This is in part because the
de facto standard model for choice RTs, the diffusion decision model (DDM),
does not admit tractable, differentiable inference. The DDM thus cannot be
easily integrated with flexible models for continuous, multivariate function
approximation, particularly Gaussian process (GP) models. We propose a novel
differentiable approximation to the DDM likelihood using a family of known,
skewed three-parameter distributions. We then use this new likelihood to
incorporate RTs into GP models for binary choices. Our RT-choice GPs enable
both better latent value estimation and held-out choice prediction relative to
baselines, which we demonstrate on three real-world multivariate datasets
covering both human psychophysics and preference learning applications.
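As a hedged sketch of the core idea (not the paper's implementation): for a single-boundary diffusion process, the first-passage time follows the Wald (inverse Gaussian) distribution, whose log-density is differentiable in the drift and boundary, so an RT term can simply be added to a binary-choice log-likelihood. The link between the latent value `f` and the drift rate, and the helper names below, are invented for illustration; the paper uses a skewed three-parameter family to approximate the full two-boundary DDM likelihood.

```python
import numpy as np

def wald_logpdf(rt, drift, boundary):
    """Log-density of the Wald (inverse Gaussian) distribution: the exact
    first-passage-time law of a single-boundary drift-diffusion process,
    with mean = boundary / drift and shape = boundary ** 2."""
    mu = boundary / drift
    lam = boundary ** 2
    return (0.5 * np.log(lam / (2 * np.pi * rt ** 3))
            - lam * (rt - mu) ** 2 / (2 * mu ** 2 * rt))

def choice_rt_loglik(choice, rt, f, boundary=1.0):
    """Toy joint log-likelihood of a binary choice and its RT given a
    latent value f: a Bernoulli choice term plus a Wald RT term whose
    drift grows with |f| (stronger evidence -> faster responses).
    The drift link abs(f) is a hypothetical choice for this sketch."""
    p = 1.0 / (1.0 + np.exp(-f))                 # P(choice = 1)
    ll_choice = np.log(p if choice == 1 else 1.0 - p)
    ll_rt = wald_logpdf(rt, drift=abs(f) + 1e-3, boundary=boundary)
    return ll_choice + ll_rt
```

Because every term is smooth in `f`, this likelihood could in principle replace the usual Bernoulli likelihood inside a GP classification model, which is the role the paper's approximation plays.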
Related papers
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference [55.150117654242706]
We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.
As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
arXiv Detail & Related papers (2024-11-01T21:11:48Z)
- Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs)
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not make any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z)
- DF2: Distribution-Free Decision-Focused Learning [53.2476224456902]
Decision-focused learning (DFL) has recently emerged as a powerful approach for predict-then-optimize problems.
Existing end-to-end DFL methods are hindered by three significant bottlenecks: model error, sample average approximation error, and distribution-based parameterization of the expected objective.
We present DF2 -- the first distribution-free decision-focused learning method explicitly designed to address these three bottlenecks.
arXiv Detail & Related papers (2023-08-11T00:44:46Z)
- Latent Time Neural Ordinary Differential Equations [0.2538209532048866]
We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver.
We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times.
We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
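The end-time idea can be illustrated with a minimal sketch (not the authors' implementation): integrate a simple ODE to an end-time T drawn from a distribution, and summarize predictions over sampled end-times. The toy dynamics and the log-normal parameters below are invented for illustration; a real model would learn both.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(z):
    """Toy ODE dynamics dz/dt = f(z); stands in for a learned network."""
    return np.tanh(z)

def integrate(z0, T, steps=100):
    """Fixed-step Euler integration of dz/dt = dynamics(z) from t=0 to t=T."""
    z, dt = np.array(z0, dtype=float), T / steps
    for _ in range(steps):
        z = z + dt * dynamics(z)
    return z

def predict_with_uncertain_end_time(z0, mu=0.0, sigma=0.3, n_samples=50):
    """Sample end-times T ~ LogNormal(mu, sigma), integrate to each, and
    return the mean and std of the terminal states: the spread reflects
    uncertainty induced by the distribution over T."""
    Ts = rng.lognormal(mean=mu, sigma=sigma, size=n_samples)
    outs = np.stack([integrate(z0, T) for T in Ts])
    return outs.mean(axis=0), outs.std(axis=0)
```

In the adaptive (ALT-NODE) variant, the parameters of the end-time distribution would depend on the input rather than being shared across all data points.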
arXiv Detail & Related papers (2021-12-23T17:31:47Z)
- Improving Robustness and Uncertainty Modelling in Neural Ordinary Differential Equations [0.2538209532048866]
We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver.
We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times.
We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
arXiv Detail & Related papers (2021-12-23T16:56:10Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose? [0.2836066255205732]
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models.
We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin.
We also find that deterministic models are on par; in fact, they consistently (though not significantly) outperform their probabilistic counterparts.
arXiv Detail & Related papers (2021-07-24T11:38:25Z)
- Gaussian Process Latent Class Choice Models [7.992550355579791]
We present a non-parametric class of probabilistic machine learning within discrete choice models (DCMs).
The proposed model probabilistically assigns individuals to behaviorally homogeneous clusters (latent classes) using GPs.
The model is tested on two different mode choice applications and compared against different LCCM benchmarks.
arXiv Detail & Related papers (2021-01-28T19:56:42Z)
- Dynamic Bayesian Approach for decision-making in Ego-Things [8.577234269009042]
This paper presents a novel approach to detect abnormalities in dynamic systems based on multisensory data and feature selection.
Growing neural gas (GNG) is employed for clustering multisensory data into a set of nodes.
Our method uses a Markov Jump particle filter (MJPF) for state estimation and abnormality detection.
arXiv Detail & Related papers (2020-10-28T11:38:51Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.