An Interpretable Determinantal Choice Model for Subset Selection
- URL: http://arxiv.org/abs/2302.11477v1
- Date: Wed, 22 Feb 2023 16:26:38 GMT
- Title: An Interpretable Determinantal Choice Model for Subset Selection
- Authors: Sander Aarts and David B. Shmoys and Alex Coy
- Abstract summary: This paper connects two subset choice models: intuitive random utility models and tractable determinantal point processes.
A determinantal choice model that enjoys the best of both worlds is specified.
A simulation study verifies that the model can learn a continuum of negative dependencies from data, and an applied study produces novel insights on wireless interference in LoRa networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding how subsets of items are chosen from offered sets is critical
to assortment planning, wireless network planning, and many other applications.
There are two seemingly unrelated subset choice models that capture
dependencies between items: intuitive and interpretable random utility models;
and tractable determinantal point processes (DPPs). This paper connects the
two. First, all DPPs are shown to be random utility models. Next, a
determinantal choice model that enjoys the best of both worlds is specified;
the model is shown to subsume logistic regression when dependence is minimal,
and multinomial logit (MNL) when dependence is maximally negative. This makes the model
interpretable, while retaining the tractability of DPPs. A simulation study
verifies that the model can learn a continuum of negative dependencies from
data, and an applied study using original experimental data produces novel
insights on wireless interference in LoRa networks.
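To make the determinantal structure concrete, the following is a minimal sketch of the standard L-ensemble subset likelihood that determinantal choice models build on, P(S) = det(L_S) / det(L + I). This is not the authors' code; the quality-times-similarity kernel below is an illustrative assumption.

```python
import numpy as np

def dpp_log_likelihood(L, subset):
    """Log-probability that an L-ensemble DPP selects exactly `subset`:
    log det(L_S) - log det(L + I), where L_S is the principal submatrix
    of the PSD kernel L indexed by the chosen items."""
    _, logdet_S = np.linalg.slogdet(L[np.ix_(subset, subset)])
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_S - logdet_norm

# Illustrative kernel: per-item quality q scales the diagonal, similarity S
# fills the off-diagonal. Items 0 and 1 are similar; item 2 is dissimilar.
q = np.array([2.0, 1.5, 1.0])
S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
L = np.sqrt(np.outer(q, q)) * S

print(np.exp(dpp_log_likelihood(L, [0, 1])))  # similar pair: suppressed
print(np.exp(dpp_log_likelihood(L, [0, 2])))  # dissimilar pair: more likely
```

With the off-diagonal similarities at zero the determinant factors over items and the subset probabilities decouple, consistent with the logistic-regression limit the abstract describes; strengthening the similarities increases the negative dependence, the continuum the simulation study probes.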
Related papers
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time, with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
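As a rough illustration of the two linear models in the entry above (the feature maps, dimensions, and names here are hypothetical, not the paper's notation):

```python
import numpy as np

d = 8  # hypothetical feature dimension
rng = np.random.default_rng(0)

def phi(s, a, c=None):
    """Hypothetical feature map for a state-action pair; passing a context
    `c` yields a context-specific representation (used by Model I)."""
    seed = hash((s, a, c)) % (2**32)
    return np.random.default_rng(seed).normal(size=d)

# Model I: context-varying representations phi(s, a, c), shared weights.
theta_shared = rng.normal(size=d)
def q_model_I(s, a, c):
    return phi(s, a, c) @ theta_shared

# Model II: a common representation phi(s, a), context-varying weights.
theta_by_context = {c: rng.normal(size=d) for c in range(3)}
def q_model_II(s, a, c):
    return phi(s, a) @ theta_by_context[c]

print(q_model_I(0, 1, c=2), q_model_II(0, 1, c=2))
```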
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
- Response Time Improves Choice Prediction and Function Estimation for Gaussian Process Models of Perception and Preferences [4.6584146134061095]
Models for human choice prediction in preference learning and psychophysics often consider only binary response data.
We propose a novel differentiable approximation to the diffusion decision model (DDM) likelihood.
We then use this new likelihood to incorporate response times (RTs) into Gaussian process (GP) models for binary choices.
arXiv Detail & Related papers (2023-06-09T23:22:49Z)
- A Nonparametric Approach with Marginals for Modeling Consumer Choice [5.829992438125586]
The marginal distribution model (MDM) is inspired by the usefulness of similar characterizations for the random utility model (RUM).
We show that MDM exhibits competitive power and prediction performance compared to RUM and parametric models.
arXiv Detail & Related papers (2022-08-12T04:43:26Z)
- Sharing pattern submodels for prediction with missing values [12.981974894538668]
Missing values are unavoidable in many applications of machine learning and present challenges both during training and at test time.
We propose an alternative approach, called sharing pattern submodels, which i) makes predictions robust to missing values at test time, ii) maintains or improves the predictive power of pattern submodels, and iii) has a short description, enabling improved interpretability.
arXiv Detail & Related papers (2022-06-22T15:09:40Z)
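The entry above builds on plain pattern submodels; here is a minimal sketch of that baseline, assuming scikit-learn and NaN-coded missingness. The paper's actual contribution, sharing parameters across patterns, is not implemented here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_pattern_submodels(X, y):
    """Fit one classifier per missingness pattern, each trained only on
    the rows showing that pattern and the features observed under it."""
    observed = ~np.isnan(X)
    models = {}
    for pattern in np.unique(observed, axis=0):
        rows = (observed == pattern).all(axis=1)
        models[tuple(pattern)] = LogisticRegression().fit(
            X[np.ix_(rows, pattern)], y[rows])
    return models

def predict_one(models, x):
    """Route a test point to the submodel matching its missingness pattern."""
    pattern = ~np.isnan(x)
    return models[tuple(pattern)].predict(x[pattern].reshape(1, -1))
```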
- On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study [65.17429512679695]
In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.
Despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.
arXiv Detail & Related papers (2021-06-02T00:48:33Z)
- Paired Examples as Indirect Supervision in Latent Decision Models [109.76417071249945]
We introduce a way to leverage paired examples that provide stronger cues for learning latent decisions.
We apply our method to improve compositional question answering using neural module networks on the DROP dataset.
arXiv Detail & Related papers (2021-04-05T03:58:30Z)
- Disentangled Recurrent Wasserstein Autoencoder [17.769077848342334]
The recurrent Wasserstein Autoencoder (R-WAE) is a new framework for generative modeling of sequential data.
R-WAE disentangles the representation of an input sequence into static and dynamic factors.
Our models outperform other baselines with the same settings in terms of disentanglement and unconditional video generation.
arXiv Detail & Related papers (2021-01-19T07:43:25Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
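A toy sketch of the shared-response generative model described in the entry above, with data from each subject modeled as a linear mixture of common independent sources plus noise; the array shapes and noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, t, n_subjects = 4, 500, 3  # sources, time points, subjects (assumed sizes)

# Shared non-Gaussian sources: Laplace draws keep the ICA problem identifiable.
sources = rng.laplace(size=(k, t))

# Each subject observes x_i = A_i @ sources + noise, with a private mixing A_i.
views = [rng.normal(size=(k, k)) @ sources + 0.1 * rng.normal(size=(k, t))
         for _ in range(n_subjects)]

# MultiView ICA would jointly estimate the per-subject unmixing matrices and
# recover the common sources underlying all of the views.
```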
- DGSAC: Density Guided Sampling and Consensus [4.808421423598809]
Kernel Residual Density is a key differentiator between inliers and outliers.
We propose two model selection algorithms: one based on an optimal quadratic program, and one greedy.
We evaluate our method on a wide variety of tasks, including planar segmentation, motion segmentation, vanishing point estimation, plane fitting to 3D point clouds, and line and circle fitting.
arXiv Detail & Related papers (2020-06-03T17:42:53Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)