Learning Choice Functions with Gaussian Processes
- URL: http://arxiv.org/abs/2302.00406v1
- Date: Wed, 1 Feb 2023 12:46:43 GMT
- Title: Learning Choice Functions with Gaussian Processes
- Authors: Alessio Benavoli, Dario Azzimonti, Dario Piga
- Abstract summary: In consumer theory, ranking available objects by means of preference relations yields the most common description of individual choices.
We propose a choice-model which allows an individual to express a set-valued choice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In consumer theory, ranking available objects by means of preference
relations yields the most common description of individual choices. However,
preference-based models assume that individuals: (1) give their preferences
only between pairs of objects; (2) are always able to pick the best preferred
object. In many situations, they may be instead choosing out of a set with more
than two elements and, because of lack of information and/or incomparability
(objects with contradictory characteristics), they may not be able to select a
single most preferred object. To address these situations, we need a
choice-model which allows an individual to express a set-valued choice. Choice
functions provide such a mathematical framework. We propose a Gaussian Process
model to learn choice functions from choice-data. The proposed model assumes a
multiple utility representation of a choice function based on the concept of
Pareto rationalization, and derives a strategy to learn both the number and the
values of these latent multiple utilities. Simulation experiments demonstrate
that the proposed model outperforms the state-of-the-art methods.
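The Pareto-rationalization idea behind the model can be illustrated with a minimal sketch: an object is chosen from a set exactly when no other object in the set dominates it on every latent utility. The utilities and objects below are toy placeholders for illustration, not the GP-learned utilities from the paper:

```python
def pareto_choice(objects, utilities):
    """Return the subset of `objects` that no other object Pareto-dominates.

    `utilities` is a list of functions u_j mapping an object to a real value;
    x dominates y if u_j(x) >= u_j(y) for all j and u_j(x) > u_j(y) for some j.
    """
    def dominates(x, y):
        vals = [(u(x), u(y)) for u in utilities]
        return all(ux >= uy for ux, uy in vals) and any(ux > uy for ux, uy in vals)

    return {y for y in objects if not any(dominates(x, y) for x in objects)}

# Two toy utilities with contradictory characteristics: quality vs. cheapness.
objects = [(1.0, 0.2), (0.5, 0.9), (0.4, 0.1)]
utilities = [lambda o: o[0], lambda o: o[1]]
# (0.4, 0.1) is dominated by (0.5, 0.9); the other two objects are
# incomparable, so the choice is set-valued: {(1.0, 0.2), (0.5, 0.9)}.
```

A singleton domain trivially chooses itself, and any incomparable pair survives together, which is exactly the set-valued behaviour the abstract motivates.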
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Differentiating Choices via Commonality for Multiple-Choice Question Answering [54.04315943420376]
In multiple-choice question answering, comparing the available choices can provide valuable clues for identifying the right answer.
Existing models often rank each choice separately, overlooking the context provided by other choices.
We propose a novel model by differentiating choices through identifying and eliminating their commonality, called DCQA.
arXiv Detail & Related papers (2024-08-21T12:05:21Z)
- DsDm: Model-Aware Dataset Selection with Datamodels [81.01744199870043]
Standard practice is to filter for examples that match human notions of data quality.
We find that selecting according to similarity with "high quality" data sources may not increase (and can even hurt) performance compared to randomly selecting data.
Our framework avoids handpicked notions of data quality, and instead models explicitly how the learning process uses train datapoints to predict on the target tasks.
arXiv Detail & Related papers (2024-01-23T17:22:00Z)
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
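The prior-separation idea can be related to the expensive baseline it approximates: averaging each answer's probability over all orderings of the options cancels any positional prior. A simplified, self-contained sketch (the `biased_predict` model and its numbers are invented for illustration; this is not PriDe's actual estimation procedure):

```python
from itertools import permutations

def permutation_debias(predict, contents):
    """Average each answer's probability over all orderings of the options,
    cancelling the model's positional prior. Prior-estimation methods such
    as PriDe aim to approximate this effect far more cheaply."""
    orders = list(permutations(range(len(contents))))
    score = [0.0] * len(contents)
    for order in orders:
        probs = predict([contents[i] for i in order])  # probs over positions
        for pos, i in enumerate(order):
            score[i] += probs[pos] / len(orders)
    return score

# Toy biased model: prefers the content "right" but also adds extra
# probability mass to whatever sits at the first option ID.
def biased_predict(ordered):
    base = [0.6 if c == "right" else 0.2 for c in ordered]
    base[0] += 0.5  # positional bias toward option "A"
    total = sum(base)
    return [b / total for b in base]

# With "right" in the middle, the raw model picks position 0 (the wrong
# answer); the permutation-averaged scores recover "right" as the winner.
```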
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
- An Interpretable Determinantal Choice Model for Subset Selection [0.0]
This paper connects two subset choice models: intuitive random utility models and tractable determinantal point processes.
A determinantal choice model that enjoys the best of both worlds is specified.
A simulation study verifies that the model can learn a continuum of negative dependencies from data, and an applied study produces novel insights on wireless interference in LoRa networks.
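The determinantal side of this combination can be sketched directly: a determinantal point process scores a subset by the determinant of the corresponding kernel submatrix, so similar items (large off-diagonal entries) are unlikely to be chosen together. The kernel below is a made-up toy, not the paper's learned model:

```python
def det(M):
    """Determinant via Laplace expansion; fine for the tiny matrices here."""
    n = len(M)
    if n == 0:
        return 1.0
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += ((-1) ** j) * M[0][j] * det(minor)
    return total

def dpp_score(L, S):
    """Unnormalised probability of choosing subset S under kernel L: det(L_S)."""
    sub = [[L[i][j] for j in S] for i in S]
    return det(sub)

# Items 0 and 1 are highly similar (off-diagonal 0.9); item 2 is unrelated.
L = [[1.0, 0.9, 0.0],
     [0.9, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
# det(L_{0,1}) = 1 - 0.81 = 0.19, while det(L_{0,2}) = 1.0, so the
# dissimilar pair {0, 2} is far more likely than the similar pair {0, 1}.
```

Tuning the off-diagonal entries toward zero weakens the effect, which is the "continuum of negative dependencies" the summary refers to.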
arXiv Detail & Related papers (2023-02-22T16:26:38Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries.
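The vector-valued reward setup can be sketched as follows: each trajectory accrues a reward vector, a user with an (unknown) preference vector w values it by the dot product of w with the accumulated rewards, and a comparison query simply compares two such scalarized values. All names and numbers here are illustrative assumptions, not the paper's algorithm:

```python
def scalarized_return(trajectory, w):
    """User's value for a trajectory: preference vector w dotted with the
    per-objective rewards summed over all steps of the trajectory."""
    totals = [sum(step[k] for step in trajectory) for k in range(len(w))]
    return sum(wk * tk for wk, tk in zip(w, totals))

def compare(traj_a, traj_b, w):
    """Simulated comparison query: which trajectory does the user prefer?"""
    return 'a' if scalarized_return(traj_a, w) >= scalarized_return(traj_b, w) else 'b'

# Two objectives (say, speed vs. safety); this user weights safety higher.
w = [0.3, 0.7]
traj_a = [(1.0, 0.0), (1.0, 0.0)]  # fast but unsafe
traj_b = [(0.2, 0.8), (0.2, 0.8)]  # slow but safe
# compare(traj_a, traj_b, w) returns 'b' for this user.
```

An elicitation algorithm in this setting uses the answers to such queries to narrow down w without ever observing it directly.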
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
- Choice functions based multi-objective Bayesian optimisation [1.0742675209112622]
We introduce a new framework for multi-objective Bayesian optimisation where the multi-objective functions can only be accessed via choice judgements.
By placing a Gaussian process prior on f and deriving a novel likelihood model for choice data, we propose a Bayesian framework for choice function learning.
arXiv Detail & Related papers (2021-10-15T17:24:03Z)
- True Few-Shot Learning with Language Models [78.42578316883271]
We evaluate the few-shot ability of LMs when held-out examples are unavailable.
Our findings suggest that prior work significantly overestimated the true few-shot ability of LMs.
arXiv Detail & Related papers (2021-05-24T17:55:51Z)
- Choice Set Confounding in Discrete Choice [29.25891648918572]
Existing learning methods overlook how choice set assignment affects the data.
We adapt methods from causal inference to the discrete choice setting.
We show that accounting for choice set confounding makes choices observed in hotel booking more consistent with rational utility-maximization.
arXiv Detail & Related papers (2021-05-17T15:39:02Z)
- Learning Choice Functions via Pareto-Embeddings [3.1410342959104725]
We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector.
We propose a learning algorithm that minimizes a differentiable loss function suitable for this task.
arXiv Detail & Related papers (2020-07-14T09:34:44Z)
- True to the Model or True to the Data? [9.462808515258464]
We argue that the choice comes down to whether it is desirable to be true to the model or true to the data.
We show how a different choice of value function performs better in each scenario, and how possible attributions are impacted by modeling choices.
arXiv Detail & Related papers (2020-06-29T17:54:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.