Cautious Learning of Multiattribute Preferences
- URL: http://arxiv.org/abs/2206.07341v1
- Date: Wed, 15 Jun 2022 07:54:16 GMT
- Title: Cautious Learning of Multiattribute Preferences
- Authors: Hugo Gilbert (LAMSADE), Mohamed Ouaguenouni, Meltem Ozturk, Olivier
Spanjaard
- Abstract summary: This paper is dedicated to a cautious learning methodology for predicting preferences between alternatives characterized by binary attributes.
By "cautious", we mean that the model learned to represent the multi-attribute preferences is general enough to be compatible with any strict weak order on the alternatives.
- Score: 2.6151761714896122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is dedicated to a cautious learning methodology for predicting
preferences between alternatives characterized by binary attributes (formally,
each alternative is seen as a subset of attributes). By "cautious", we mean
that the model learned to represent the multi-attribute preferences is general
enough to be compatible with any strict weak order on the alternatives, and
that we allow ourselves not to predict some preferences if the data collected
are not compatible with a reliable prediction. A predicted preference will be
considered reliable if all the simplest models (following Occam's razor
principle) explaining the training data agree on it. Predictions are based on
an ordinal dominance relation between alternatives [Fishburn and LaValle,
1996]. The dominance relation relies on an uncertainty set encompassing the
possible values of the parameters of the multi-attribute utility function.
Numerical tests are provided to evaluate the richness and the reliability of
the predictions made.
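To make the dominance idea concrete, below is a minimal sketch, not the authors' exact algorithm: it assumes a simple additive utility over binary attributes, whereas the paper's model (following Fishburn and LaValle, 1996) is more general and additionally restricts attention to the simplest parameterizations explaining the data, which this sketch omits. All function names, the weight bounds, and the margin parameter are illustrative assumptions.

```python
# Sketch of cautious preference prediction with an additive utility over
# binary attributes. The uncertainty set is the polytope of weight vectors
# consistent with the observed strict preferences; x is predicted to be
# preferred to y only if every consistent weight vector ranks x above y,
# checked with two linear programs (a dominance-style test).
import numpy as np
from scipy.optimize import linprog

def consistency_polytope(train_prefs, margin=1e-6):
    """Constraints A_ub @ w <= b_ub encoding u(a) - u(b) >= margin for each observed a > b."""
    A_ub, b_ub = [], []
    for a, b in train_prefs:  # a, b are 0/1 attribute vectors, a observed preferred to b
        A_ub.append(np.asarray(b, float) - np.asarray(a, float))  # w@(b-a) <= -margin
        b_ub.append(-margin)
    return np.array(A_ub), np.array(b_ub)

def cautious_predict(x, y, train_prefs, n_attr, margin=1e-6):
    """Return '>', '<' or 'abstain' depending on whether all consistent weights agree."""
    A_ub, b_ub = consistency_polytope(train_prefs, margin)
    x, y = np.asarray(x, float), np.asarray(y, float)
    bounds = [(-1.0, 1.0)] * n_attr  # bounded weights keep the LPs well posed (assumption)

    def worst_case_gap(u, v):
        # Minimise w@u - w@v over the consistency polytope.
        res = linprog(c=u - v, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.fun if res.success else None

    gap_xy = worst_case_gap(x, y)
    gap_yx = worst_case_gap(y, x)
    if gap_xy is not None and gap_xy > 0:
        return ">"        # every consistent model prefers x to y
    if gap_yx is not None and gap_yx > 0:
        return "<"        # every consistent model prefers y to x
    return "abstain"      # the data do not support a reliable prediction

# Toy usage: three binary attributes, two observed preferences.
prefs = [((1, 1, 0), (0, 1, 0)), ((1, 0, 1), (1, 0, 0))]
print(cautious_predict((1, 1, 1), (0, 1, 0), prefs, n_attr=3))  # '>'
```

In this toy example the two observations force both the first and third attribute weights to be positive, so every consistent model ranks (1, 1, 1) above (0, 1, 0) and the prediction is made; pairs on which consistent models disagree are left unpredicted, which is the "cautious" behaviour described in the abstract.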
Related papers
- Multi-model Ensemble Conformal Prediction in Dynamic Environments [14.188004615463742]
We introduce a novel adaptive conformal prediction framework, where the model used for creating prediction sets is selected on the fly from multiple candidate models.
The proposed algorithm is proven to achieve strongly adaptive regret over all intervals while maintaining valid coverage.
arXiv Detail & Related papers (2024-11-06T05:57:28Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Robust Ordinal Regression for Subsets Comparisons with Interactions [2.6151761714896122]
This paper is dedicated to a robust ordinal method for learning the preferences of a decision maker between subsets.
The decision model, derived from Fishburn and LaValle, is general enough to be compatible with any strict weak order on subsets.
A predicted preference is considered reliable if all the simplest models (Occam's razor) explaining the preference data agree on it.
arXiv Detail & Related papers (2023-08-07T07:54:33Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Calibrated Selective Classification [34.08454890436067]
We develop a new approach to selective classification in which we propose a method for rejecting examples with "uncertain" uncertainties.
We present a framework for learning selectively calibrated models, where a separate selector network is trained to improve the selective calibration error of a given base model.
We demonstrate the empirical effectiveness of our approach on multiple image classification and lung cancer risk assessment tasks.
arXiv Detail & Related papers (2022-08-25T13:31:09Z)
- $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization [40.581619201120716]
We present $\Delta$-UQ -- a novel, general-purpose uncertainty estimator using the concept of anchoring in predictive models.
We find that this uncertainty is deeply connected to improper sampling of the input data and to inherent noise, enabling us to estimate the total uncertainty in any system.
arXiv Detail & Related papers (2021-10-05T17:44:31Z)
- Employing an Adjusted Stability Measure for Multi-Criteria Model Fitting on Data Sets with Similar Features [0.1127980896956825]
We show that our approach achieves the same or better predictive performance compared to the two established approaches.
Our approach succeeds at selecting the relevant features while avoiding irrelevant or redundant features.
For data sets with many similar features, the feature selection stability must be evaluated with an adjusted stability measure.
arXiv Detail & Related papers (2021-06-15T12:48:07Z)
- Invariant Rationalization [84.1861516092232]
A typical rationalization criterion, i.e. maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale.
We introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments.
We show both theoretically and empirically that the proposed rationales can rule out spurious correlations, generalize better to different test scenarios, and align better with human judgments.
arXiv Detail & Related papers (2020-03-22T00:50:27Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.