Representing and Reasoning with Multi-Stakeholder Qualitative Preference Queries
- URL: http://arxiv.org/abs/2307.16307v1
- Date: Sun, 30 Jul 2023 19:52:59 GMT
- Title: Representing and Reasoning with Multi-Stakeholder Qualitative Preference Queries
- Authors: Samik Basu, Vasant Honavar, Ganesh Ram Santhanam, Jia Tao
- Abstract summary: We offer the first formal treatment of reasoning with multi-stakeholder qualitative preferences.
We introduce a query language for expressing queries against such preferences over sets of outcomes that satisfy specified criteria.
We present experimental results that demonstrate the feasibility of our approach.
- Score: 9.768677073327423
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many decision-making scenarios, e.g., public policy, healthcare, business,
and disaster response, require accommodating the preferences of multiple
stakeholders. We offer the first formal treatment of reasoning with
multi-stakeholder qualitative preferences in a setting where stakeholders
express their preferences in a qualitative preference language, e.g., CP-net,
CI-net, TCP-net, CP-Theory. We introduce a query language for expressing
queries against such preferences over sets of outcomes that satisfy specified
criteria, e.g., $\psi_1 \succ_A \psi_2$ (read loosely as the set of
outcomes satisfying $\psi_1$ that are preferred over outcomes satisfying
$\psi_2$ by a set of stakeholders $A$). Motivated by practical application
scenarios, we introduce and analyze several alternative semantics for such
queries, and examine their interrelationships. We provide a provably correct
algorithm for answering multi-stakeholder qualitative preference queries using
model checking in alternation-free $\mu$-calculus. We present experimental
results that demonstrate the feasibility of our approach.
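To pin down what such a query could mean, here is a minimal brute-force sketch over an explicitly enumerated outcome space. It implements one plausible semantics (every stakeholder in $A$ prefers each $\psi_1$-outcome to every $\psi_2$-outcome); the paper analyzes several alternative semantics and answers queries via $\mu$-calculus model checking rather than enumeration. All names and the toy encoding below are illustrative assumptions.

```python
from itertools import product

# Toy outcome space: all assignments to three binary variables.
VARS = ["x", "y", "z"]
OUTCOMES = [dict(zip(VARS, bits)) for bits in product([0, 1], repeat=len(VARS))]

def satisfies(outcome, criterion):
    # criterion: iterable of (variable, value) literals, read conjunctively
    return all(outcome[v] == val for v, val in criterion)

def answer_query(psi1, psi2, A, prefers):
    # One candidate semantics: outcomes satisfying psi1 that every
    # stakeholder in A strictly prefers to every outcome satisfying psi2.
    # prefers[a]: stakeholder a's strict preference relation as a set of
    # ordered index pairs (i, j), e.g. the transitive closure of the
    # improving flips induced by a CP-net.
    sat1 = [i for i, o in enumerate(OUTCOMES) if satisfies(o, psi1)]
    sat2 = [j for j, o in enumerate(OUTCOMES) if satisfies(o, psi2)]
    return [i for i in sat1
            if all((i, j) in prefers[a] for a in A for j in sat2)]

# Both stakeholders prefer outcome 7 (x=y=z=1) to outcome 0 (x=y=z=0).
prefs = {"alice": {(7, 0)}, "bob": {(7, 0)}}
print(answer_query({("x", 1)}, {("x", 0), ("y", 0), ("z", 0)},
                   ["alice", "bob"], prefs))  # -> [7]
```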
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
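As a rough illustration of the max-margin idea (not the authors' exact formulation), the sketch below finds criterion weights that maximize the margin by which preferred alternatives beat dispreferred ones; one-hot-encoded criterion levels keep per-criterion scores free to be non-monotonic. The LP setup and names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def fit_max_margin(pairs, n_features):
    # pairs: list of (phi_a, phi_b) with alternative a preferred to b;
    # encoding criterion levels one-hot lets learned marginal scores
    # be non-monotonic in the underlying criteria.
    # Maximize margin m s.t. w . (phi_a - phi_b) >= m, with w in [0, 1]
    # and sum(w) = 1 for identifiability. Variables: [w_1..w_n, m].
    c = np.zeros(n_features + 1)
    c[-1] = -1.0                      # linprog minimizes, so minimize -m
    A_ub, b_ub = [], []
    for phi_a, phi_b in pairs:
        row = np.zeros(n_features + 1)
        row[:n_features] = -(np.asarray(phi_a) - np.asarray(phi_b))
        row[-1] = 1.0                 # -w.(phi_a - phi_b) + m <= 0
        A_ub.append(row)
        b_ub.append(0.0)
    A_eq = [np.append(np.ones(n_features), 0.0)]
    bounds = [(0, 1)] * n_features + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[:n_features], res.x[-1]  # weights, achieved margin
```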
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Improving Context-Aware Preference Modeling for Language Models [62.32080105403915]
We consider the two-step preference modeling procedure that first resolves the under-specification by selecting a context, and then evaluates preference with respect to the chosen context.
We contribute context-conditioned preference datasets and experiments that investigate the ability of language models to evaluate context-specific preference.
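A minimal sketch of that two-step procedure, with pick_context and judge as hypothetical stand-ins for prompted LM calls:

```python
def two_step_preference(query, response_a, response_b, pick_context, judge):
    # Step 1: resolve the under-specified query by selecting an explicit
    # context. Step 2: evaluate preference conditioned on that context.
    # pick_context(query) -> str and judge(query, context, a, b) -> "a"|"b"
    # are hypothetical stand-ins for prompted language-model calls.
    context = pick_context(query)
    return judge(query, context, response_a, response_b)
```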
arXiv Detail & Related papers (2024-07-20T16:05:17Z)
- Not All Preference Pairs Are Created Equal: A Recipe for Annotation-Efficient Iterative Preference Learning [81.69044784288005]
Iterative preference learning requires online annotated preference labels.
We study strategies to select worth-annotating response pairs for cost-efficient annotation.
arXiv Detail & Related papers (2024-06-25T06:49:16Z)
- Representation of preferences for multiple criteria decision aiding in a new seven-valued logic [0.4849550522970841]
We show how the seven-valued logic can be used to represent preferences in the domain of Multiple Criteria Decision Aiding.
In particular, we propose new forms of outranking and value function preference models that aggregate multiple criteria taking into account imperfect preference information.
arXiv Detail & Related papers (2024-05-31T18:59:24Z)
- $Se^2$: Sequential Example Selection for In-Context Learning [83.17038582333716]
In-context learning (ICL) with large language models (LLMs) is driven by the demonstration examples provided in the prompt.
Prior work has extensively explored the selection of examples for ICL, predominantly following the "select then organize" paradigm.
In this paper, we formulate the problem as a $Se$quential $Se$lection problem and introduce $Se^2$, a sequential-aware method.
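To illustrate the sequential idea (the paper's actual search may be more elaborate), here is a greedy sketch that scores each candidate conditioned on the examples already selected; score_fn is a hypothetical stand-in:

```python
def sequential_select(candidates, query, k, score_fn):
    # Greedy sequential-aware selection: each step scores candidates
    # conditioned on the prefix already chosen, so ordering and
    # inter-example effects matter (unlike "select then organize").
    # score_fn(prefix, candidate, query) -> float is a hypothetical
    # stand-in for, e.g., an LM's conditional likelihood of the query
    # under the extended prompt.
    chosen, pool = [], list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda c: score_fn(chosen, c, query))
        chosen.append(best)
        pool.remove(best)
    return chosen
```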
arXiv Detail & Related papers (2024-02-21T15:35:04Z)
- Aligning Large Language Models by On-Policy Self-Judgment [49.31895979525054]
Existing approaches for aligning large language models with human preferences face a trade-off: on-policy learning requires a separate reward model (RM).
We present a novel alignment framework, SELF-JUDGE, that does on-policy learning and is parameter efficient.
We show that rejection sampling by itself can further improve performance without an additional evaluator.
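A minimal sketch of that judge-augmented rejection sampling, assuming a single model exposes both a sampling and a judging interface; both callables are hypothetical:

```python
def best_of_n(policy_sample, self_judge, prompt, n=8):
    # Draw n responses from the current policy and keep the one the
    # policy's own judging interface ranks highest; no separate reward
    # model is involved. policy_sample(prompt) -> str and
    # self_judge(prompt, response) -> float are hypothetical stand-ins
    # for one model used in two roles.
    responses = [policy_sample(prompt) for _ in range(n)]
    return max(responses, key=lambda r: self_judge(prompt, r))
```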
arXiv Detail & Related papers (2024-02-17T11:25:26Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses from a wide range of real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- Selection by Prediction with Conformal p-values [7.917044695538599]
We study screening procedures that aim to select candidates whose unobserved outcomes exceed user-specified values.
We develop a method that wraps around any prediction model to produce a subset of candidates while controlling the proportion of falsely selected units.
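A simplified sketch of such a wrapper, assuming one-sided conformal p-values computed against calibration nulls and Benjamini-Hochberg for the error control; function and argument names are assumptions:

```python
import numpy as np

def conformal_select(cal_null_scores, test_scores, alpha=0.1):
    # cal_null_scores: model scores on calibration points whose true
    # outcomes did NOT exceed the user-specified value (the "nulls").
    # test_scores: model scores on candidates (larger = more promising).
    cal = np.asarray(cal_null_scores)
    test = np.asarray(test_scores)
    n = len(cal)
    # Conformal p-value: how plausibly each candidate's score could have
    # arisen from the null calibration distribution.
    pvals = np.array([((cal >= t).sum() + 1) / (n + 1) for t in test])
    # Benjamini-Hochberg step-up on the p-values to control the
    # proportion of falsely selected candidates.
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    return order[:k]  # indices of selected candidates
```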
arXiv Detail & Related papers (2022-10-04T06:34:49Z)
- Fairness in the First Stage of Two-Stage Recommender Systems [28.537935838669423]
We investigate how to ensure fairness to the items in large-scale recommender systems.
Existing first-stage recommenders might select an irrecoverably unfair set of candidates.
We propose two threshold-policy selection rules that find near-optimal sets of candidates.
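One plausible (hypothetical) threshold rule in that spirit, keeping any item whose optimistic relevance estimate clears a threshold so that uncertain items are not prematurely dropped:

```python
import numpy as np

def threshold_policy(mean_rel, std_rel, tau, beta=1.0):
    # Hypothetical sketch: keep every item whose optimistic relevance
    # estimate (mean + beta * std) clears the threshold tau, so items
    # with uncertain, under-explored scores are not irrecoverably
    # filtered out before the second stage can see them.
    ucb = np.asarray(mean_rel) + beta * np.asarray(std_rel)
    return np.nonzero(ucb >= tau)[0]
```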
arXiv Detail & Related papers (2022-05-30T21:21:38Z)
- Analysing Mixed Initiatives and Search Strategies during Conversational Search [31.63357369175702]
We present a model for conversational search from which we instantiate different observed conversational search strategies, where the agent elicits feedback either before presenting results (Feedback-First) or after (Feedback-After).
Our analysis reveals that there is no superior or dominant combination; instead, query clarifications work better when asked first, while query suggestions work better when offered after presenting results.
arXiv Detail & Related papers (2021-09-13T13:30:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.