Thou Shalt not Pick all Items if Thou are First: of Strategyproof and
Fair Picking Sequences
- URL: http://arxiv.org/abs/2301.06086v1
- Date: Wed, 11 Jan 2023 13:04:51 GMT
- Title: Thou Shalt not Pick all Items if Thou are First: of Strategyproof and
Fair Picking Sequences
- Authors: Sylvain Bouveret, Hugo Gilbert, Jérôme Lang, Guillaume Méroué
- Abstract summary: We study how to balance priority in the sequence and number of items received.
For several meaningful choices of parameters, we show that the optimal sequence can be computed in a simple way.
- Score: 7.2834950390171205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When allocating indivisible items to agents, it is known that the only
strategyproof mechanisms that satisfy a set of rather mild conditions are
constrained serial dictatorships: given a fixed order over agents, at each step
the designated agent chooses a given number of items (depending on her position
in the sequence). With these rules, also known as non-interleaving picking
sequences, agents who come earlier in the sequence have a larger choice of
items. However, this advantage can be compensated by a higher number of items
received by those who come later. How to balance priority in the sequence and
number of items received is a nontrivial question. We use a previous model,
parameterized by a mapping from ranks to scores, a social welfare functional,
and a distribution over preference profiles. For several meaningful choices of
parameters, we show that the optimal sequence can be computed in polynomial
time. Last, we give a simple procedure for eliciting scoring vectors and we
study the impact of the assignment from agents to positions on the ex-post
social welfare.
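To make the model concrete, below is a minimal Python sketch (not the authors' implementation) of a non-interleaving picking sequence: agents act once, in a fixed order, and each sincerely takes her quota of best remaining items. The Borda-like scoring vector, the impartial-culture (uniformly random) preference profiles, and the brute-force search over quota vectors are illustrative assumptions, not choices made in the paper.

# Illustrative sketch only: non-interleaving picking sequences with a
# rank-to-score mapping and utilitarian welfare, estimated by Monte Carlo
# over uniformly random preference profiles (an assumption for this example).
import itertools
import random

def allocate(quotas, profile):
    """Sincere picking: agent i takes her quotas[i] best remaining items."""
    remaining = set(profile[0])              # all agents rank the same items
    bundles = []
    for ranking, quota in zip(profile, quotas):
        picks = [it for it in ranking if it in remaining][:quota]
        bundles.append(picks)
        remaining -= set(picks)
    return bundles

def utility(bundle, ranking, scores):
    """Score a bundle with the mapping from ranks to scores."""
    return sum(scores[ranking.index(it)] for it in bundle)

def expected_welfare(quotas, n_items, scores, samples=500, seed=0):
    """Monte-Carlo estimate of utilitarian welfare for a quota vector."""
    rng = random.Random(seed)
    items = list(range(n_items))
    total = 0.0
    for _ in range(samples):
        profile = [rng.sample(items, n_items) for _ in quotas]
        bundles = allocate(quotas, profile)
        total += sum(utility(b, r, scores) for b, r in zip(bundles, profile))
    return total / samples

if __name__ == "__main__":
    n_agents, n_items = 3, 6
    scores = [n_items - r for r in range(n_items)]    # Borda-like scores
    # Brute-force enumeration of quota vectors is fine at this toy size; the
    # paper's point is that for several parameter choices the optimal
    # sequence can be computed in polynomial time, without enumeration.
    candidates = (q for q in itertools.product(range(n_items + 1), repeat=n_agents)
                  if sum(q) == n_items)
    best = max(candidates, key=lambda q: expected_welfare(q, n_items, scores))
    print("Best quota vector for the fixed agent order:", best)

On this toy instance, the search trades the first agent's larger choice of items against larger bundles for the agents who come later, which is exactly the balance the abstract describes.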
Related papers
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
arXiv Detail & Related papers (2024-10-02T15:26:52Z) - Learning Submodular Sequencing from Samples [11.528995186765751]
This paper addresses the problem of selecting and ranking items in a sequence to optimize some composite submodular function.
We present an algorithm that achieves an approximation ratio dependent on the curvature of the individual submodular functions.
arXiv Detail & Related papers (2024-09-09T01:33:13Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Multi-Weight Ranking for Multi-Criteria Decision Making [0.0]
Cone distribution functions from statistics are turned into Multi-Criteria Decision Making tools.
The ranking functions are then extended to providing unary indicators for set preferences.
A potential application in machine learning is outlined.
arXiv Detail & Related papers (2023-12-04T11:13:42Z) - Pure Exploration under Mediators' Feedback [63.56002444692792]
Multi-armed bandits are a sequential-decision-making framework, where, at each interaction step, the learner selects an arm and observes a reward.
We consider the scenario in which the learner has access to a set of mediators, each of which selects the arms on the agent's behalf according to a possibly unknown policy.
We propose a sequential decision-making strategy for discovering the best arm under the assumption that the mediators' policies are known to the learner.
arXiv Detail & Related papers (2023-08-29T18:18:21Z) - The Choice of Noninformative Priors for Thompson Sampling in
Multiparameter Bandit Models [56.31310344616837]
Thompson sampling (TS) has been known for its outstanding empirical performance supported by theoretical guarantees across various reward models.
This study explores the impact of selecting noninformative priors, offering insights into the performance of TS when dealing with new models that lack theoretical understanding.
arXiv Detail & Related papers (2023-02-28T08:42:42Z) - Selection by Prediction with Conformal p-values [7.917044695538599]
We study screening procedures that aim to select candidates whose unobserved outcomes exceed user-specified values.
We develop a method that wraps around any prediction model to produce a subset of candidates while controlling the proportion of falsely selected units.
arXiv Detail & Related papers (2022-10-04T06:34:49Z) - Probabilistic Planning with Partially Ordered Preferences over Temporal
Goals [22.77805882908817]
We study planning in Markov decision processes (MDPs) with preferences over temporally extended goals.
We introduce a variant of deterministic finite automaton, referred to as a preference DFA, for specifying the user's preferences over temporally extended goals.
We prove that a weak-stochastic nondominated policy given the preference specification is optimal in the constructed multi-objective MDP.
arXiv Detail & Related papers (2022-09-25T17:13:24Z) - Diversified Recommendations for Agents with Adaptive Preferences [9.578114969867258]
We consider a problem where an Agent visits a platform recommending a menu of content to select from; their choice of item depends not only on fixed preferences, but also on their prior engagements with the platform.
The Recommender presents a menu of $k$ items to the Agent, who selects one item in the menu according to their unknown preference model.
The Recommender then observes the Agent's chosen item and receives bandit feedback of the item's reward.
In addition to optimizing reward from selected items, the Recommender must also ensure that the total distribution of chosen items has sufficiently high entropy.
arXiv Detail & Related papers (2022-09-20T16:12:22Z) - Reforming an Envy-Free Matching [3.615389896666528]
We consider the problem of reforming an envy-free matching when each agent is assigned a single item.
We study an operation that exchanges an agent's item with an unassigned item the agent prefers, such that the result is another envy-free matching.
arXiv Detail & Related papers (2022-07-06T13:03:49Z) - Probabilistic Permutation Graph Search: Black-Box Optimization for
Fairness in Ranking [53.94413894017409]
We present a novel way of representing permutation distributions, based on the notion of permutation graphs.
Similar to the Plackett-Luce (PL) model, our distribution representation, called PPG, can be used for black-box optimization of fairness.
arXiv Detail & Related papers (2022-04-28T20:38:34Z) - Learning over No-Preferred and Preferred Sequence of Items for Robust
Recommendation (Extended Abstract) [69.50145858681951]
We propose a theoretically supported sequential strategy for training a large-scale Recommender System (RS) over implicit feedback.
We present two variants of this strategy where model parameters are updated using either the momentum method or a gradient-based approach.
arXiv Detail & Related papers (2022-02-26T22:29:43Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) mechanism to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z) - Modeling Sequences as Distributions with Uncertainty for Sequential
Recommendation [63.77513071533095]
Most existing sequential methods assume users are deterministic.
Item-to-item transitions may fluctuate significantly across several item aspects and reflect the randomness of user interests.
We propose a Distribution-based Transformer Sequential Recommendation (DT4SR) which injects uncertainties into sequential modeling.
arXiv Detail & Related papers (2021-06-11T04:35:21Z) - Learning over no-Preferred and Preferred Sequence of items for Robust
Recommendation [66.8722561224499]
We propose a theoretically founded sequential strategy for training large-scale Recommender Systems (RS) over implicit feedback.
We present two variants of this strategy where model parameters are updated using either the momentum method or a gradient-based approach.
arXiv Detail & Related papers (2020-12-12T22:10:15Z) - Adaptive Cascade Submodular Maximization [19.29174615532181]
We study the cascade submodular problem under the adaptive setting.
Our objective is to identify the best sequence of selecting items so as to maximize the expected utility of the selected items.
arXiv Detail & Related papers (2020-07-07T16:21:56Z) - Ambiguity in Sequential Data: Predicting Uncertain Futures with
Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present a theoretical analysis of SetRank showing that the bound on the excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.