Probabilistic Serial Mechanism for Multi-Type Resource Allocation
- URL: http://arxiv.org/abs/2004.12062v1
- Date: Sat, 25 Apr 2020 05:38:06 GMT
- Title: Probabilistic Serial Mechanism for Multi-Type Resource Allocation
- Authors: Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao, Hanpin
Wang
- Abstract summary: We show that the existing multi-type probabilistic serial (MPS) mechanism satisfies the stronger efficiency notion of lexi-efficiency.
We also prove that MPS can be characterized both by leximin-optimality and by item-wise ordinal fairness.
- Score: 32.93710648011737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-type resource allocation (MTRA) problems, there are p $\ge$ 2 types
of items, and n agents, who each demand one unit of items of each type, and
have strict linear preferences over bundles consisting of one item of each
type. For MTRAs with indivisible items, our first result is an impossibility
theorem that is in direct contrast to the single type (p = 1) setting: No
mechanism, the output of which is always decomposable into a probability
distribution over discrete assignments (where no item is split between agents),
can satisfy both sd-efficiency and sd-envy-freeness. To circumvent this
impossibility result, we consider the natural assumption of lexicographic
preference, and provide an extension of the probabilistic serial (PS), called
lexicographic probabilistic serial (LexiPS). We prove that LexiPS satisfies
sd-efficiency and sd-envy-freeness, retaining the desirable properties of PS.
Moreover, LexiPS satisfies sd-weak-strategyproofness when agents are not
allowed to misreport their importance orders. For MTRAs with divisible items,
we show that the existing multi-type probabilistic serial (MPS) mechanism
satisfies the stronger efficiency notion of lexi-efficiency, and is
sd-envy-free under strict linear preferences, and sd-weak-strategyproof under
lexicographic preferences. We also prove that MPS can be characterized both by
leximin-optimality and by item-wise ordinal fairness, and that the family of eating
algorithms to which MPS belongs can be characterized by a no-generalized-cycle
condition.
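To ground the mechanisms discussed above, here is a minimal Python sketch of the classic single-type (p = 1) probabilistic serial "simultaneous eating" algorithm that LexiPS and MPS extend. The function name, the dictionary-based input format, and the example preferences are illustrative assumptions, not code from the paper.

```python
# Minimal sketch (assumed implementation, not the paper's code) of the
# single-type probabilistic serial mechanism: every agent eats probability
# mass of their favourite remaining item at unit speed until time 1.

def probabilistic_serial(preferences):
    """preferences[agent] = strict ranking of items, most preferred first.
    Assumes at least as many items as agents, each item with unit supply.
    Returns assignment[agent][item] = fraction of the item given to the agent."""
    items = {i for ranking in preferences.values() for i in ranking}
    remaining = {i: 1.0 for i in items}            # supply left per item
    assignment = {a: {i: 0.0 for i in items} for a in preferences}
    t = 0.0                                        # time eaten so far
    eps = 1e-12

    while t < 1.0 - eps:
        # Each agent targets their best item that still has supply.
        eaters = {}
        for a, ranking in preferences.items():
            best = next(i for i in ranking if remaining[i] > eps)
            eaters.setdefault(best, []).append(a)
        # Advance until some targeted item runs out or every agent is full.
        step = min(1.0 - t,
                   min(remaining[i] / len(ag) for i, ag in eaters.items()))
        for i, ag in eaters.items():
            for a in ag:
                assignment[a][i] += step
            remaining[i] -= step * len(ag)
        t += step
    return assignment

# Two agents with identical preferences each receive half of every item.
print(probabilistic_serial({"alice": ["x", "y"], "bob": ["x", "y"]}))
```

Roughly, LexiPS applies this eating procedure type by type following each agent's reported importance order, while MPS lets agents eat their most preferred available bundle across all types simultaneously; see the paper for the precise definitions.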
Related papers
- Bounded and Unbiased Composite Differential Privacy [25.427802467876248]
The objective of differential privacy (DP) is to protect privacy by producing an output distribution that is indistinguishable between two neighboring databases.
Existing solutions attempt to address this issue by employing post-processing or truncation techniques.
We propose a novel differentially private mechanism which uses a composite probability density function to generate bounded and unbiased outputs.
arXiv Detail & Related papers (2023-11-04T04:43:47Z) - Tuning Pre-trained Model via Moment Probing [62.445281364055795]
We propose a novel Moment Probing (MP) method to explore the potential of LP.
MP applies a linear classification head to the mean of the final features.
Our MP significantly outperforms LP and is competitive with counterparts at lower training cost.
arXiv Detail & Related papers (2023-07-21T04:15:02Z) - Double Pessimism is Provably Efficient for Distributionally Robust
Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage [15.858892479232656]
We study robust offline reinforcement learning (robust offline RL).
We propose a generic algorithm framework called Doubly Pessimistic Model-based Policy Optimization (P$^2$MPO).
We show that P$^2$MPO enjoys a $\tilde{\mathcal{O}}(n^{-1/2})$ convergence rate, where $n$ is the dataset size.
arXiv Detail & Related papers (2023-05-16T17:58:05Z) - Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z) - Learning Numeric Optimal Differentially Private Truncated Additive
Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise that falls monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z) - PSD Representations for Effective Probability Models [117.35298398434628]
We show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end.
We characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees.
Our results open the way to applications of PSD models to density estimation, decision theory and inference.
arXiv Detail & Related papers (2021-06-30T15:13:39Z) - Probabilistic Generating Circuits [50.98473654244851]
We propose probabilistic generating circuits (PGCs) for their efficient representation.
PGCs are not just a theoretical framework that unifies vastly different existing models, but also show huge potential in modeling realistic data.
We exhibit a simple class of PGCs that are not trivially subsumed by simple combinations of PCs and DPPs, and obtain competitive performance on a suite of density estimation benchmarks.
arXiv Detail & Related papers (2021-02-19T07:06:53Z) - A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From the revealed unified JMMD, we illustrate that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
arXiv Detail & Related papers (2021-01-25T09:46:14Z) - Information-theoretic Feature Selection via Tensor Decomposition and
Submodularity [38.05393186002834]
We introduce a low-rank tensor model of the joint PMF of all variables and indirect targeting as a way of mitigating complexity and maximizing the classification performance for a given number of features.
By indirectly aiming to predict the latent variable of the naive Bayes model instead of the original target variable, it is possible to formulate the feature selection problem as the maximization of a monotone submodular function subject to a cardinality constraint.
arXiv Detail & Related papers (2020-10-30T10:36:46Z) - Exact and Approximation Algorithms for Sparse PCA [1.7640556247739623]
This paper proposes two exact mixed-integer SDPs (MISDPs)
We then analyze the theoretical optimality gaps of their continuous relaxation values and prove that they are stronger than that of the state-of-the-art one.
Since off-the-shelf solvers, in general, have difficulty in solving MISDPs, we approximate SPCA with arbitrary accuracy by a mixed-integer linear program (MILP) of a similar size as MISDPs.
arXiv Detail & Related papers (2020-08-28T02:07:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.