Resource Allocation to Agents with Restrictions: Maximizing Likelihood
with Minimum Compromise
- URL: http://arxiv.org/abs/2209.05170v1
- Date: Mon, 12 Sep 2022 11:58:19 GMT
- Title: Resource Allocation to Agents with Restrictions: Maximizing Likelihood
with Minimum Compromise
- Authors: Yohai Trabelsi, Abhijin Adiga, Sarit Kraus, S.S. Ravi
- Abstract summary: We assume that a Principal chooses a maximum matching randomly so that each agent is matched to a resource with some probability.
Agents would like to improve their chances of being matched by modifying their restrictions within certain limits.
We experimentally evaluate our methods on synthetic datasets as well as on two novel real-world datasets.
- Score: 28.2469613376685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many scenarios where agents with restrictions compete for resources can be
cast as maximum matching problems on bipartite graphs. Our focus is on resource
allocation problems where agents may have restrictions that make them
incompatible with some resources. We assume that a Principal chooses a maximum
matching randomly so that each agent is matched to a resource with some
probability. Agents would like to improve their chances of being matched by
modifying their restrictions within certain limits. The Principal's goal is to
advise an unsatisfied agent to relax its restrictions so that the total cost of
relaxation is within a budget (chosen by the agent) and the increase in the
probability of being assigned a resource is maximized. We establish hardness
results for some variants of this budget-constrained maximization problem and
present algorithmic results for other variants. We experimentally evaluate our
methods on synthetic datasets as well as on two novel real-world datasets: a
vacation activities dataset and a classrooms dataset.
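The abstract leaves the matching distribution and the advising algorithms unspecified, so the following is only a minimal brute-force sketch, assuming the Principal draws a maximum matching uniformly at random over the bipartite compatibility graph. The function names (`all_matchings`, `match_probability`, `best_relaxation`), the toy graph, and the unit costs are hypothetical illustrations, not taken from the paper.

```python
from itertools import combinations


def all_matchings(edges):
    """Enumerate every matching (tuple of (agent, resource) edges) of a bipartite edge list."""
    def rec(start, used_agents, used_resources, current):
        yield tuple(current)
        for j in range(start, len(edges)):
            a, r = edges[j]
            if a not in used_agents and r not in used_resources:
                yield from rec(j + 1, used_agents | {a}, used_resources | {r},
                               current + [edges[j]])
    yield from rec(0, set(), set(), [])


def match_probability(edges, agent):
    """P(agent is covered) when a maximum matching is drawn uniformly at random (assumption)."""
    matchings = list(all_matchings(edges))
    best_size = max(len(m) for m in matchings)
    maximum = [m for m in matchings if len(m) == best_size]
    covered = sum(1 for m in maximum if any(a == agent for a, _ in m))
    return covered / len(maximum)


def best_relaxation(edges, agent, candidate_edges, costs, budget):
    """Brute-force the cost-feasible subset of relaxations (new agent-resource edges)
    that maximizes the gain in the agent's matching probability."""
    base = match_probability(edges, agent)
    best_gain, best_subset = 0.0, ()
    for k in range(len(candidate_edges) + 1):
        for subset in combinations(range(len(candidate_edges)), k):
            if sum(costs[i] for i in subset) > budget:
                continue
            relaxed = edges + [candidate_edges[i] for i in subset]
            gain = match_probability(relaxed, agent) - base
            if gain > best_gain:
                best_gain, best_subset = gain, subset
    return best_gain, [candidate_edges[i] for i in best_subset]


if __name__ == "__main__":
    # Toy instance: a1 and a2 both compete for r1; a2 is currently incompatible with r2.
    edges = [("a1", "r1"), ("a2", "r1")]
    gain, added = best_relaxation(edges, agent="a2",
                                  candidate_edges=[("a2", "r2")],
                                  costs=[1], budget=1)
    print(gain, added)  # 0.5 [('a2', 'r2')] under these toy assumptions
```

This enumeration is exponential and only meant to make the objective concrete; the paper's contribution lies in the hardness results and algorithms for variants of this budget-constrained maximization problem.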
Related papers
- Best Arm Identification with Resource Constraints [6.236743421605786]
We study the Best Arm Identification with Resource Constraints (BAIwRC) problem.
The agent aims to identify the best arm under resource constraints, where resources are consumed for each arm pull.
We design and analyze the Successive Halving with Resource Rationing algorithm (SH-RR).
arXiv Detail & Related papers (2024-02-29T12:17:54Z)
- Cooperative Thresholded Lasso for Sparse Linear Bandit [6.52540785559241]
We present a novel approach to address the multi-agent sparse contextual linear bandit problem.
It is the first algorithm that tackles row-wise distributed data in sparse linear bandits.
It is widely applicable to high-dimensional multi-agent problems where efficient feature extraction is critical for minimizing regret.
arXiv Detail & Related papers (2023-05-30T16:05:44Z)
- On the Complexity of Multi-Agent Decision Making: From Learning in Games to Partial Monitoring [105.13668993076801]
A central problem in the theory of multi-agent reinforcement learning (MARL) is to understand what structural conditions and algorithmic principles lead to sample-efficient learning guarantees.
We study this question in a general framework for interactive decision making with multiple agents.
We show that characterizing the statistical complexity for multi-agent decision making is equivalent to characterizing the statistical complexity of single-agent decision making.
arXiv Detail & Related papers (2023-05-01T06:46:22Z)
- Constrained Pure Exploration Multi-Armed Bandits with a Fixed Budget [4.226118870861363]
We consider a constrained, pure exploration, multi-armed bandit formulation under a fixed budget.
We propose an algorithm called Constrained-SR based on the Successive Rejects framework.
We show that the associated decay rate is nearly optimal relative to an information theoretic lower bound in certain special cases.
arXiv Detail & Related papers (2022-11-27T08:58:16Z)
- Quantization for decentralized learning under subspace constraints [61.59416703323886]
We consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints.
We propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates.
The analysis shows that, under some general conditions on the quantization noise, the strategy is stable both in terms of mean-square error and average bit rate.
arXiv Detail & Related papers (2022-09-16T09:38:38Z)
- Byzantine-Robust Online and Offline Distributed Reinforcement Learning [60.970950468309056]
We consider a distributed reinforcement learning setting where multiple agents explore the environment and communicate their experiences through a central server.
An $\alpha$-fraction of agents are adversarial and can report arbitrary fake information.
We seek to identify a near-optimal policy for the underlying Markov decision process in the presence of these adversarial agents.
arXiv Detail & Related papers (2022-06-01T00:44:53Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
- Present-Biased Optimization [8.775878711928591]
The paper extends the framework proposed by Akerlof (1991) for studying various aspects of human behavior related to time-inconsistent planning.
We show that the ratio between the cost of the solutions computed by present-biased agents and the cost of the optimal solutions may differ significantly depending on the problem constraints.
arXiv Detail & Related papers (2020-12-29T12:40:59Z)
- Coordinated Online Learning for Multi-Agent Systems with Coupled Constraints and Perturbed Utility Observations [91.02019381927236]
We introduce a novel method to steer the agents toward a stable population state, fulfilling the given resource constraints.
The proposed method is a decentralized resource pricing method based on the resource loads resulting from the augmentation of the game's Lagrangian.
arXiv Detail & Related papers (2020-10-21T10:11:17Z)