Nash Welfare and Facility Location
- URL: http://arxiv.org/abs/2310.04102v1
- Date: Fri, 6 Oct 2023 09:06:44 GMT
- Title: Nash Welfare and Facility Location
- Authors: Alexander Lam, Haris Aziz, Toby Walsh
- Abstract summary: We consider the problem of locating a facility to serve a set of agents located along a line.
The Nash welfare objective function, defined as the product of the agents' utilities, is known to provide a compromise between fairness and efficiency.
- Score: 82.81742334337336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of locating a facility to serve a set of agents
located along a line. The Nash welfare objective function, defined as the
product of the agents' utilities, is known to provide a compromise between
fairness and efficiency in resource allocation problems. We apply this welfare
notion to the facility location problem, converting individual costs to
utilities and analyzing the facility placement that maximizes the Nash welfare.
We give a polynomial-time approximation algorithm to compute this facility
location, and prove results suggesting that it achieves a good balance of
fairness and efficiency. Finally, we take a mechanism design perspective and
propose a strategy-proof mechanism with a bounded approximation ratio for Nash
welfare.
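The abstract defines Nash welfare as the product of the agents' utilities but does not state the exact cost-to-utility conversion the paper uses. As a rough illustration only, assume agents lie in [0, 1] and each agent's utility is 1 minus its distance to the facility; a brute-force grid search (not the paper's polynomial-time approximation algorithm) then locates the Nash-welfare-maximizing point:

```python
# Illustrative sketch only: the paper's actual algorithm and its
# cost-to-utility conversion are not given in this abstract.
# Assumption: agent positions lie in [0, 1] and u_i = 1 - |y - x_i|.

def nash_welfare(y, agents):
    """Product of agent utilities for a facility placed at y."""
    product = 1.0
    for x in agents:
        product *= 1.0 - abs(y - x)
    return product

def best_location(agents, steps=10_000):
    """Grid search over [0, 1] for the Nash-welfare-maximizing point."""
    candidates = (i / steps for i in range(steps + 1))
    return max(candidates, key=lambda y: nash_welfare(y, agents))

agents = [0.1, 0.4, 0.9]
y_star = best_location(agents)
```

Since the logarithm is monotone, maximizing the product is equivalent to maximizing the sum of log-utilities, which is how Nash welfare objectives are often optimized in practice.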
Related papers
- Value-oriented forecast reconciliation for renewables in electricity markets [0.0]
We propose a value-oriented forecast reconciliation approach that focuses on the forecast value for individual agents.
Specifically, we model this problem as a cooperative bargaining game, where each agent aims to optimize their own gain.
From an application perspective, we consider an aggregated wind energy trading problem, where profits are distributed using a weighted allocation rule.
arXiv Detail & Related papers (2025-01-27T14:35:39Z)
- Fair and Welfare-Efficient Constrained Multi-matchings under Uncertainty [17.364297444721057]
We study fair allocation of constrained resources, where a market designer optimizes overall welfare while maintaining group fairness.
In many large-scale settings, utilities are not known in advance, but are instead observed after realizing the allocation.
We discuss these trade-offs under two paradigms for preference modeling.
arXiv Detail & Related papers (2024-11-04T22:42:34Z)
- Facility Location Games with Scaling Effects [63.421996606381164]
We take the classic facility location problem and consider a variation in which each agent's individual cost function is equal to their distance from the facility multiplied by a scaling factor.
We focus on the objectives of total and maximum cost, describing the computation of the optimal solution.
We characterize the conditions on scaling functions which ensure that agents have single-peaked preferences.
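The scaled-cost model described above can be sketched directly. The constant per-agent scaling functions below are an assumption for illustration; the paper studies general scaling functions and characterizes when preferences remain single-peaked:

```python
# Sketch of the scaled-cost model: cost_i(y) = s_i(y) * |y - x_i|.
# Constant per-agent scaling is an illustrative assumption only.

def scaled_cost(y, x_i, scale=lambda y: 1.0):
    """Agent i's cost: distance to facility times a scaling factor."""
    return scale(y) * abs(y - x_i)

def total_cost(y, agents, scales):
    """Utilitarian objective: sum of scaled costs."""
    return sum(scaled_cost(y, x, s) for x, s in zip(agents, scales))

def max_cost(y, agents, scales):
    """Egalitarian objective: worst-off agent's scaled cost."""
    return max(scaled_cost(y, x, s) for x, s in zip(agents, scales))
```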
arXiv Detail & Related papers (2024-02-29T07:08:18Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- Welfare and Fairness in Multi-objective Reinforcement Learning [1.5763562007908967]
We study fair multi-objective reinforcement learning in which an agent must learn a policy that simultaneously achieves high reward on multiple dimensions.
We show that our algorithm is provably convergent, and we demonstrate experimentally that our approach outperforms techniques based on linear scalarization.
arXiv Detail & Related papers (2022-11-30T01:40:59Z)
- Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how iteration under a more standard notion of low inherent Bellman error, typically employed in least-square value-style algorithms, can provide strong PAC guarantees on learning a near optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
- Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDPs with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.