Learning Optimal Tax Design in Nonatomic Congestion Games
- URL: http://arxiv.org/abs/2402.07437v1
- Date: Mon, 12 Feb 2024 06:32:53 GMT
- Title: Learning Optimal Tax Design in Nonatomic Congestion Games
- Authors: Qiwen Cui, Maryam Fazel and Simon S. Du
- Abstract summary: We study how to learn the optimal tax design to maximize the efficiency in nonatomic congestion games.
It is known that self-interested behavior among the players can damage the system's efficiency.
- Score: 63.89699366726275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study how to learn the optimal tax design to maximize the efficiency in
nonatomic congestion games. It is known that self-interested behavior among the
players can damage the system's efficiency. Tax mechanisms are a common method
to alleviate this issue and induce socially optimal behavior. In this work, we
take the initial step for learning the optimal tax that can minimize the social
cost with \emph{equilibrium feedback}, i.e., the tax designer can only observe
the equilibrium state under the enforced tax. Existing algorithms are not
applicable due to the exponentially large tax function space, nonexistence of
the gradient, and nonconvexity of the objective. To tackle these challenges,
our algorithm leverages several novel components: (1) piece-wise linear tax to
approximate the optimal tax; (2) an extra linear term to guarantee a strongly
convex potential function; (3) efficient subroutine to find the ``boundary''
tax. The algorithm can find an $\epsilon$-optimal tax with $O(\beta
F^2/\epsilon)$ sample complexity, where $\beta$ is the smoothness of the cost
function and $F$ is the number of facilities.
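The setup above can be illustrated on the classic two-link Pigou network. This is a minimal sketch under illustrative assumptions (a scalar tax searched over a grid, not the paper's piece-wise linear construction): the designer enforces a tax, observes only the resulting equilibrium flow, and picks the tax minimizing the true social cost.

```python
# Hedged sketch: equilibrium-feedback tax search on a two-link Pigou
# network. Names and the grid search are illustrative, not the paper's
# algorithm. Link 1 has cost x, link 2 has constant cost 1; unit demand.
def equilibrium(tau):
    # Under tax tau on link 1, players equalize perceived costs:
    # x + tau = 1 when interior, else all mass goes to the cheaper link.
    return min(1.0, max(0.0, 1.0 - tau))  # flow on link 1

def social_cost(x):
    # True (untaxed) social cost: x * x on link 1, (1 - x) * 1 on link 2.
    return x * x + (1.0 - x)

# The designer only observes the equilibrium flow under each enforced tax.
best = min((social_cost(equilibrium(t / 100)), t / 100) for t in range(101))
# best == (0.75, 0.5): the marginal-cost tax tau = 0.5 recovers the
# socially optimal split with social cost 0.75 (untaxed equilibrium: 1.0).
```

The untaxed equilibrium routes all flow onto link 1 (social cost 1), while the marginal-cost tax induces the optimal half-half split.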
Related papers
- Learning to Control Unknown Strongly Monotone Games [16.327788209492805]
We propose a simple algorithm that learns to shift the NE of the game to meet the linear constraints by adjusting the controlled coefficients online.
We prove that our algorithm, which is based on two time-scale approximation, guarantees convergence with probability 1 to the set of NE that meet target linear constraints.
We demonstrate how our scheme can be applied to optimizing a global quadratic cost at NE and load balancing in resource allocation games.
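A toy version of the two time-scale idea can be sketched as follows, under illustrative assumptions (a quadratic strongly monotone game with closed-form best responses, not the paper's general setting): players run fast gradient play while a controller slowly adjusts linear cost coefficients so the Nash equilibrium meets a target.

```python
# Hedged sketch of a two time-scale scheme (toy instance, not the
# paper's exact algorithm). Player i minimizes 0.5 * x_i^2 + u_i * x_i,
# so the NE is x*(u) = -u; the controller steers it toward target b.
import numpy as np

b = np.array([0.3, -0.7])   # target equilibrium
u = np.zeros(2)             # controlled coefficients
x = np.zeros(2)             # players' joint action
for _ in range(5000):
    x -= 0.5 * (x + u)      # fast time scale: gradient play on own costs
    u += 0.01 * (x - b)     # slow time scale: nudge NE x*(u) = -u toward b
# At convergence u is approximately -b and x is approximately b.
```

The separation of step sizes (0.5 vs. 0.01) is what lets the controller treat the players as if they were already at equilibrium.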
arXiv Detail & Related papers (2024-06-30T03:33:42Z) - On Partial Optimal Transport: Revising the Infeasibility of Sinkhorn and
Efficient Gradient Methods [17.14725907264431]
This paper studies the Partial Optimal Transport (POT) problem between two unbalanced measures with at most $n$ supports.
We propose a novel rounding algorithm for POT, and then provide a feasible Sinkhorn procedure with a revised complexity of $\widetilde{\mathcal{O}}(n^2/\varepsilon^4)$.
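The core Sinkhorn scaling loop underlying this line of work can be sketched as follows. This is a plain balanced-OT sketch for illustration only; the paper's partial-transport setting additionally requires the rounding step for unmatched mass.

```python
# Hedged sketch: entropic Sinkhorn iterations for balanced optimal
# transport (illustrative; POT needs an extra rounding procedure).
import numpy as np

def sinkhorn(C, r, c, eps=0.1, iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel from cost matrix C
    u = np.ones_like(r)
    for _ in range(iters):
        v = c / (K.T @ u)                # scale to match column marginals
        u = r / (K @ v)                  # scale to match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

C = np.array([[0.0, 1.0], [1.0, 0.0]])
r = c = np.array([0.5, 0.5])
P = sinkhorn(C, r, c)
# Rows of P sum to r and columns to c (up to fixed-point tolerance),
# with mass concentrated on the cheap diagonal entries.
```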
arXiv Detail & Related papers (2023-12-21T15:56:09Z) - On the Potential and Limitations of Few-Shot In-Context Learning to
Generate Metamorphic Specifications for Tax Preparation Software [12.071874385139395]
Nearly 50% of taxpayers filed their individual income taxes using tax software in the U.S. in FY22.
This paper formulates the task of generating metamorphic specifications as a translation task between properties extracted from tax documents.
arXiv Detail & Related papers (2023-11-20T18:12:28Z) - Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the $\mathbf{g}$ solver, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
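The surrogate idea can be illustrated on a toy problem. This is a minimal sketch under illustrative assumptions (a quadratic polynomial fit in place of a neural network, rounding in place of a combinatorial solver): $f \circ \mathbf{g}$ is piecewise constant and gives no useful gradient, so we descend on a smooth fitted surrogate instead.

```python
# Hedged sketch of the landscape-surrogate idea (toy instance, not the
# paper's method). g rounds to a discrete "solver" output, so f(g(y))
# is piecewise constant; a smooth quadratic fit supplies gradients.
import numpy as np

g = lambda y: round(float(y))          # nondifferentiable solver stand-in
f = lambda z: (z - 3) ** 2             # downstream decision loss
ys = np.linspace(-1.0, 7.0, 41)
vals = np.array([f(g(y)) for y in ys])
coef = np.polyfit(ys, vals, 2)         # smooth surrogate of f∘g

y = 0.0
for _ in range(200):                   # gradient descent on the surrogate
    y -= 0.05 * (2 * coef[0] * y + coef[1])
# y approaches the surrogate minimizer, near the true optimum at 3.
```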
arXiv Detail & Related papers (2023-07-18T04:29:16Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress with no limit cycles, and to converge after an appropriate number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Almost Optimal Algorithms for Two-player Markov Games with Linear
Function Approximation [92.99933928528797]
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves.
We propose an algorithm Nash-UCRL-VTR based on the principle "Optimism-in-Face-of-Uncertainty".
We show that Nash-UCRL-VTR can provably achieve an $\widetilde{O}(dH\sqrt{T})$ regret, where $d$ is the linear function dimension.
arXiv Detail & Related papers (2021-02-15T09:09:16Z) - Revisiting Smoothed Online Learning [70.09792747315323]
We investigate the problem of smoothed online learning, in which the online learner suffers both a hitting cost and a switching cost.
To bound the competitive ratio, we assume the hitting cost is known to the learner in each round, and investigate the greedy algorithm which simply minimizes the weighted sum of the hitting cost and the switching cost.
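The greedy rule described above admits a closed form in a simple case. This is a hedged sketch under illustrative assumptions (one-dimensional quadratic hitting costs and a squared switching cost; the survey's setting is more general): each round the learner minimizes the weighted sum of the two costs.

```python
# Hedged sketch: greedy smoothed online learning (toy 1-D instance).
# Round t reveals hitting cost f_t(x) = a * (x - m)^2; the learner plays
#   x_t = argmin_x  a * (x - m)^2 + (lam / 2) * (x - x_prev)^2.
def greedy_step(x_prev, a, m, lam):
    # Setting the derivative 2a(x - m) + lam(x - x_prev) to zero gives:
    return (2 * a * m + lam * x_prev) / (2 * a + lam)

x = 0.0
for a, m in [(1.0, 1.0), (1.0, -1.0), (2.0, 0.5)]:
    x = greedy_step(x, a, m, lam=1.0)
# After three rounds x == 14/45: the switching penalty keeps the iterate
# from jumping all the way to each round's minimizer m.
```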
arXiv Detail & Related papers (2021-02-13T14:15:55Z) - Learning Zero-Sum Simultaneous-Move Markov Games Using Function
Approximation and Correlated Equilibrium [116.56359444619441]
We develop provably efficient reinforcement learning algorithms for two-player zero-sum finite-horizon Markov games.
In the offline setting, we control both players and aim to find the Nash Equilibrium by minimizing the duality gap.
In the online setting, we control a single player playing against an arbitrary opponent and aim to minimize the regret.
arXiv Detail & Related papers (2020-02-17T17:04:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.