Deep Contract Design via Discontinuous Networks
- URL: http://arxiv.org/abs/2307.02318v2
- Date: Fri, 27 Oct 2023 14:31:51 GMT
- Title: Deep Contract Design via Discontinuous Networks
- Authors: Tonghan Wang, Paul Dütting, Dmitry Ivanov, Inbal Talgam-Cohen, David C. Parkes
- Abstract summary: We introduce a novel representation: the Discontinuous ReLU (DeLU) network, which models the principal's utility as a discontinuous piecewise affine function of the design of a contract.
DeLU networks implicitly learn closed-form expressions for the incentive compatibility constraints of the agent and the utility objective of the principal.
We provide empirical results that demonstrate success in approximating the principal's utility function with a small number of training samples and scaling to find approximately optimal contracts on problems with a large number of actions and outcomes.
- Score: 23.293185030103544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contract design involves a principal who establishes contractual agreements
about payments for outcomes that arise from the actions of an agent. In this
paper, we initiate the study of deep learning for the automated design of
optimal contracts. We introduce a novel representation: the Discontinuous ReLU
(DeLU) network, which models the principal's utility as a discontinuous
piecewise affine function of the design of a contract where each piece
corresponds to the agent taking a particular action. DeLU networks implicitly
learn closed-form expressions for the incentive compatibility constraints of
the agent and the utility maximization objective of the principal, and support
parallel inference on each piece through linear programming or interior-point
methods that solve for optimal contracts. We provide empirical results that
demonstrate success in approximating the principal's utility function with a
small number of training samples and scaling to find approximately optimal
contracts on problems with a large number of actions and outcomes.
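The discontinuous piecewise-affine structure the abstract describes can be seen directly from the underlying principal-agent problem: each agent action defines one affine piece of the principal's utility in the contract, and the utility jumps wherever the agent's best response switches. The following is only an illustrative sketch of that ground-truth function on a made-up toy instance (the probability matrix `F`, costs `c`, and rewards `r` are hypothetical), not the paper's DeLU architecture:

```python
import numpy as np

# Hypothetical toy instance: 3 agent actions, 4 outcomes.
F = np.array([[0.7, 0.1, 0.1, 0.1],   # outcome distribution per action
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.1, 0.3, 0.5]])
c = np.array([0.0, 0.2, 0.5])          # cost of each agent action
r = np.array([0.0, 1.0, 2.0, 4.0])     # principal's reward per outcome

def principal_utility(t):
    """Principal's utility for contract t (payment per outcome).

    The agent best-responds by maximizing expected payment minus cost.
    Each action thus defines one affine piece F[a] @ (r - t) of the
    principal's utility, with discontinuous jumps at the boundaries
    where the agent's best response switches.
    """
    agent_values = F @ t - c           # agent's expected value per action
    a_star = int(np.argmax(agent_values))
    return F[a_star] @ (r - t)

# Usage: the zero contract leaves the agent on the lowest-cost action.
u0 = principal_utility(np.zeros(4))
```

A DeLU network, as the abstract describes it, learns such a function from samples, with one linear piece per induced action, so that each piece can then be optimized in parallel by linear programming.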
Related papers
- The Pseudo-Dimension of Contracts [8.710927418537908]
Algorithmic contract design studies scenarios where a principal incentivizes an agent to exert effort on her behalf.
In this work, we focus on settings where the agent's type is drawn from an unknown distribution, and propose an offline learning framework for learning near-optimal contracts from sampled agent types.
A central tool in our analysis is the notion of pseudo-dimension from statistical learning theory.
arXiv Detail & Related papers (2025-01-24T13:13:50Z) - Contractual Reinforcement Learning: Pulling Arms with Invisible Hands [68.77645200579181]
We propose a theoretical framework for aligning economic interests of different stakeholders in the online learning problems through contract design.
For the planning problem, we design an efficient dynamic programming algorithm to determine the optimal contracts against the far-sighted agent.
For the learning problem, we introduce a generic design of no-regret learning algorithms to untangle the challenges from robust design of contracts to the balance of exploration and exploitation.
arXiv Detail & Related papers (2024-07-01T16:53:00Z) - New Perspectives in Online Contract Design [2.296475290901356]
This work studies the repeated principal-agent problem from an online learning perspective.
The principal's goal is to learn the optimal contract that maximizes her utility through repeated interactions.
arXiv Detail & Related papers (2024-03-11T20:28:23Z) - Learning Optimal Contracts: How to Exploit Small Action Spaces [37.92189925462977]
We study principal-agent problems in which a principal commits to an outcome-dependent payment scheme.
We design an algorithm that learns an approximately-optimal contract with high probability.
It can also be employed to provide a $\tilde{\mathcal{O}}(T^{4/5})$ regret bound in the related online learning setting.
arXiv Detail & Related papers (2023-09-18T14:18:35Z) - Delegating Data Collection in Decentralized Machine Learning [67.0537668772372]
Motivated by the emergence of decentralized machine learning (ML) ecosystems, we study the delegation of data collection.
We design optimal and near-optimal contracts that deal with two fundamental information asymmetries.
We show that a principal can cope with such asymmetry via simple linear contracts that achieve a $1 - 1/e$ fraction of the optimal utility.
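A linear contract, as referenced above, pays the agent a fixed share of the realized reward. As a self-contained sketch on a made-up two-action, two-outcome instance (all numbers hypothetical, and the grid search is only a heuristic illustration, not the paper's method):

```python
import numpy as np

# Hypothetical toy instance: under a linear contract, the agent
# receives a fixed share alpha of the realized reward r.
F = np.array([[0.8, 0.2],     # outcome distribution per action
              [0.4, 0.6]])
c = np.array([0.0, 0.3])      # cost of each agent action
r = np.array([0.0, 2.0])      # principal's reward per outcome

def linear_contract_utility(alpha):
    # Agent best-responds to the per-outcome payment alpha * r;
    # the principal keeps the remaining (1 - alpha) share.
    a = int(np.argmax(F @ (alpha * r) - c))
    return (1 - alpha) * (F[a] @ r)

# Heuristic: search the share alpha on a grid for the best contract.
best_alpha = max(np.linspace(0, 1, 101), key=linear_contract_utility)
```

The single parameter `alpha` is what makes linear contracts "simple": the principal trades off a larger share (inducing costlier, more productive actions) against keeping less of the reward.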
arXiv Detail & Related papers (2023-09-04T22:16:35Z) - Delegated Classification [21.384062337682185]
We propose a theoretical framework for incentive-aware delegation of machine learning tasks.
We define budget-optimal contracts and prove they take a simple threshold form under reasonable assumptions.
Empirically, we demonstrate that budget-optimal contracts can be constructed using small-scale data.
arXiv Detail & Related papers (2023-06-20T11:59:03Z) - Learning to Incentivize Information Acquisition: Proper Scoring Rules Meet Principal-Agent Model [64.94131130042275]
We study the incentivized information acquisition problem, where a principal hires an agent to gather information on her behalf.
We design a provably sample efficient algorithm that tailors the UCB algorithm to our model.
Our algorithm features a delicate estimation procedure for the optimal profit of the principal, and a conservative correction scheme that ensures the desired agent's actions are incentivized.
arXiv Detail & Related papers (2023-03-15T13:40:16Z) - Semantic Information Marketing in The Metaverse: A Learning-Based Contract Theory Framework [68.8725783112254]
We address the problem of designing incentive mechanisms by a virtual service provider (VSP) to hire sensing IoT devices to sell their sensing data.
Due to the limited bandwidth, we propose to use semantic extraction algorithms to reduce the delivered data by the sensing IoT devices.
We propose a novel iterative contract design and use a new variant of multi-agent reinforcement learning (MARL) to solve the modelled multi-dimensional contract problem.
arXiv Detail & Related papers (2023-02-22T15:52:37Z) - Sequential Information Design: Markov Persuasion Process and Its
Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely the Markov persuasion processes (MPPs).
Planning in MPPs faces the unique challenge in finding a signaling policy that is simultaneously persuasive to the myopic receivers and inducing the optimal long-term cumulative utilities of the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z) - A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.