Desirable Effort Fairness and Optimality Trade-offs in Strategic Learning
- URL: http://arxiv.org/abs/2510.19098v1
- Date: Tue, 21 Oct 2025 21:43:20 GMT
- Title: Desirable Effort Fairness and Optimality Trade-offs in Strategic Learning
- Authors: Valia Efthymiou, Ekaterina Fedorova, Chara Podimata
- Abstract summary: We study how decision rules interact with agents who may strategically change their inputs/features to achieve better outcomes. We propose a unified model of principal-agent interaction that captures this trade-off.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Strategic learning studies how decision rules interact with agents who may strategically change their inputs/features to achieve better outcomes. Standard models assume that the decision-maker's sole goal is to learn a classifier that maximizes an objective (e.g., accuracy), assuming that agents best respond. However, real decision-making systems' goals do not align exclusively with producing good predictions. They may account for the external effects of inducing certain incentives, which makes changes in some features more desirable to the decision-maker than changes in others. Further, the principal may also need to incentivize desirable feature changes fairly across heterogeneous agents. How much does this constrained optimization (i.e., maximizing the objective while restricting agents' incentive disparity) cost the principal? We propose a unified model of principal-agent interaction that captures this trade-off under three additional components: (1) causal dependencies between features, such that changes in one feature affect others; (2) heterogeneous manipulation costs across agents; and (3) peer learning, through which agents infer the principal's rule. We provide theoretical guarantees on the principal's optimality loss under a given desirability-fairness tolerance for multiple broad classes of fairness measures. Finally, through experiments on real datasets, we show the explicit trade-off between maximizing accuracy and fairness in desirability effort.
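To make the strategic-learning setup concrete, the following is a minimal illustrative sketch (not taken from the paper): an agent best-responds to a known linear decision rule by shifting its features, trading the score gain against a quadratic manipulation cost. The function name, the quadratic cost form, and the per-agent cost scale `c` are all simplifying assumptions for illustration; the paper's actual model additionally includes causal feature dependencies and peer learning.

```python
import numpy as np

def best_response(x, theta, c):
    """Hypothetical agent best response to a linear rule theta.

    The agent picks a feature shift d maximizing
        theta . (x + d) - (c / 2) * ||d||^2.
    The first-order condition theta - c * d = 0 gives d = theta / c.
    """
    return x + theta / c

theta = np.array([1.0, 0.5])   # principal's (assumed known) linear rule
x = np.array([0.2, 0.1])       # agent's original features

low_cost = best_response(x, theta, c=1.0)    # cheap manipulation
high_cost = best_response(x, theta, c=10.0)  # expensive manipulation

# Agents with lower manipulation cost move much further in the direction
# of theta: this cost heterogeneity is the source of the incentive
# disparity that the paper's fairness constraint restricts.
print(low_cost, high_cost)
```

Under this toy model, the disparity in induced effort between the two agents scales with the gap in their cost parameters, which is exactly the quantity a desirability-fairness tolerance would bound.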
Related papers
- Prior preferences in active inference agents: soft, hard, and goal shaping [3.2776596620344285]
Active inference proposes expected free energy as an objective to balance exploitative and explorative drives in learning agents. We consider four possible ways of defining the preference distribution, providing the agents with either hard or soft goals. We show that goal shaping enables the best overall performance (i.e., it promotes exploitation) while sacrificing learning about the environment's transition dynamics.
arXiv Detail & Related papers (2025-12-02T23:07:24Z) - Learning to Lead: Incentivizing Strategic Agents in the Dark [50.93875404941184]
We study an online learning version of the generalized principal-agent model. We develop the first provably sample-efficient algorithm for this challenging setting. We establish a near-optimal $\tilde{O}(\sqrt{T})$ regret bound for learning the principal's optimal policy.
arXiv Detail & Related papers (2025-06-10T04:25:04Z) - Joint Scoring Rules: Zero-Sum Competition Avoids Performative Prediction [0.0]
In a decision-making scenario, a principal could use conditional predictions from an expert agent to inform their choice. An agent optimizing for predictive accuracy is incentivized to manipulate the principal towards more predictable actions, which prevents the principal from deterministically selecting their true preference. We demonstrate that this impossibility result can be overcome through the joint evaluation of multiple agents.
arXiv Detail & Related papers (2024-12-30T06:06:45Z) - Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users. We introduce true criticality as the expected drop in reward when an agent deviates from its policy for $n$ consecutive random actions. We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
arXiv Detail & Related papers (2024-09-26T21:00:45Z) - Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness. We propose two guiding principles that tackle the inherent conflict between the two metrics while reinforcing each other. Our approach adaptively enhances diversity by selecting clients based on their data distributions, thereby improving both model performance and fairness.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Learning under Imitative Strategic Behavior with Unforeseeable Outcomes [14.80947863438795]
We propose a Stackelberg game to model the interplay between individuals and the decision-maker.
We show that the objective difference between the two can be decomposed into three interpretable terms.
arXiv Detail & Related papers (2024-05-03T00:53:58Z) - Causal Strategic Learning with Competitive Selection [10.237954203296187]
We study the problem of agent selection in causal strategic learning under multiple decision makers.
We show that the optimal selection rule is a trade-off between selecting the best agents and providing incentives to maximise the agents' improvement.
We provide a cooperative protocol which all decision makers must collectively adopt to recover the true causal parameters.
arXiv Detail & Related papers (2023-08-30T18:43:11Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making [21.152377319502705]
We present an alternative framework for designing equitable algorithms.
In our approach, one first elicits stakeholder preferences over the space of possible decisions.
We then optimize over the space of decision policies, making trade-offs in a way that maximizes the elicited utility.
arXiv Detail & Related papers (2021-09-18T00:30:43Z) - End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z) - Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision [60.62434362997016]
We propose a differentiable training-framework to create models which output faithful rationales on a sentence level.
Our model solves the task based on each rationale individually and learns to assign high scores to those which solved the task best.
arXiv Detail & Related papers (2020-10-07T12:54:28Z) - VCG Mechanism Design with Unknown Agent Values under Stochastic Bandit Feedback [104.06766271716774]
We study a multi-round welfare-maximising mechanism design problem in instances where agents do not know their values.
We first define three notions of regret for the welfare, the individual utilities of each agent and that of the mechanism.
Our framework also provides flexibility to control the pricing scheme so as to trade-off between the agent and seller regrets.
arXiv Detail & Related papers (2020-04-19T18:00:58Z) - Causal Strategic Linear Regression [5.672132510411465]
In many predictive decision-making scenarios, such as credit scoring and academic testing, a decision-maker must construct a model that accounts for agents' propensity to "game" the decision rule.
We join concurrent work in modeling agents' outcomes as a function of their changeable attributes.
We provide efficient algorithms for learning decision rules that optimize three distinct decision-maker objectives.
arXiv Detail & Related papers (2020-02-24T03:57:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.