Integrative Experiments Identify How Punishment Impacts Welfare in Public Goods Games
- URL: http://arxiv.org/abs/2508.17151v1
- Date: Sat, 23 Aug 2025 22:07:27 GMT
- Title: Integrative Experiments Identify How Punishment Impacts Welfare in Public Goods Games
- Authors: Mohammed Alsobay, David G. Rand, Duncan J. Watts, Abdullah Almaatouq
- Abstract summary: We study how punishment's impact varies across cooperative settings through a large-scale integrative experiment. We developed models that outperformed human forecasters in predicting punishment outcomes in new experiments. Our results refocus the debate over punishment from whether or not it "works" to the specific conditions under which it does and does not work.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Punishment as a mechanism for promoting cooperation has been studied extensively for more than two decades, but its effectiveness remains a matter of dispute. Here, we examine how punishment's impact varies across cooperative settings through a large-scale integrative experiment. We vary 14 parameters that characterize public goods games, sampling 360 experimental conditions and collecting 147,618 decisions from 7,100 participants. Our results reveal striking heterogeneity in punishment effectiveness: while punishment consistently increases contributions, its impact on payoffs (i.e., efficiency) ranges from dramatically enhancing welfare (up to 43% improvement) to severely undermining it (up to 44% reduction) depending on the cooperative context. To characterize these patterns, we developed models that outperformed human forecasters (laypeople and domain experts) in predicting punishment outcomes in new experiments. Communication emerged as the most predictive feature, followed by contribution framing (opt-out vs. opt-in), contribution type (variable vs. all-or-nothing), game length (number of rounds), peer outcome visibility (whether participants can see others' earnings), and the availability of a reward mechanism. Interestingly, however, most of these features interact to influence punishment effectiveness rather than operating independently. For example, the extent to which longer games increase the effectiveness of punishment depends on whether groups can communicate. Together, our results refocus the debate over punishment from whether or not it "works" to the specific conditions under which it does and does not work. More broadly, our study demonstrates how integrative experiments can be combined with machine learning to uncover generalizable patterns, potentially involving interactions between multiple features, and help generate novel explanations in complex social phenomena.
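The payoff structure the abstract describes can be sketched in a few lines. The sketch below is a minimal illustration of one public-goods-game round with peer punishment; all parameter values (group size, endowment, multiplier, punishment cost and impact) are assumptions chosen for illustration, not the paper's actual 14-parameter design. It shows the mechanism behind the efficiency finding: punishment can raise contributions while the combined cost to punisher and target lowers total welfare.

```python
# Minimal sketch of one public-goods-game round with peer punishment.
# All parameter values here are illustrative assumptions, not the
# paper's experimental design.

def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Each player contributes part of an endowment; the pooled
    contributions are multiplied and shared equally. Returns
    pre-punishment payoffs."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, punishments, cost=1, impact=3):
    """punishments[i][j] = points player i spends punishing player j.
    Each point costs the punisher `cost` and reduces the target's
    payoff by `impact`, so total welfare falls by (cost + impact)
    per point spent."""
    out = list(payoffs)
    n = len(out)
    for i in range(n):
        for j in range(n):
            p = punishments[i][j]
            out[i] -= cost * p   # punisher pays to punish
            out[j] -= impact * p # target loses more than the punisher pays
    return out

# Example: two full contributors, one free-rider; player 1 spends
# 2 points punishing the free-rider (player 2).
contributions = [20, 20, 0]
base = public_goods_round(contributions)
punish = [[0, 0, 0], [0, 0, 2], [0, 0, 0]]
after = apply_punishment(base, punish)
print(base, after, sum(base) - sum(after))
```

Every punishment point destroys welfare directly; whether that loss is recouped depends on how much it raises future contributions, which is exactly the context-dependence the paper maps.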
Related papers
- Human Cognitive Biases in Explanation-Based Interaction: The Case of Within and Between Session Order Effect [46.80756527630539]
Explanatory Interactive Learning (XIL) is a powerful interactive learning framework designed to enable users to customize and correct AI models by interacting with their explanations. Recent studies have raised concerns that explanatory interaction may trigger order effects, a well-known cognitive bias in which the sequence of presented items influences users' trust and, critically, the quality of their feedback. To clarify the interplay between order effects and explanatory interaction, we ran two larger-scale user studies designed to mimic common XIL tasks.
arXiv Detail & Related papers (2025-12-04T12:59:54Z) - Rethinking Reward Miscalibration of GRPO in Agentic RL [18.495499496405635]
We show that outcome-based reward yields negative expected advantage for flawed intermediate steps. We propose training the actor to classify actions as good or bad, separating the embeddings of good and bad actions.
arXiv Detail & Related papers (2025-09-28T13:24:38Z) - The Disparate Benefits of Deep Ensembles [11.303233667605586]
Algorithmic fairness examines how a model's performance varies across socially relevant groups. Deep Ensembles unevenly favor different groups. Per-group differences in the predictive diversity of ensemble members can explain this effect.
arXiv Detail & Related papers (2024-10-17T17:53:01Z) - Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy. As predictions shape collective outcomes, social welfare arises naturally as a metric of concern. We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Explaining Knock-on Effects of Bias Mitigation [13.46387356280467]
In machine learning systems, bias mitigation approaches aim to make outcomes fairer across privileged and unprivileged groups.
In this paper, we aim to characterise impacted cohorts when mitigation interventions are applied.
We examine a range of bias mitigation strategies that work at various stages of the model life cycle.
We show that all tested mitigation strategies negatively impact a non-trivial fraction of cases, i.e., people who receive unfavourable outcomes solely on account of mitigation efforts.
arXiv Detail & Related papers (2023-12-01T18:40:37Z) - Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z) - Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite a perfect randomization between different groups, simultaneous experiments can interact with each other and create a negative impact on average population outcomes.
arXiv Detail & Related papers (2022-10-15T17:15:51Z) - What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
arXiv Detail & Related papers (2022-05-20T17:36:33Z) - Fair Machine Learning Under Partial Compliance [22.119168255562897]
We propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics.
Our key finding is that, at equilibrium, partial compliance (by k% of employers) can yield far less than proportional (k%) progress toward full-compliance outcomes.
arXiv Detail & Related papers (2020-11-07T01:46:53Z) - Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z) - Loss aversion fosters coordination among independent reinforcement learners [0.0]
We study which factors can accelerate the emergence of collaborative behaviours among independent, selfish learning agents.
We model two versions of the game with independent reinforcement learning agents.
We show experimentally that introducing loss aversion fosters cooperation by accelerating its appearance.
arXiv Detail & Related papers (2019-12-29T11:22:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.