Machine Learning Should Maximize Welfare, Not (Only) Accuracy
- URL: http://arxiv.org/abs/2502.11981v1
- Date: Mon, 17 Feb 2025 16:22:46 GMT
- Title: Machine Learning Should Maximize Welfare, Not (Only) Accuracy
- Authors: Nir Rosenfeld, Haifeng Xu
- Abstract summary: We argue that machine learning is currently missing, and can gain much from incorporating, a proper notion of social welfare.
Rather than disposing of prediction, we aim to leverage this forte of machine learning for promoting social welfare.
- Score: 43.42518176927683
- Abstract: Decades of research in machine learning have given us powerful tools for making accurate predictions. But when used in social settings and on human inputs, better accuracy does not immediately translate to better social outcomes. This may not be surprising given that conventional learning frameworks are not designed to express societal preferences -- let alone promote them. This position paper argues that machine learning is currently missing, and can gain much from incorporating, a proper notion of social welfare. The field of welfare economics asks: how should we allocate limited resources to self-interested agents in a way that maximizes social benefit? We argue that this perspective applies to many modern applications of machine learning in social contexts, and advocate for its adoption. Rather than disposing of prediction, we aim to leverage this forte of machine learning for promoting social welfare. We demonstrate this idea by proposing a conceptual framework that gradually transitions from accuracy maximization (with awareness of welfare) to welfare maximization (via accurate prediction). We detail applications and use cases for which our framework can be effective, identify technical challenges and practical opportunities, and highlight future avenues worth pursuing.
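To make the proposed transition concrete, here is a minimal sketch of what a blended training objective could look like. The interpolation weight `lam`, the utilitarian welfare term, and all names are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def accuracy_loss(y_true, y_prob):
    # Conventional objective: binary cross-entropy on predicted probabilities.
    eps = 1e-12
    return -float(np.mean(y_true * np.log(y_prob + eps)
                          + (1 - y_true) * np.log(1 - y_prob + eps)))

def utilitarian_welfare(decisions, utilities):
    # Utility that self-interested agents derive from positive decisions.
    return float(np.mean(decisions * utilities))

def blended_objective(y_true, y_prob, utilities, lam=0.5, threshold=0.5):
    """lam = 0 recovers pure accuracy maximization; lam = 1 scores a
    predictor only by the welfare its induced decisions generate."""
    decisions = (y_prob >= threshold).astype(float)
    return ((1 - lam) * accuracy_loss(y_true, y_prob)
            - lam * utilitarian_welfare(decisions, utilities))

# Toy check: two agents with known labels, predictions, and utilities.
y = np.array([1.0, 0.0])
p = np.array([0.9, 0.2])
u = np.array([1.0, 0.5])
print(blended_objective(y, p, u, lam=0.8))
```

Intermediate values of `lam` correspond to the paper's gradual transition: accuracy maximization with awareness of welfare at small `lam`, welfare maximization via accurate prediction at large `lam`.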
Related papers
- AI and Social Theory [0.0]
We sketch a programme for AI driven social theory, starting by defining what we mean by artificial intelligence (AI)
We then lay out our model for how AI based models can draw on the growing availability of digital data to help test the validity of different social theories based on their predictive power.
arXiv Detail & Related papers (2024-07-07T12:26:16Z)
- Social Skill Training with Large Language Models [65.40795606463101]
People rely on social skills like conflict resolution to communicate effectively and to thrive in both work and personal life.
This perspective paper identifies social skill barriers to entering specialized fields.
We present a solution that leverages large language models for social skill training via a generic framework.
arXiv Detail & Related papers (2024-04-05T16:29:58Z)
- The Relative Value of Prediction in Algorithmic Decision Making [0.0]
We ask: What is the relative value of prediction in algorithmic decision making?
We identify simple, sharp conditions determining the relative value of prediction vis-a-vis expanding access.
We illustrate how these theoretical insights may be used to guide the design of algorithmic decision making systems in practice.
arXiv Detail & Related papers (2023-12-13T20:52:45Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations in both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze, through a causal lens, the notion of benefit, which captures how much a specific individual would benefit from a positive decision (a generic estimation sketch follows below).
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
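The benefit notion above is left abstract in the summary. Under standard causal assumptions (e.g., no unobserved confounding), one common way to estimate an individual's benefit from a positive decision is a T-learner-style contrast of outcome models; the synthetic data and model choice below are illustrative assumptions, not the paper's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: features X, binary decision D, binary outcome Y,
# where a positive decision genuinely raises the chance of a good outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
D = rng.integers(0, 2, size=500)
Y = ((X[:, 0] + 0.8 * D + rng.normal(scale=0.5, size=500)) > 0).astype(int)

m1 = LogisticRegression().fit(X[D == 1], Y[D == 1])  # outcome model given D=1
m0 = LogisticRegression().fit(X[D == 0], Y[D == 0])  # outcome model given D=0

# Estimated benefit: P(Y=1 | X, do(D=1)) - P(Y=1 | X, do(D=0)) per individual.
benefit = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
print(benefit[:5].round(3))
```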
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations; a generic sketch follows below.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
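The linear program itself is not spelled out in the summary above; the sketch below shows one plausible instantiation for a single slot, with a hypothetical fairness floor on each candidate's allocation probability (the utilities, the floor, and the setup are all assumptions):

```python
import numpy as np
from scipy.optimize import linprog

utilities = np.array([0.9, 0.7, 0.6, 0.4])  # estimated merit of each candidate
floor = 0.1  # fairness floor: everyone receives at least 10% probability

n = len(utilities)
result = linprog(
    c=-utilities,                       # linprog minimizes, so negate utility
    A_eq=np.ones((1, n)), b_eq=[1.0],   # allocation probabilities sum to one
    bounds=[(floor, 1.0)] * n,          # fairness floor on every candidate
    method="highs",
)
allocation = result.x  # fair, utility-maximizing distribution over candidates
print(allocation.round(3))  # concentrates residual mass on the highest merit
```

The fairness floor here is a stand-in for the paper's axiomatized individual-fairness constraints, which account for uncertainty in the merits.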
- Socially-Optimal Mechanism Design for Incentivized Online Learning [32.55657244414989]
Multi-arm bandit (MAB) is a classic online learning framework for sequential decision-making in an uncertain environment (a generic bandit sketch appears below).
This setting arises in many practical applications, such as spectrum sharing, crowdsensing, and edge computing.
This paper establishes the incentivized online learning (IOL) framework for this scenario.
arXiv Detail & Related papers (2021-12-29T00:21:40Z)
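The incentive mechanism is beyond the scope of this summary, but a minimal UCB1 loop illustrates the underlying MAB framework; the environment, reward model, and names are illustrative assumptions:

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """Generic UCB1: pull(arm) returns a stochastic reward in [0, 1]."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(means + bonus))  # optimism under uncertainty
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return means, counts

# Toy environment: three channels (e.g. spectrum bands) of unknown quality.
true_rates = [0.3, 0.5, 0.8]
rng = np.random.default_rng(0)
means, counts = ucb1(lambda a: float(rng.random() < true_rates[a]),
                     n_arms=3, horizon=2000)
print(counts)  # the best channel should dominate the pull counts
```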
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximizing predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Envisioning Communities: A Participatory Approach Towards AI for Social Good [10.504838259488844]
We argue that AI for social good ought to be assessed by the communities that the AI system will impact.
We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research.
arXiv Detail & Related papers (2021-05-04T21:40:04Z)
- The Use and Misuse of Counterfactuals in Ethical Machine Learning [2.28438857884398]
We argue for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender.
We conclude that the counterfactual approach in machine learning fairness and social explainability can require an incoherent theory of what social categories are.
arXiv Detail & Related papers (2021-02-09T19:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.