Local Justice and the Algorithmic Allocation of Societal Resources
- URL: http://arxiv.org/abs/2112.01236v1
- Date: Wed, 10 Nov 2021 18:58:08 GMT
- Title: Local Justice and the Algorithmic Allocation of Societal Resources
- Authors: Sanmay Das
- Abstract summary: AI is increasingly used to aid decision-making about the allocation of scarce societal resources.
This paper lays out possible roles and opportunities for AI in this domain.
It argues for a closer engagement with the political philosophy literature on local justice.
- Score: 12.335698325757491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI is increasingly used to aid decision-making about the allocation of scarce
societal resources, for example housing for homeless people, organs for
transplantation, and food donations. Recently, there have been several
proposals for how to design objectives for these systems that attempt to
achieve some combination of fairness, efficiency, incentive compatibility, and
satisfactory aggregation of stakeholder preferences. This paper lays out
possible roles and opportunities for AI in this domain, arguing for a closer
engagement with the political philosophy literature on local justice, which
provides a framework for thinking about how societies have over time framed
objectives for such allocation problems. It also discusses how we may be able
to integrate into this framework the opportunities and risks opened up by the
ubiquity of data and the availability of algorithms that can use them to make
accurate predictions about the future.
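The abstract's point about objectives that combine efficiency (predicted benefit) with fairness in allocating a scarce resource can be made concrete with a small sketch. The following is a minimal, hypothetical illustration only: the Applicant class, the two-phase greedy rule, and the min_share_per_group floor are assumptions introduced here, not the paper's own model or any of the cited proposals.

```python
# Hypothetical sketch: allocate a fixed number of resource units (e.g., housing
# slots) by predicted individual benefit, subject to a simple group-level floor.
# Everything here is illustrative, not a method from the paper.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    group: str                # policy-relevant group label
    predicted_benefit: float  # model-estimated gain from receiving the resource

def allocate(applicants, capacity, min_share_per_group):
    """Greedy two-phase rule: first satisfy each group's minimum share of slots,
    then spend remaining capacity on the highest predicted benefit overall.
    Assumes the group floors fit within the total capacity."""
    chosen = []
    remaining = list(applicants)

    # Phase 1: fairness floor -- reserve each group's minimum share of slots.
    for group in {a.group for a in applicants}:
        quota = int(min_share_per_group * capacity)
        in_group = sorted((a for a in remaining if a.group == group),
                          key=lambda a: a.predicted_benefit, reverse=True)
        for a in in_group[:quota]:
            chosen.append(a)
            remaining.remove(a)

    # Phase 2: efficiency -- fill leftover capacity by predicted benefit alone.
    remaining.sort(key=lambda a: a.predicted_benefit, reverse=True)
    chosen.extend(remaining[:max(0, capacity - len(chosen))])
    return chosen

if __name__ == "__main__":
    pool = [Applicant("a1", "A", 0.9), Applicant("a2", "A", 0.8),
            Applicant("b1", "B", 0.5), Applicant("b2", "B", 0.4)]
    for a in allocate(pool, capacity=2, min_share_per_group=0.5):
        print(a.name, a.group, a.predicted_benefit)
```

Under these toy numbers, a purely benefit-maximizing rule would give both slots to group A; the group floor trades some predicted benefit for representation, which is the kind of objective-design choice the paper argues should be informed by the local justice literature rather than fixed by the algorithm designer alone.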
Related papers
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI).
This paper explores potential areas where statisticians can make important contributions to the development of LLMs.
We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z)
- Towards Responsible Governing AI Proliferation [0.0]
The paper introduces the 'Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models.
It posits that these developments are probable and likely to introduce both benefits and novel risks.
arXiv Detail & Related papers (2024-12-18T13:10:35Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Achievement and Fragility of Long-term Equitability [3.04585143845864]
We investigate how to allocate limited resources to locally interacting communities in a way that maximizes a notion of equitability.
We employ recent mathematical tools stemming from data-driven feedback online optimization.
We design dynamic policies that converge to an allocation that maximizes equitability in the long term.
arXiv Detail & Related papers (2022-06-24T15:04:49Z)
- Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms [2.5372245630249632]
We show how injustices materialize for stakeholders across three algorithmic stages in the misinformation detection pipeline.
This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with algorithmic misinformation detection.
arXiv Detail & Related papers (2022-04-28T15:31:13Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners or policymakers great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
- Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? [0.0]
We show that metrics implementing equality of opportunity (see the short sketch after this list) only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
arXiv Detail & Related papers (2021-05-04T12:09:26Z)
- Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
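The equality-of-opportunity metric discussed in the "Distributive Justice and Fairness Metrics in Automated Decision-making" entry above compares true-positive rates across groups. The sketch below is a minimal illustration of that comparison, assuming binary labels and predictions; the function names and the toy data are hypothetical and not taken from any of the listed papers.

```python
# Hypothetical sketch: equality of opportunity as the gap in true-positive rates
# across groups. Toy data only; not code from any listed paper.
def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the model predicts as positive."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return float("nan")
    return sum(p for _, p in positives) / len(positives)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true-positive rate between any two groups;
    a gap of 0 means equality of opportunity holds exactly on this data."""
    by_group = {}
    for t, p, g in zip(y_true, y_pred, groups):
        by_group.setdefault(g, ([], []))
        by_group[g][0].append(t)
        by_group[g][1].append(p)
    rates = [true_positive_rate(ts, ps) for ts, ps in by_group.values()]
    return max(rates) - min(rates)

if __name__ == "__main__":
    y_true = [1, 1, 0, 1, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 1]
    groups = ["A", "A", "A", "B", "B", "B"]
    # Group A TPR = 0.5, group B TPR = 1.0, so the gap is 0.5.
    print(equal_opportunity_gap(y_true, y_pred, groups))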
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.