Participatory Problem Formulation for Fairer Machine Learning Through
Community Based System Dynamics
- URL: http://arxiv.org/abs/2005.07572v3
- Date: Fri, 22 May 2020 13:57:28 GMT
- Title: Participatory Problem Formulation for Fairer Machine Learning Through
Community Based System Dynamics
- Authors: Donald Martin Jr. (1), Vinodkumar Prabhakaran (1), Jill Kuhlberg (2),
Andrew Smart (1), William S. Isaac (3) ((1) Google (2) System Stars (3)
DeepMind)
- Abstract summary: The problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes.
Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders.
We introduce community based system dynamics (CBSD) as an approach to enable the participation of typically excluded stakeholders in the problem formulation phase of the ML system development process.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on algorithmic fairness has highlighted that the problem
formulation phase of ML system development can be a key source of bias that has
significant downstream impacts on ML system fairness outcomes. However, very
little attention has been paid to methods for improving the fairness efficacy
of this critical phase of ML system development. Current practice neither
accounts for the dynamic complexity of high-stakes domains nor incorporates the
perspectives of vulnerable stakeholders. In this paper we introduce community
based system dynamics (CBSD) as an approach to enable the participation of
typically excluded stakeholders in the problem formulation phase of the ML
system development process and facilitate the deep problem understanding
required to mitigate bias during this crucial stage.
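The abstract argues that problem formulation must account for the dynamic complexity of high-stakes domains, including feedback between a deployed system and the data it later learns from. As a purely illustrative sketch (not from the paper), the toy model below shows the kind of reinforcing feedback loop that system dynamics modeling is meant to make explicit to stakeholders; all variable names and parameter values are hypothetical.

```python
# Hypothetical discrete-time stock-and-flow model of a reinforcing
# feedback loop: recorded incidents drive next period's resource
# allocation, which in turn inflates recorded incidents, independent
# of the underlying base rate. Nothing here comes from the paper.

def simulate_feedback(steps=10, base_incidents=10.0, feedback_gain=0.3):
    """Return the trajectory of recorded incidents over time."""
    recorded = base_incidents      # stock: the signal the ML system "sees"
    history = []
    for _ in range(steps):
        allocation = recorded      # allocation proportional to recorded data
        # flow: more allocation -> more recorded incidents
        recorded = base_incidents + feedback_gain * allocation
        history.append(recorded)
    return history

traj = simulate_feedback()
# The trajectory converges to base/(1 - gain) = 10/0.7 ~= 14.3,
# roughly 43% above the true base rate of 10.
print(traj)
```

The point of such a diagram-turned-simulation in a CBSD session is not prediction but shared understanding: stakeholders can see that the measured quantity settles above the true base rate purely because of the loop structure, before any model is trained.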
Related papers
- Towards a Science of Collective AI: LLM-based Multi-Agent Systems Need a Transition from Blind Trial-and-Error to Rigorous Science [70.3658845234978]
Large Language Models (LLMs) have greatly extended the capabilities of Multi-Agent Systems (MAS). Despite this rapid progress, the field still relies heavily on empirical trial-and-error. This bottleneck stems from the ambiguity of attribution. We propose a factor attribution paradigm to systematically identify collaboration-driving factors.
arXiv Detail & Related papers (2026-02-05T04:19:52Z)
- On the Societal Impact of Machine Learning [8.200229795326445]
This thesis investigates the societal impact of machine learning (ML). ML increasingly informs consequential decisions and recommendations, significantly affecting many aspects of our lives. As these data-driven systems are often developed without explicit fairness considerations, they carry the risk of discriminatory effects.
arXiv Detail & Related papers (2025-10-27T17:59:48Z)
- Discrete Tokenization for Multimodal LLMs: A Comprehensive Survey [69.45421620616486]
This work presents the first structured taxonomy and analysis of discrete tokenization methods designed for large language models (LLMs). We categorize 8 representative VQ variants that span classical and modern paradigms and analyze their algorithmic principles, training dynamics, and integration challenges with LLM pipelines. We identify key challenges including codebook collapse, unstable gradient estimation, and modality-specific encoding constraints.
arXiv Detail & Related papers (2025-07-21T10:52:14Z)
- Fairness Research For Machine Learning Should Integrate Societal Considerations [5.6168844664788855]
We argue that the significance of properly defined fairness measures remains underestimated. One reason is that detecting discrimination is critical given the widespread deployment of ML systems. Human-AI feedback loops amplify biases, even when only small social and political biases persist.
arXiv Detail & Related papers (2025-06-14T15:54:45Z)
- A Call for New Recipes to Enhance Spatial Reasoning in MLLMs [85.67171333213301]
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in general vision-language tasks.
Recent studies have exposed critical limitations in their spatial reasoning capabilities.
This deficiency in spatial reasoning significantly constrains MLLMs' ability to interact effectively with the physical world.
arXiv Detail & Related papers (2025-04-21T11:48:39Z)
- Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models [91.24296813969003]
This paper advocates integrating causal methods into machine learning to navigate the trade-offs among key principles of trustworthy ML.
We argue that a causal approach is essential for balancing multiple competing objectives in both trustworthy ML and foundation models.
arXiv Detail & Related papers (2025-02-28T14:57:33Z)
- MAFE: Multi-Agent Fair Environments for Decision-Making Systems [30.91792275900066]
We introduce the concept of a Multi-Agent Fair Environment (MAFE) and present and analyze three MAFEs that model distinct social systems.
Experimental results demonstrate the utility of our MAFEs as testbeds for developing multi-agent fair algorithms.
arXiv Detail & Related papers (2025-02-25T04:03:50Z)
- Position: Towards a Responsible LLM-empowered Multi-Agent Systems [22.905804138387854]
The rise of Agent AI and Large Language Model-powered Multi-Agent Systems (LLM-MAS) has underscored the need for responsible and dependable system operation.
These advancements introduce critical challenges: LLM agents exhibit inherent unpredictability, and uncertainties in their outputs can compound, threatening system stability.
To address these risks, a human-centered design approach with active dynamic moderation is essential.
arXiv Detail & Related papers (2025-02-03T16:04:30Z)
- Model-free learning of probability flows: Elucidating the nonequilibrium dynamics of flocking [15.238808518078567]
High dimensionality of the phase space renders traditional computational techniques infeasible for estimating the entropy production rate.
We derive a new physical connection between the probability current and two local definitions of the EPR for inertial systems.
Our results highlight that entropy is consumed on the spatial interface of a flock as the interplay between alignment and fluctuation dynamically creates and annihilates order.
arXiv Detail & Related papers (2024-11-21T17:08:06Z)
- Distribution-Aware Compensation Design for Sustainable Data Rights in Machine Learning [6.322978909154803]
We propose an innovative mechanism that views this challenge through the lens of game theory.
Our approach quantifies the ripple effects of data removal through a comprehensive analytical model.
We establish mathematical foundations for measuring participant utility and system outcomes.
arXiv Detail & Related papers (2024-10-19T09:04:13Z)
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders [0.0]
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML).
It proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development.
arXiv Detail & Related papers (2023-03-22T20:58:32Z)
- Concrete Safety for ML Problems: System Safety for ML Development and Assessment [0.758305251912708]
Concerns of trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements.
Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems.
arXiv Detail & Related papers (2023-02-06T18:02:07Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Contingency-Aware Influence Maximization: A Reinforcement Learning Approach [52.109536198330126]
The influence maximization (IM) problem aims at finding a subset of seed nodes in a social network that maximizes the spread of influence.
In this study, we focus on a sub-class of IM problems, where whether the nodes are willing to be the seeds when being invited is uncertain, called contingency-aware IM.
Despite the initial success, a major practical obstacle in promoting the solutions to more communities is the tremendous runtime of the greedy algorithms.
arXiv Detail & Related papers (2021-06-13T16:42:22Z)
- An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings [13.037143215464132]
We investigate the performance of several methods that operate at different points in the machine learning pipeline across four real-world public policy and social good problems.
We find a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but post-processing by choosing group-specific score thresholds consistently removes disparities.
arXiv Detail & Related papers (2021-05-13T17:33:28Z)
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms demonstrate better results in the generalization performance of learning models in agents compared to the conventional FL algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.