Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs
- URL: http://arxiv.org/abs/2403.04226v2
- Date: Fri, 20 Dec 2024 19:22:27 GMT
- Title: Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs
- Authors: Sina Fazelpour
- Abstract summary: Two prominent trade-offs in artificial intelligence are between predictive accuracy and fairness, and between predictive accuracy and interpretability.
The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values.
I introduce a sociotechnical approach to examining the value implications of these trade-offs.
- Abstract: This paper examines two prominent formal trade-offs in artificial intelligence (AI) -- between predictive accuracy and fairness, and between predictive accuracy and interpretability. These trade-offs have become a central focus in normative and regulatory discussions as policymakers seek to understand the value tensions that can arise in the social adoption of AI tools. The prevailing interpretation views these formal trade-offs as directly corresponding to tensions between underlying social values, implying unavoidable conflicts between those social objectives. In this paper, I challenge that prevalent interpretation by introducing a sociotechnical approach to examining the value implications of trade-offs. Specifically, I identify three key considerations -- validity and instrumental relevance, compositionality, and dynamics -- for contextualizing and characterizing these implications. These considerations reveal that the relationship between model trade-offs and corresponding values depends on critical choices and assumptions. Crucially, judicious sacrifices in one model property for another can, in fact, promote both sets of corresponding values. The proposed sociotechnical perspective thus shows that we can and should aspire to higher epistemic and ethical possibilities than the prevalent interpretation suggests, while offering practical guidance for achieving those outcomes. Finally, I draw out the broader implications of this perspective for AI design and governance, highlighting the need to broaden normative engagement across the AI lifecycle, develop legal and auditing tools sensitive to sociotechnical considerations, and rethink the vital role and appropriate structure of interdisciplinary collaboration in fostering a responsible AI workforce.
Related papers
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Queering the ethics of AI [0.6993026261767287]
The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination.
The chapter argues that a critical examination of the conception of equality that often underpins non-discrimination law is necessary.
arXiv Detail & Related papers (2023-08-25T17:26:05Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not really mutually exclusive, but complementary and spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML [5.433040083728602]
The need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science.
We first contrast notions of compliance in the ethical, legal, and technical fields.
We then focus on the role of values in articulating the synergies between the fields.
arXiv Detail & Related papers (2023-05-09T15:35:31Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- A toolkit of dilemmas: Beyond debiasing and fairness formulas for responsible AI/ML [0.0]
Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging field of critical data studies.
This paper advocates for a situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems.
arXiv Detail & Related papers (2023-03-03T13:58:24Z)
- Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks addressing the latter, and push for broader efforts to implement these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence [0.0]
This paper offers a multi-faceted framework that brings more conceptual precision to the present debate.
It identifies the types of explanations that are most pertinent to artificial intelligence predictions.
It also recognizes the relevance and importance of social and ethical values for the evaluation of these explanations.
arXiv Detail & Related papers (2021-03-01T04:50:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.