Disciplining deliberation: a sociotechnical perspective on machine
learning trade-offs
- URL: http://arxiv.org/abs/2403.04226v1
- Date: Thu, 7 Mar 2024 05:03:18 GMT
- Title: Disciplining deliberation: a sociotechnical perspective on machine
learning trade-offs
- Authors: Sina Fazelpour
- Abstract summary: This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI): between predictive accuracy and fairness, and between predictive accuracy and interpretability.
I show how neglecting these considerations can distort our normative deliberations, and result in costly and misaligned interventions and justifications.
I end by drawing out the normative opportunities and challenges that emerge out of these considerations, and highlighting the imperative of interdisciplinary collaboration in fostering responsible AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on two highly publicized formal trade-offs in the field of
responsible artificial intelligence (AI) -- between predictive accuracy and
fairness and between predictive accuracy and interpretability. These formal
trade-offs are often taken by researchers, practitioners, and policy-makers to
directly imply corresponding tensions between underlying values. Thus
interpreted, the trade-offs have formed a core focus of normative engagement in
AI governance, accompanied by a particular division of labor along disciplinary
lines. This paper argues against this prevalent interpretation by drawing
attention to three sets of considerations that are critical for bridging the
gap between these formal trade-offs and their practical impacts on relevant
values. I show how neglecting these considerations can distort our normative
deliberations, and result in costly and misaligned interventions and
justifications. Taken together, these considerations form a sociotechnical
framework that could guide those involved in AI governance to assess how, in
many cases, we can and should have higher aspirations than the prevalent
interpretation of the trade-offs would suggest. I end by drawing out the
normative opportunities and challenges that emerge out of these considerations,
and highlighting the imperative of interdisciplinary collaboration in fostering
responsible AI.
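To make the first of these formal trade-offs concrete, the following sketch compares two candidate classifiers on predictive accuracy and a common statistical fairness criterion, demographic parity. It is purely illustrative: the toy data, classifier outputs, and function names are hypothetical and do not come from the paper.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

# Toy example: two hypothetical classifiers scored on the same data.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
clf_a  = [1, 0, 1, 1, 1, 0, 1, 0]   # more accurate, larger parity gap
clf_b  = [1, 0, 1, 0, 1, 0, 1, 0]   # less accurate, zero parity gap

for name, y_pred in [("A", clf_a), ("B", clf_b)]:
    print(name, accuracy(y_true, y_pred), demographic_parity_gap(y_pred, group))
```

On this toy data, the more accurate classifier exhibits the larger parity gap. The paper's argument is precisely that such formal gaps should not be read off directly as tensions between the underlying values.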
Related papers
- Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Responsible AI Considerations in Text Summarization Research: A Review
of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Queering the ethics of AI [0.6993026261767287]
The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination.
The chapter argues that a critical examination of the conception of equality that often underpins non-discrimination law is necessary.
arXiv Detail & Related papers (2023-08-25T17:26:05Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
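The distinction at issue can be made concrete with a small sketch (illustrative only; the toy data and helper functions are hypothetical, not from the paper) computing statistical parity, equal positive-prediction rates across groups, and predictive parity, equal precision among predicted positives:

```python
def positive_rate(y_pred, group, g):
    """P(Y_hat = 1 | A = g): share of group g predicted positive."""
    preds = [p for p, a in zip(y_pred, group) if a == g]
    return sum(preds) / len(preds)

def ppv(y_true, y_pred, group, g):
    """P(Y = 1 | Y_hat = 1, A = g): precision among group g's predicted positives."""
    hits = [t for t, p, a in zip(y_true, y_pred, group) if a == g and p == 1]
    return sum(hits) / len(hits)

# Toy data: statistical parity holds, predictive parity does not.
y_true = [1, 0, 0, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(positive_rate(y_pred, group, 0), positive_rate(y_pred, group, 1))  # 0.5 0.5
print(ppv(y_true, y_pred, group, 0), ppv(y_true, y_pred, group, 1))      # 0.5 1.0
```

Here the two groups receive positive predictions at the same rate, yet those predictions are reliable to different degrees, illustrating how the two parity notions can come apart and occupy different points on the spectrum the paper describes.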
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Stronger Together: on the Articulation of Ethical Charters, Legal Tools,
and Technical Documentation in ML [5.433040083728602]
The need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science.
We first contrast notions of compliance in the ethical, legal, and technical fields.
We then focus on the role of values in articulating the synergies between the fields.
arXiv Detail & Related papers (2023-05-09T15:35:31Z)
- Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects [21.133468554780404]
We focus on two-sided interactions, drawing on support spread across a diverse literature.
This catalogue can be helpful in raising awareness of the possible interactions between aspects of ethics principles.
arXiv Detail & Related papers (2023-04-17T13:43:13Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination
of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks for the latter, but push for broader efforts into implementing these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z)
- Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Towards a multi-stakeholder value-based assessment framework for
algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.