Justice in Misinformation Detection Systems: An Analysis of Algorithms,
Stakeholders, and Potential Harms
- URL: http://arxiv.org/abs/2204.13568v2
- Date: Fri, 29 Apr 2022 15:02:59 GMT
- Authors: Terrence Neumann and Maria De-Arteaga and Sina Fazelpour
- Abstract summary: We show how injustices materialize for stakeholders across three algorithmic stages in the misinformation detection pipeline.
This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with algorithmic misinformation detection.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Faced with the scale and surge of misinformation on social media, many
platforms and fact-checking organizations have turned to algorithms for
automating key parts of misinformation detection pipelines. While offering a
promising solution to the challenge of scale, the ethical and societal risks
associated with algorithmic misinformation detection are not well-understood.
In this paper, we employ and extend the notion of informational justice to
develop a framework for explicating issues of justice relating to
representation, participation, distribution of benefits and burdens, and
credibility in the misinformation detection pipeline. Drawing on the framework:
(1) we show how injustices materialize for stakeholders across three
algorithmic stages in the pipeline; (2) we suggest empirical measures for
assessing these injustices; and (3) we identify potential sources of these
harms. This framework should help researchers, policymakers, and practitioners
reason about potential harms or risks associated with these algorithms and
provide conceptual guidance for the design of algorithmic fairness audits in
this domain.
Related papers
- An Information-Flow Perspective on Algorithmic Fairness [0.951828574518325]
This work presents insights gained by investigating the relationship between algorithmic fairness and the concept of secure information flow.
We derive a new quantitative notion of fairness called fairness spread, which can be easily analyzed using quantitative information flow.
arXiv Detail & Related papers (2023-12-15T14:46:36Z)
- The right to audit and power asymmetries in algorithm auditing [68.8204255655161]
We elaborate on the challenges and asymmetries mentioned by Sandvig at IC2S2 2021.
We also contribute a discussion of the asymmetries that were not covered by Sandvig.
We discuss the implications these asymmetries have for algorithm auditing research.
arXiv Detail & Related papers (2023-02-16T13:57:41Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Algorithmic Fairness in Business Analytics: Directions for Research and Practice [24.309795052068388]
This paper offers a forward-looking, BA-focused review of algorithmic fairness.
We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms.
We then provide a detailed discussion of the utility-fairness relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted.
arXiv Detail & Related papers (2022-07-22T10:21:38Z)
- Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance [3.8997087223115634]
We discuss the challenges of third party oversight in the current AI landscape.
We show that the institutional design of such audits is far from monolithic.
We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability.
arXiv Detail & Related papers (2022-06-09T19:18:47Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation [3.4438724671481755]
We argue that developers largely have a monopoly on information about how their systems actually work.
We argue that robust accountability regimes must establish opportunities for publics to cohere around shared experiences and interests.
arXiv Detail & Related papers (2022-03-02T23:22:03Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and the scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Mitigating Bias in Algorithmic Systems -- A Fish-Eye View [8.19357693559909]
This survey provides a "fish-eye view," examining approaches across four areas of research.
The literature describes three steps toward a comprehensive treatment -- bias detection, fairness management and explainability management.
arXiv Detail & Related papers (2021-03-31T10:14:28Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.