A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation
- URL: http://arxiv.org/abs/2203.01455v1
- Date: Wed, 2 Mar 2022 23:22:03 GMT
- Title: A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation
- Authors: Jacob Metcalf, Emanuel Moss, Ranjit Singh, Emnet Tafese, Elizabeth Anne Watkins
- Abstract summary: We argue that developers largely have a monopoly on information about how their systems actually work, and that robust accountability regimes must establish opportunities for publics to cohere around shared experiences and interests.
- Score: 3.4438724671481755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Central to a number of scholarly, regulatory, and public conversations about
algorithmic accountability is the question of who should have access to
documentation that reveals the inner workings, intended function, and
anticipated consequences of algorithmic systems, potentially establishing new
routes for impacted publics to contest the operations of these systems.
Currently, developers largely have a monopoly on information about how their
systems actually work and are incentivized to maintain their own ignorance
about aspects of how their systems affect the world. Increasingly, legislators,
regulators and advocates have turned to assessment documentation in order to
address the gap between the public's experience of algorithmic harms and the
obligations of developers to document and justify their design decisions.
However, issues of standing and expertise currently prevent publics from
cohering around shared interests in preventing and redressing algorithmic
harms; as we demonstrate with multiple cases, courts often find computational
harms non-cognizable and rarely require developers to address material claims
of harm. Constructed with a triadic accountability relationship, algorithmic
impact assessment regimes could alter this situation by establishing procedural
rights around public access to reporting and documentation. Developing a
relational approach to accountability, we argue that robust accountability
regimes must establish opportunities for publics to cohere around shared
experiences and interests, and to contest the outcomes of algorithmic systems
that affect their lives. Furthermore, algorithmic accountability policies
currently under consideration in many jurisdictions must provide the public
with adequate standing and opportunities to access and contest the
documentation provided by the actors and the judgments passed by the forum.
Related papers
- Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool [0.9821874476902969]
We show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias.
We find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation.
We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence.
arXiv Detail & Related papers (2024-05-13T13:03:33Z)
- Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector [0.0]
Communication issues between diverse stakeholders can lead to misinterpretation and misuse of AI algorithms.
We conduct interviews with practitioners working on algorithmic systems in the public sector.
We identify key elements of communication processes that underlie fairness-related human decisions.
arXiv Detail & Related papers (2024-03-20T14:20:42Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully autonomous machine behavior is often beyond ethical affordances.
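Purely to make this setup concrete, here is an invented toy mediator, not the paper's actual algorithm: it defers to the human's choice unless a learned model of expert behavior confidently disagrees. All names, data, and thresholds are illustrative.

```python
# Toy sketch of the decision-mediation setup (illustrative; not the paper's
# method). A mediator arbitrates between an imperfect human action and a
# learned model of (oracle) expert behavior, overriding the human only when
# the expert model confidently disagrees.
from dataclasses import dataclass

@dataclass
class Mediator:
    # Hypothetical stand-in for a learned model: case id -> P(action | expert).
    expert_probs: dict
    override_threshold: float = 0.9

    def decide(self, case: str, human_action: str) -> str:
        probs = self.expert_probs.get(case, {})
        if not probs:
            return human_action  # no expert signal, so defer to the human
        expert_action = max(probs, key=probs.get)
        if expert_action != human_action and probs[expert_action] >= self.override_threshold:
            return expert_action  # confident disagreement: follow the expert model
        return human_action

mediator = Mediator(expert_probs={"case-1": {"treat": 0.95, "wait": 0.05}})
print(mediator.decide("case-1", human_action="wait"))  # prints "treat"
```

In practice the expert model and threshold would be learned from data; hard-coding them here just keeps the sketch self-contained.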
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Algorithmic Fairness in Business Analytics: Directions for Research and Practice [24.309795052068388]
This paper offers a forward-looking, BA-focused review of algorithmic fairness.
We first review the state-of-the-art research on sources and measures of bias, as well as bias mitigation algorithms.
We then provide a detailed discussion of the utility-fairness relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted.
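As a concrete illustration of what one such "measure of bias" can look like (a minimal, invented sketch, not taken from the review), demographic parity difference compares positive-decision rates across groups:

```python
# Minimal sketch of one common bias measure, demographic parity difference:
# the gap in positive-decision rates between two groups. The data below are
# invented purely for illustration.

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

group_a = [1, 1, 0, 1, 0, 1]  # e.g. loan approvals observed for group A
group_b = [1, 0, 0, 0, 0, 1]  # e.g. loan approvals observed for group B
print(demographic_parity_difference(group_a, group_b))  # ~0.333; 0.0 is parity
```

Mitigation algorithms, in turn, adjust the data, the training objective, or the final predictions to shrink gaps like this one.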
arXiv Detail & Related papers (2022-07-22T10:21:38Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
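To make the "black-box" idea concrete, here is a minimal sketch under invented assumptions; the platform_query interface is hypothetical, and this is not the method the paper proposes. Such audits typically compare what the algorithm returns for matched profiles that differ only in a sensitive attribute:

```python
# Schematic of a black-box audit (hypothetical interface; all names invented).
# Lacking access to platform internals, an auditor queries the system with
# matched profiles and compares what the algorithm serves to each group.
from collections import Counter

def audit_exposure(platform_query, profiles_a, profiles_b, n_items=20):
    """platform_query(profile, n) is assumed to return the n curated items."""
    exposure_a, exposure_b = Counter(), Counter()
    for profile in profiles_a:
        exposure_a.update(platform_query(profile, n_items))
    for profile in profiles_b:
        exposure_b.update(platform_query(profile, n_items))
    # How often each item (e.g. a job ad) was shown to each group.
    return {item: (exposure_a[item], exposure_b[item])
            for item in set(exposure_a) | set(exposure_b)}
```

A platform-supported variant might replace the scraped platform_query with a query interface the platform itself exposes to vetted auditors.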
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- The Conflict Between Explainable and Accountable Decision-Making Algorithms [10.64167691614925]
Decision-making algorithms are being used in consequential decisions, such as who should be enrolled in health care programs and who should be hired.
The Explainable AI (XAI) initiative aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
arXiv Detail & Related papers (2022-05-11T07:19:28Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms [2.5372245630249632]
We show how injustices materialize for stakeholders across three algorithmic stages in the misinformation detection pipeline.
This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with algorithmic misinformation detection.
arXiv Detail & Related papers (2022-04-28T15:31:13Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.