Keeping Community in the Loop: Understanding Wikipedia Stakeholder
Values for Machine Learning-Based Systems
- URL: http://arxiv.org/abs/2001.04879v1
- Date: Tue, 14 Jan 2020 16:30:25 GMT
- Title: Keeping Community in the Loop: Understanding Wikipedia Stakeholder
Values for Machine Learning-Based Systems
- Authors: C. Estelle Smith, Bowen Yu, Anjali Srivastava, Aaron Halfaker, Loren
Terveen, Haiyi Zhu
- Abstract summary: We take a Value-Sensitive Algorithm Design approach to understanding a community-created and -maintained machine learning-based algorithm called the Objective Revision Evaluation System (ORES).
ORES is a quality prediction system used in numerous Wikipedia applications and contexts.
- Score: 14.808971334949119
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: On Wikipedia, sophisticated algorithmic tools are used to assess the quality
of edits and take corrective actions. However, algorithms can fail to solve the
problems they were designed for if they conflict with the values of communities
who use them. In this study, we take a Value-Sensitive Algorithm Design
approach to understanding a community-created and -maintained machine
learning-based algorithm called the Objective Revision Evaluation System
(ORES)---a quality prediction system used in numerous Wikipedia applications
and contexts. Five major values converged across stakeholder groups that ORES
(and its dependent applications) should: (1) reduce the effort of community
maintenance, (2) maintain human judgement as the final authority, (3) support
differing peoples' differing workflows, (4) encourage positive engagement with
diverse editor groups, and (5) establish trustworthiness of people and
algorithms within the community. We reveal tensions between these values and
discuss implications for future research to improve algorithms like ORES.
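Since ORES is, per the abstract, a prediction service consumed by many Wikipedia applications, a minimal sketch of how a tool might query it may help ground the discussion. This assumes the public REST endpoint and model names ("damaging", "goodfaith") from the ORES documentation of that period, with a placeholder revision ID; it is an illustration, not code from the paper.

```python
import requests

# ORES scoring endpoint as documented around the time of this paper;
# the service has since been migrated to Wikimedia's Lift Wing platform.
ORES_URL = "https://ores.wikimedia.org/v3/scores/{context}/"

def score_revisions(revids, context="enwiki", models=("damaging", "goodfaith")):
    """Fetch ORES quality predictions for a batch of revision IDs."""
    resp = requests.get(
        ORES_URL.format(context=context),
        params={"models": "|".join(models),
                "revids": "|".join(str(r) for r in revids)},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()[context]["scores"]

# Placeholder revision ID; real IDs come from a wiki's recent-changes feed.
for revid, result in score_revisions([123456789]).items():
    for model, output in result.items():
        prob = output["score"]["probability"]["true"]
        print(f"rev {revid} / {model}: P(true) = {prob:.3f}")
```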
Related papers
- Enhancing Community Detection in Networks: A Comparative Analysis of Local Metrics and Hierarchical Algorithms [49.1574468325115]
This study evaluates the relevance of local similarity metrics for community detection.
The efficacy of these metrics was evaluated by applying the base algorithm to several real networks with varying community sizes.
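As a concrete example of the kind of local similarity metric such comparisons cover, the Jaccard coefficient scores a node pair by the overlap of their neighborhoods. Below is a minimal sketch on a toy graph; it illustrates the general idea, not the specific algorithms evaluated in the paper.

```python
import networkx as nx

def jaccard(G, u, v):
    """Local similarity: overlap of u's and v's neighborhoods."""
    nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Toy graph with two loosely connected clusters (placeholder for a real network).
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)])

# Pairs inside a cluster score higher than pairs bridging clusters,
# which is the signal local-metric community detection exploits.
print(jaccard(G, 0, 1))  # same cluster -> 1/3
print(jaccard(G, 2, 3))  # bridge edge  -> 0.0
```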
arXiv Detail & Related papers (2024-08-17T02:17:09Z)
- Relevance-aware Algorithmic Recourse [3.6141428739228894]
Algorithmic recourse emerges as a tool for clarifying decisions made by predictive models.
Current algorithmic recourse methods treat all domain values equally, which is unrealistic in real-world settings.
We propose a novel framework, Relevance-Aware Algorithmic Recourse (RAAR), that leverages the concept of relevance in applying algorithmic recourse to regression tasks.
arXiv Detail & Related papers (2024-05-29T13:25:49Z)
- Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool [0.9821874476902969]
We show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias.
We find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation.
We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence.
arXiv Detail & Related papers (2024-05-13T13:03:33Z)
- On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There are serious concerns about the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z)
- Homophily and Incentive Effects in Use of Algorithms [17.55279695774825]
We present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making.
First, we examine homophily -- do people defer more to models that tend to agree with them?
Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting?
arXiv Detail & Related papers (2022-05-19T17:11:04Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
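One intuition for why complementarity can be elusive: when a human and an algorithm are equally accurate binary predictors with independent errors, no rule that sees only their two labels can beat either one alone. The simulation below uses made-up accuracies, not the paper's formal model.

```python
import numpy as np

# Toy setting: two equally accurate binary predictors with independent errors.
rng = np.random.default_rng(0)
n = 100_000
y = rng.integers(0, 2, n)

def predictor(accuracy):
    """Predicts the true label with the given probability, else flips it."""
    return np.where(rng.random(n) < accuracy, y, 1 - y)

human, algo = predictor(0.8), predictor(0.8)

agree = human == algo
print("human alone:    ", (human == y).mean())                   # ~0.80
print("when they agree:", (human[agree] == y[agree]).mean())     # ~0.94
print("on disagreement:", (human[~agree] == y[~agree]).mean())   # ~0.50, a coin flip

# Any rule seeing only the two labels keeps the agreed answer and guesses
# on disagreement: accuracy ~ 0.68 * 0.94 + 0.32 * 0.5 = 0.80, i.e. no
# better than either predictor alone -- complementarity is impossible here.
```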
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Coping with Mistreatment in Fair Algorithms [1.2183405753834557]
We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real world datasets demonstrating its efficacy.
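For reference, the Equal Opportunity metric mentioned above compares true positive rates across protected groups. A minimal sketch of measuring the gap on made-up labels follows; it illustrates the metric itself, not the paper's mitigation method.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates between groups 0 and 1.

    Equal Opportunity asks that P(y_pred=1 | y_true=1, group=g) be the
    same for every group g, i.e. that this gap be zero.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Made-up labels: group 0's positives are all recovered, group 1's mostly missed.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equal_opportunity_gap(y_true, y_pred, group))  # |1.0 - 1/3| ~= 0.667
```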
arXiv Detail & Related papers (2021-02-22T03:26:06Z)
- Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for allocating a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds, and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
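The max-min fair allocation underlying such mechanisms can be computed by progressive filling: grow every unsatisfied user's share at an equal rate, freeing capacity whenever a small demand is met. Below is a minimal single-round sketch with known demands; the paper's setting additionally involves repeated rounds and learned preferences.

```python
def max_min_fair(demands, capacity):
    """Progressive-filling (water-filling) max-min fair allocation.

    Repeatedly split the remaining capacity equally among unsatisfied
    users; users whose demand is below the equal share get exactly
    their demand, freeing capacity for everyone else.
    """
    alloc = {u: 0.0 for u in demands}
    remaining = dict(demands)
    while remaining and capacity > 1e-12:
        share = capacity / len(remaining)
        satisfied = []
        for u, d in remaining.items():
            give = min(d, share)
            alloc[u] += give
            capacity -= give
            remaining[u] -= give
            if remaining[u] <= 1e-12:
                satisfied.append(u)
        if not satisfied:  # everyone capped at the equal share: done
            break
        for u in satisfied:
            del remaining[u]
    return alloc

# Three users sharing 10 units: the small demand is met in full,
# and the other two split the leftover equally.
print(max_min_fair({"a": 2, "b": 8, "c": 8}, 10))  # ~{'a': 2.0, 'b': 4.0, 'c': 4.0}
```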
arXiv Detail & Related papers (2020-12-15T22:15:20Z)
- Examining the Impact of Algorithm Awareness on Wikidata's Recommender System Recoin [12.167153941840958]
We conduct online experiments with 105 participants using MTurk for the recommender system Recoin, a gadget for Wikidata.
Our findings include a positive correlation between comprehension of and trust in an algorithmic system in our interactive redesign.
Our results are not yet conclusive, and they suggest that the measures of comprehension, fairness, accuracy, and trust are not exhaustive for the empirical study of algorithm awareness.
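A correlation of this kind is typically checked with a Pearson test on per-participant scores; the sketch below uses made-up Likert ratings (not the study's data) purely to show the computation.

```python
from scipy.stats import pearsonr

# Made-up 5-point Likert ratings per participant; the Recoin study's
# real data came from 105 MTurk participants.
comprehension = [2, 3, 3, 4, 4, 5, 1, 2, 5, 4]
trust         = [2, 2, 3, 4, 5, 5, 1, 3, 4, 4]

r, p = pearsonr(comprehension, trust)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # positive r ~ the reported correlation
```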
arXiv Detail & Related papers (2020-09-18T20:06:53Z)