Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating
Societal Impacts of Algorithmic Decision Making
- URL: http://arxiv.org/abs/2207.09525v1
- Date: Tue, 19 Jul 2022 19:46:53 GMT
- Title: Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating
Societal Impacts of Algorithmic Decision Making
- Authors: Julia Barnett and Nicholas Diakopoulos
- Abstract summary: We employ crowdsourcing to uncover different types of impact areas based on a set of governmental algorithmic decision making tools.
Our findings suggest that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues.
- Score: 7.068913546756094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing pervasiveness of algorithms across industry and
government, a growing body of work has grappled with how to understand their
societal impact and ethical implications. Various methods have been used at
different stages of algorithm development to encourage researchers and
designers to consider the potential societal impact of their research. An
understudied yet promising area in this realm is using participatory foresight
to anticipate these different societal impacts. We employ crowdsourcing as a
means of participatory foresight to uncover four different types of impact
areas based on a set of governmental algorithmic decision making tools: (1)
perceived valence, (2) societal domains, (3) specific abstract impact types,
and (4) ethical algorithm concerns. Our findings suggest that this method is
effective at leveraging the cognitive diversity of the crowd to uncover a range
of issues. We further analyze the complexities within the interaction of the
impact areas identified to demonstrate how crowdsourcing can illuminate
patterns around the connections between impacts. Ultimately this work
establishes crowdsourcing as an effective means of anticipating algorithmic
impact which complements other approaches towards assessing algorithms in
society by leveraging participatory foresight and cognitive diversity.
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Statistical Collusion by Collectives on Learning Platforms [49.1574468325115]
Collectives may seek to influence platforms to align with their own interests.
It is essential to understand the computations that collectives must perform to impact platforms in this way.
We develop a framework that provides a theoretical and algorithmic treatment of these issues.
arXiv Detail & Related papers (2025-02-07T12:36:23Z)
- Towards Opinion Shaping: A Deep Reinforcement Learning Approach in Bot-User Interactions [2.85386288555414]
This paper explores the impact of interference in social network algorithms via user-bot interactions, focusing on the Stochastic Bounded Confidence Model (SBCM).
It integrates the Deep Deterministic Policy Gradient (DDPG) algorithm and its variants to experiment with different Deep Reinforcement Learning (DRL) approaches.
Experimental results demonstrate that this approach can result in efficient opinion shaping, indicating its potential for deploying advertising resources on social platforms.
arXiv Detail & Related papers (2024-09-12T23:39:07Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Re-mine, Learn and Reason: Exploring the Cross-modal Semantic Correlations for Language-guided HOI detection [57.13665112065285]
Human-Object Interaction (HOI) detection is a challenging computer vision task.
We present a framework that enhances HOI detection by incorporating structured text knowledge.
arXiv Detail & Related papers (2023-07-25T14:20:52Z)
- Homophily and Incentive Effects in Use of Algorithms [17.55279695774825]
We present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making.
First, we examine homophily -- do people defer more to models that tend to agree with them?
Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting?
arXiv Detail & Related papers (2022-05-19T17:11:04Z)
- Detecting adversaries in Crowdsourcing [71.20185379303479]
This work investigates the effects of adversaries on crowdsourced classification, under the popular Dawid and Skene model.
The adversaries are allowed to deviate arbitrarily from the considered crowdsourcing model, and may potentially cooperate.
We develop an approach that leverages the structure of second-order moments of annotator responses, to identify large numbers of adversaries, and mitigate their impact on the crowdsourcing task.
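The second-order-moment idea in the entry above can be illustrated with a toy simulation (this is a simplified sketch under assumed annotator behavior, not the paper's actual estimator): mapping binary labels to ±1 and inspecting the annotator–annotator correlation matrix, honest annotators form a positively correlated block while adversarial annotators show negative average agreement with it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_annot = 500, 10
truth = rng.integers(0, 2, n_items)

# Hypothetical setup: annotators 0-7 answer correctly with prob. 0.8;
# annotators 8-9 are adversaries answering correctly with prob. 0.2.
responses = np.empty((n_annot, n_items), dtype=int)
for a in range(n_annot):
    p_correct = 0.2 if a >= 8 else 0.8
    correct = rng.random(n_items) < p_correct
    responses[a] = np.where(correct, truth, 1 - truth)

# Second-order statistics: map labels {0,1} to {-1,+1} and compute the
# annotator correlation matrix. Honest pairs correlate positively
# (~0.36 here); honest-adversary pairs correlate negatively.
signed = 2 * responses - 1
corr = np.corrcoef(signed)

# Average agreement of each annotator with everyone else; annotators
# with negative average agreement are flagged as suspected adversaries.
mean_agreement = (corr.sum(axis=1) - 1) / (n_annot - 1)
suspected = np.where(mean_agreement < 0)[0]
print(suspected)
```

Because the honest majority dominates, the flagged set recovers the planted adversaries; the actual paper handles the harder case where adversaries may cooperate and deviate arbitrarily from the response model.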
arXiv Detail & Related papers (2021-10-07T15:07:07Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Dimensions of Diversity in Human Perceptions of Algorithmic Fairness [37.372078500394984]
We explore how people's perceptions of procedural algorithmic fairness relate to their demographics and personal experiences.
Political views and personal experience with the algorithmic decision context significantly influence perceptions about the fairness of using different features for bail decision-making.
arXiv Detail & Related papers (2020-05-02T11:59:39Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.