Examining the Impact of Algorithm Awareness on Wikidata's Recommender
System Recoin
- URL: http://arxiv.org/abs/2009.09049v1
- Date: Fri, 18 Sep 2020 20:06:53 GMT
- Title: Examining the Impact of Algorithm Awareness on Wikidata's Recommender
System Recoin
- Authors: Jesse Josua Benjamin, Claudia Müller-Birn, Simon Razniewski
- Abstract summary: We conduct online experiments with 105 MTurk participants on the recommender system Recoin, a gadget for Wikidata.
Our findings include a positive correlation between comprehension of and trust in an algorithmic system in our interactive redesign.
Our results are not yet conclusive, and suggest that the measures of comprehension, fairness, accuracy, and trust are not yet exhaustive for the empirical study of algorithm awareness.
- Score: 12.167153941840958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The global infrastructure of the Web, designed as an open and
transparent system, has a significant impact on our society. However,
algorithmic systems run by corporate entities that neglect these principles
increasingly populate the Web. Typical representatives of such systems are
recommender systems, which influence our society both at the scale of global
politics and in mundane shopping decisions. Recently, recommender systems have
come under critique for how they may strengthen existing biases or even
generate new ones. To this end, designers and engineers are increasingly urged
to make the functioning and purpose of recommender systems more transparent.
Our research relates to the discourse of algorithm awareness, which reconsiders
the role of algorithm visibility in interface design. We conducted online
experiments with 105 MTurk participants on the recommender system Recoin, a
gadget for Wikidata. In these experiments, we presented users with one of three
designs of Recoin's user interface, each exhibiting a different degree of
explainability and interactivity. Our findings include a positive correlation
between comprehension of and trust in an algorithmic system in our interactive
redesign. However, our results are not yet conclusive, and they suggest that
the measures of comprehension, fairness, accuracy, and trust are not yet
exhaustive for the empirical study of algorithm awareness. Our qualitative
insights provide a first indication of further measures. Our study
participants, for example, were less concerned with the details of an
algorithmic calculation than with who or what judges the result of the
algorithm.
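The abstract does not specify which statistical test produced the reported comprehension-trust correlation; the following minimal sketch, with invented Likert-scale scores and Spearman's rank correlation chosen purely for illustration, shows the kind of per-participant analysis such a finding implies.

```python
# Minimal sketch (not the authors' analysis code): correlate per-participant
# comprehension and trust scores, e.g. averaged Likert-scale ratings from the
# post-task questionnaire. All numbers below are invented for illustration.
from scipy.stats import spearmanr  # pip install scipy

comprehension = [4.0, 3.5, 5.0, 2.5, 4.5, 3.0, 4.0, 2.0, 5.0, 3.5]
trust         = [3.5, 3.0, 4.5, 2.0, 4.0, 3.5, 4.5, 2.5, 5.0, 3.0]

# A positive rho with a small p-value would correspond to the kind of
# comprehension-trust relationship reported for the interactive redesign.
rho, p_value = spearmanr(comprehension, trust)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```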
Related papers
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
- Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems [3.990406494980651]
This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems.
By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration affect their recommendations.
arXiv Detail & Related papers (2024-09-10T23:58:27Z)
- Evaluating Ensemble Methods for News Recommender Systems [50.90330146667386]
This paper demonstrates how ensemble methods can be used to combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND).
Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse.
arXiv Detail & Related papers (2024-06-23T13:40:50Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, error rates range from 12%-30% in easy cases to 36%-43% in hard cases (a minimal sketch of this pairwise evaluation appears at the end of this list).
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to current conditions.
For the former, we propose a novel algorithm called SAROS that takes both kinds of feedback into account for learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant results compared with the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm [0.799536002595393]
We audit the algorithm by presenting it with more than 40,000 faces of all ages and of more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping, and discriminating against females and non-white individuals.
arXiv Detail & Related papers (2021-05-26T21:40:43Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- A Conceptual Framework for Establishing Trust in Real World Intelligent Systems [0.0]
Trust in algorithms can be established by letting users interact with the system.
Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns.
Close inspection can be used to decide whether a solution conforms to expectations or goes beyond them.
arXiv Detail & Related papers (2021-04-12T12:58:47Z)
- A Duet Recommendation Algorithm Based on Jointly Local and Global Representation Learning [15.942495330390463]
We propose a knowledge-aware recommendation algorithm that captures local and global representations from heterogeneous information.
Local and global representations are learned jointly by graph convolutional networks with an attention mechanism, and the final recommendation probability is computed by a fully-connected neural network.
arXiv Detail & Related papers (2020-12-03T01:52:14Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
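The pairwise evaluation mentioned in the Gold Standard Dataset entry above can be made concrete with a short sketch: count how often an automatic similarity score orders a pair of papers differently from a reviewer's self-reported expertise. The data, field names, and helper function below are invented for illustration and are not the dataset's actual schema or the authors' evaluation code.

```python
# Hypothetical illustration of a pairwise-ordering error rate: for every pair
# of papers, check whether the similarity scores rank them in the same order
# as the reviewer's self-reported expertise. Ties are skipped.
from itertools import combinations

def pairwise_error_rate(self_reported: dict, similarity: dict) -> float:
    """Fraction of paper pairs whose similarity-score ordering contradicts
    the reviewer's self-reported expertise ordering."""
    errors, comparisons = 0, 0
    for a, b in combinations(self_reported, 2):
        truth = self_reported[a] - self_reported[b]
        predicted = similarity[a] - similarity[b]
        if truth == 0 or predicted == 0:
            continue  # no strict ordering to compare
        comparisons += 1
        if (truth > 0) != (predicted > 0):
            errors += 1
    return errors / comparisons if comparisons else 0.0

# Toy example: one reviewer, four papers (values invented).
expertise = {"p1": 5, "p2": 3, "p3": 4, "p4": 1}              # self-reported, 1-5
scores    = {"p1": 0.82, "p2": 0.75, "p3": 0.40, "p4": 0.10}  # similarity scores
print(f"pairwise ordering error rate: {pairwise_error_rate(expertise, scores):.2f}")
```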