Towards Automatic Bias Detection in Knowledge Graphs
- URL: http://arxiv.org/abs/2109.10697v1
- Date: Sun, 19 Sep 2021 03:58:25 GMT
- Title: Towards Automatic Bias Detection in Knowledge Graphs
- Authors: Daphna Keidar, Mian Zhong, Ce Zhang, Yash Raj Shrestha, Bibek Paudel
- Abstract summary: We describe a framework for identifying biases in knowledge graph embeddings, based on numerical bias metrics.
We illustrate the framework with three different bias measures on the task of profession prediction.
The relations flagged as biased can then be handed to decision makers, who judge whether subsequent debiasing is warranted.
- Score: 5.402498799294428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the recent surge in social applications relying on knowledge graphs, the
need for techniques to ensure fairness in KG-based methods is becoming
increasingly evident. Previous works have demonstrated that KGs are prone to
various social biases, and have proposed multiple methods for debiasing them.
However, in such studies, the focus has been on debiasing techniques, while the
relations to be debiased are specified manually by the user. As manual
specification is itself susceptible to human cognitive bias, there is a need
for a system capable of quantifying and exposing biases, one that can support more
informed decisions on what to debias. To address this gap in the literature, we
describe a framework for identifying biases present in knowledge graph
embeddings, based on numerical bias metrics. We illustrate the framework with
three different bias measures on the task of profession prediction, and the framework can
be flexibly extended to further bias definitions and applications. The
relations flagged as biased can then be handed to decision makers, who judge
whether subsequent debiasing is warranted.
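The abstract does not spell out its three bias measures, so the following is a minimal illustrative sketch rather than the paper's method: it assumes a TransE-style scoring function, a hypothetical `has_profession` relation, and a binary sensitive attribute, and it flags a relation by the gap in mean predicted plausibility between the two groups.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score: higher means (h, r, t) is more
    plausible. Assumed here for illustration; the paper's framework is
    described as operating on knowledge graph embeddings in general."""
    return -np.linalg.norm(h + r - t)

def group_gap(person_embs, groups, rel_emb, prof_emb):
    """Toy bias metric: difference in mean predicted plausibility of one
    profession between two demographic groups of person entities."""
    scores = np.array([transe_score(h, rel_emb, prof_emb) for h in person_embs])
    groups = np.asarray(groups)
    return scores[groups == 0].mean() - scores[groups == 1].mean()

# Toy usage with random vectors standing in for a trained embedding model.
rng = np.random.default_rng(0)
dim = 50
people = rng.normal(size=(200, dim))     # person entity embeddings
groups = rng.integers(0, 2, size=200)    # hypothetical binary sensitive attribute
has_profession = rng.normal(size=dim)    # relation embedding (hypothetical name)
nurse = rng.normal(size=dim)             # profession entity embedding
print(group_gap(people, groups, has_profession, nurse))
```

A large absolute gap would flag the relation for the decision makers mentioned above; the paper's actual measures and thresholds may differ.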
Related papers
- Measuring and Addressing Indexical Bias in Information Retrieval [69.7897730778898]
The PAIR framework supports automatic bias audits for ranked documents or entire IR systems.
After introducing DUO, we run an extensive evaluation of 8 IR systems on a new corpus of 32k synthetic and 4.7k natural documents.
A human behavioral study validates our approach, showing that our bias metric can help predict when and how indexical bias will shift a reader's opinion.
arXiv Detail & Related papers (2024-06-06T17:42:37Z)
- Language-guided Detection and Mitigation of Unknown Dataset Bias [23.299264313976213]
We propose a framework that identifies potential biases as keywords, without prior knowledge, based on their partial occurrence in the captions.
Our framework not only outperforms existing methods that lack prior knowledge, but is also comparable with a method that assumes prior knowledge.
arXiv Detail & Related papers (2024-06-05T03:11:33Z)
- A Principled Approach for a New Bias Measure [7.352247786388098]
We propose the definition of Uniform Bias (UB), the first bias measure with a clear and simple interpretation in the full range of bias values.
Our results are experimentally validated using nine publicly available datasets and theoretically analyzed, providing novel insights into the problem.
Based on our approach, we also design a bias mitigation model that might be useful to policymakers.
arXiv Detail & Related papers (2024-05-20T18:14:33Z)
- Diversity matters: Robustness of bias measurements in Wikidata [4.950095974653716]
We reveal data biases that surface in Wikidata for thirteen different demographics selected from seven continents.
We conduct extensive experiments on a large number of occupations sampled from these thirteen demographics, with gender as the sensitive attribute.
We show that the choice of state-of-the-art KG embedding algorithm has a strong impact on the ranking of biased occupations, irrespective of gender.
arXiv Detail & Related papers (2023-02-27T18:38:10Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and of identifying potential causes of social bias in downstream tasks (a generic cosine-score sketch appears after this list).
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss (a generic sketch of such a regularized objective appears after this list).
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Towards Measuring Bias in Image Classification [61.802949761385]
Convolutional Neural Networks (CNNs) have become the state of the art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncover data bias by means of attribution maps (a generic saliency-map sketch appears after this list).
arXiv Detail & Related papers (2021-07-01T10:50:39Z)
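The SAME entry above names a cosine-based bias score, but its summary gives no formula. The sketch below is a generic cosine-association score in the same spirit, not the SAME formula itself, with random vectors standing in for trained word embeddings.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association_bias(word_vec, attr_a, attr_b):
    """Generic cosine-based bias score: mean similarity of a word to
    attribute set A minus its mean similarity to attribute set B.
    Illustrative only; this is not the SAME formula."""
    return (np.mean([cosine(word_vec, a) for a in attr_a])
            - np.mean([cosine(word_vec, b) for b in attr_b]))

rng = np.random.default_rng(1)
nurse = rng.normal(size=300)               # embedding of a probe word
female_terms = rng.normal(size=(5, 300))   # e.g., embeddings of "she", "woman"
male_terms = rng.normal(size=(5, 300))     # e.g., embeddings of "he", "man"
print(association_bias(nurse, female_terms, male_terms))
```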
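The information-theoretic entry likewise mentions a bias regularization loss without defining it. A minimal sketch of the general pattern, a task loss plus a weighted bias penalty, could look like the following; every name here is hypothetical, and the penalty is a simple adversarial term rather than the paper's information-theoretic measure.

```python
import torch
import torch.nn.functional as F

def debiased_loss(task_logits, labels, attr_logits, attrs, lam=0.1):
    """Generic debiased objective: task loss plus a weighted bias penalty.
    The penalty rewards failing to predict the sensitive attribute, a
    common stand-in; the cited paper instead uses an information-theoretic
    bias measurement."""
    task_loss = F.cross_entropy(task_logits, labels)
    bias_penalty = -F.cross_entropy(attr_logits, attrs)  # maximize attribute loss
    return task_loss + lam * bias_penalty

# Toy usage with random tensors in place of model outputs.
task_logits = torch.randn(8, 4)            # 8 examples, 4 task classes
labels = torch.randint(0, 4, (8,))
attr_logits = torch.randn(8, 2)            # sensitive-attribute predictions
attrs = torch.randint(0, 2, (8,))
print(debiased_loss(task_logits, labels, attr_logits, attrs))
```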
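Finally, the image-classification entry proposes uncovering data bias via attribution maps, without saying which attribution method is used. The sketch below uses plain gradient saliency as one simple, assumed instance.

```python
import torch

def saliency_map(model, image, target_class):
    """Plain gradient saliency, one simple form of attribution map: the
    absolute gradient of the target-class score w.r.t. the input. Regions
    with large values most influence the prediction and can be inspected
    for spurious, bias-indicating features."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=0)  # collapse color channels

# Toy usage with a small untrained CNN.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)
print(saliency_map(model, torch.randn(3, 32, 32), target_class=0).shape)
```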
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.