BiaScope: Visual Unfairness Diagnosis for Graph Embeddings
- URL: http://arxiv.org/abs/2210.06417v1
- Date: Wed, 12 Oct 2022 17:12:19 GMT
- Title: BiaScope: Visual Unfairness Diagnosis for Graph Embeddings
- Authors: Agapi Rissaki, Bruno Scarone, David Liu, Aditeya Pandey, Brennan
Klein, Tina Eliassi-Rad, Michelle A. Borkin
- Abstract summary: We present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings.
It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology.
- Score: 8.442750346008431
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The issue of bias (i.e., systematic unfairness) in machine learning models
has recently attracted the attention of both researchers and practitioners. For
the graph mining community in particular, an important goal toward algorithmic
fairness is to detect and mitigate bias incorporated into graph embeddings
since they are commonly used in human-centered applications, e.g., social-media
recommendations. However, simple analytical methods for detecting bias
typically involve aggregate statistics which do not reveal the sources of
unfairness. Instead, visual methods can provide a holistic fairness
characterization of graph embeddings and help uncover the causes of observed
bias. In this work, we present BiaScope, an interactive visualization tool that
supports end-to-end visual unfairness diagnosis for graph embeddings. The tool
is the product of a design study in collaboration with domain experts. It
allows the user to (i) visually compare two embeddings with respect to
fairness, (ii) locate nodes or graph communities that are unfairly embedded,
and (iii) understand the source of bias by interactively linking the relevant
embedding subspace with the corresponding graph topology. Experts' feedback
confirms that our tool is effective at detecting and diagnosing unfairness.
Thus, we envision our tool both as a companion for researchers in designing
their algorithms as well as a guide for practitioners who use off-the-shelf
graph embeddings.
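The abstract contrasts aggregate bias statistics with visual diagnosis. As a minimal, hypothetical illustration of the aggregate-statistics approach the paper argues is insufficient, the sketch below computes one such summary number: the gap between mean embedding distances across and within sensitive groups (this metric and its interface are assumptions for illustration, not BiaScope's actual measure).

```python
import numpy as np

def embedding_group_disparity(embeddings, groups):
    """Aggregate bias statistic for node embeddings: mean pairwise
    distance across sensitive groups minus mean distance within
    groups. A large positive value suggests the groups are embedded
    far apart. Hypothetical illustration only."""
    embeddings = np.asarray(embeddings, dtype=float)
    groups = np.asarray(groups)
    # all pairwise Euclidean distances via broadcasting
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    same = groups[:, None] == groups[None, :]
    # keep each unordered pair once (upper triangle, no diagonal)
    iu = np.triu_indices(len(groups), k=1)
    within = dist[iu][same[iu]].mean()
    across = dist[iu][~same[iu]].mean()
    return across - within
```

A single number like this can flag that bias exists, but, as the paper notes, it cannot point to which nodes or communities are unfairly embedded, which is the gap the visual tool targets.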
Related papers
- FairWire: Fair Graph Generation [18.6649050946022]
This work focuses on the analysis and mitigation of structural bias for both real and synthetic graphs.
To alleviate the identified bias factors, we design a novel fairness regularizer that offers a versatile use.
We propose a fair graph generation framework, FairWire, by leveraging our fair regularizer design in a generative model.
arXiv Detail & Related papers (2024-02-06T20:43:00Z)
- Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z)
- A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and Future Directions [64.84521350148513]
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios.
Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data.
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce.
This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes.
arXiv Detail & Related papers (2023-08-26T09:11:44Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- A Survey on Fairness for Machine Learning on Graphs [2.3326951882644553]
This survey is the first one dedicated to fairness for relational data.
It aims to present a comprehensive review of state-of-the-art techniques in fairness on graph mining.
arXiv Detail & Related papers (2022-05-11T10:40:56Z)
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning [14.664485680918725]
We propose a biased edge dropout algorithm (FairDrop) to counter-act homophily and improve fairness in graph representation learning.
FairDrop can be plugged in easily on many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions.
We prove that the proposed algorithm can successfully improve the fairness of all models with at most a small or negligible drop in accuracy.
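The FairDrop summary describes dropping edges in a biased way to counteract homophily. A minimal sketch of that idea, assuming a simple edge-list representation and illustrative drop probabilities (not the paper's exact formulation), might look like:

```python
import random

def biased_edge_dropout(edges, groups, p_same=0.6, p_diff=0.2, seed=0):
    """Sketch of biased edge dropout in the spirit of FairDrop:
    homophilic edges (same sensitive group at both endpoints) are
    dropped with a higher probability than heterophilic ones, so the
    retained graph is less dominated by within-group links. The
    probabilities and interface are illustrative assumptions."""
    rng = random.Random(seed)
    kept = []
    for u, v in edges:
        p_drop = p_same if groups[u] == groups[v] else p_diff
        if rng.random() >= p_drop:
            kept.append((u, v))
    return kept
```

Because the sampling is independent per edge, a scheme like this can be applied at each training epoch of an existing embedding method without modifying the method itself, which matches the summary's claim that the approach plugs in easily.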
arXiv Detail & Related papers (2021-04-29T08:59:36Z)
- A Survey of Adversarial Learning on Graphs [59.21341359399431]
We investigate and summarize the existing works on graph adversarial learning tasks.
Specifically, we survey and unify the existing works w.r.t. attack and defense in graph analysis tasks.
We emphasize the importance of related evaluation metrics, investigate and summarize them comprehensively.
arXiv Detail & Related papers (2020-03-10T12:48:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.