VISPUR: Visual Aids for Identifying and Interpreting Spurious
Associations in Data-Driven Decisions
- URL: http://arxiv.org/abs/2307.14448v1
- Date: Wed, 26 Jul 2023 18:40:07 GMT
- Title: VISPUR: Visual Aids for Identifying and Interpreting Spurious
Associations in Data-Driven Decisions
- Authors: Xian Teng, Yongsu Ahn, Yu-Ru Lin
- Abstract summary: Simpson's paradox is a phenomenon where aggregated and subgroup-level associations contradict each other.
Existing tools offer little insight to help humans locate, reason about, and prevent the pitfalls of spurious associations in practice.
We propose VISPUR, a visual analytic system that provides a causal analysis framework and a human-centric workflow for tackling spurious associations.
- Score: 8.594140167290098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Big data and machine learning tools have jointly empowered humans in making
data-driven decisions. However, many of them capture empirical associations
that might be spurious due to confounding factors and subgroup heterogeneity.
The famous Simpson's paradox is one such phenomenon, where aggregated and
subgroup-level associations contradict each other, causing cognitive
confusion and difficulty in making adequate interpretations and decisions.
Existing tools offer little insight to help humans locate, reason about, and
prevent the pitfalls of spurious associations in practice. We propose VISPUR, a
visual analytic system that provides a causal analysis framework and a
human-centric workflow for tackling spurious associations. These include a
CONFOUNDER DASHBOARD, which can automatically identify possible confounding
factors, and a SUBGROUP VIEWER, which allows for the visualization and
comparison of diverse subgroup patterns that may result in a
misinterpretation of causality. Additionally, we propose a REASONING
STORYBOARD, which uses a flow-based approach to illustrate paradoxical
phenomena, as well as an interactive DECISION DIAGNOSIS panel that helps ensure
accountable decision-making. Through an expert interview and a controlled user
experiment, our qualitative and quantitative results demonstrate that the
proposed "de-paradox" workflow and the designed visual analytic system are
effective in helping human users to identify and understand spurious
associations, as well as to make accountable causal decisions.
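To make the paradox concrete, below is a minimal pandas sketch of Simpson's paradox using the classic kidney-stone treatment data (Charig et al., 1986); the numbers and code are illustrative background, not an example or implementation from the VISPUR paper.

```python
# Simpson's paradox: treatment A wins within every severity subgroup,
# yet treatment B appears to win in the aggregate, because severity
# (a confounder) is unevenly distributed across the two treatment arms.
import pandas as pd

df = pd.DataFrame({
    "treatment": ["A", "A", "B", "B"],
    "severity":  ["small", "large", "small", "large"],
    "recovered": [81, 192, 234, 55],
    "total":     [87, 263, 270, 80],
})

# Subgroup-level recovery rates: A beats B in both severity strata.
subgroups = df.assign(rate=df["recovered"] / df["total"])
print(subgroups[["treatment", "severity", "rate"]])

# Aggregated recovery rates: B appears to beat A overall.
agg = df.groupby("treatment")[["recovered", "total"]].sum()
print(agg["recovered"] / agg["total"])
```

Spotting exactly this kind of reversal, and the confounder behind it, is the task that the CONFOUNDER DASHBOARD and SUBGROUP VIEWER are designed to support.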
Related papers
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities [19.83434949066066]
This paper introduces a novel intelligent framework, referred to as OlaGPT.
OlaGPT draws on a cognitive architecture framework and proposes to simulate certain aspects of human cognition.
The framework involves approximating different cognitive modules, including attention, memory, reasoning, learning, and corresponding scheduling and decision-making mechanisms.
arXiv Detail & Related papers (2023-05-23T09:36:51Z) - Causal Disentangled Variational Auto-Encoder for Preference
Understanding in Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z) - ESCAPE: Countering Systematic Errors from Machine's Blind Spots via
Interactive Visual Analysis [13.97436974677563]
We propose ESCAPE, a visual analytic system that promotes a human-in-the-loop workflow for countering systematic errors.
By allowing human users to easily inspect spurious associations, the system helps users spontaneously recognize concepts associated with misclassifications.
We also propose two statistical approaches: relative concept association, which better quantifies the association between a concept and instances, and a debiasing method that mitigates spurious associations.
arXiv Detail & Related papers (2023-03-16T21:29:50Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - VAC2: Visual Analysis of Combined Causality in Event Sequences [6.145427901944597]
We develop a combined causality visual analysis system to help users explore combined causes as well as individual causes.
This interactive system supports multi-level causality exploration with diverse ordering strategies and a focus and context technique.
The usefulness and effectiveness of the system are further evaluated by conducting a pilot user study and two case studies on event sequence data.
arXiv Detail & Related papers (2022-06-11T04:53:23Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Visual Causality Analysis of Event Sequence Data [32.74361488457415]
We introduce a visual analytics method for recovering causalities in event sequence data.
We extend the Granger causality analysis algorithm on Hawkes processes to incorporate user feedback into causal model refinement.
arXiv Detail & Related papers (2020-09-01T04:28:28Z)
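As background for this last entry, here is a minimal, self-contained sketch of a plain Granger-causality test using statsmodels; the synthetic two-series setup is an assumption for illustration, and this is not the paper's Hawkes-process extension or its user-feedback refinement loop.

```python
# Plain Granger-causality test: does series x help predict series y
# beyond what y's own history already explains?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=n - 1)  # y lags x by one step

# Column order is [y, x]: the test asks whether the 2nd column
# Granger-causes the 1st, at every lag up to maxlag.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
for lag, (tests, _) in results.items():
    fstat, pvalue = tests["ssr_ftest"][:2]
    print(f"lag={lag}: F={fstat:.1f}, p={pvalue:.3g}")
```

The paper's contribution is to carry this kind of test over to Hawkes processes for event sequences and to let users refine the resulting causal model interactively.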
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.