Interpretable Artificial Intelligence through the Lens of Feature
Interaction
- URL: http://arxiv.org/abs/2103.03103v1
- Date: Mon, 1 Mar 2021 23:23:10 GMT
- Title: Interpretable Artificial Intelligence through the Lens of Feature Interaction
- Authors: Michael Tsang, James Enouen, Yan Liu
- Abstract summary: This work first explains the historical and modern importance of feature interactions and then surveys the modern interpretability methods which do explicitly consider feature interactions.
This survey aims to bring to light the importance of feature interactions in the larger context of machine learning interpretability.
- Score: 11.217688723644454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretation of deep learning models is a very challenging problem because
of their large number of parameters, complex connections between nodes, and
unintelligible feature representations. Despite this, many view
interpretability as a key solution to trustworthiness, fairness, and safety,
especially as deep learning is applied to more critical decision tasks like
credit approval, job screening, and recidivism prediction. There is an
abundance of good research providing interpretability to deep learning models;
however, many of the commonly used methods do not consider a phenomenon called
"feature interaction." This work first explains the historical and modern
importance of feature interactions and then surveys the modern interpretability
methods which do explicitly consider feature interactions. This survey aims to
bring to light the importance of feature interactions in the larger context of
machine learning interpretability, especially in a modern context where deep
learning models heavily rely on feature interactions.
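For intuition, here is a minimal sketch (not from the survey; the toy model, step size, and function names are illustrative assumptions) of what a feature interaction is and why purely additive, per-feature attributions can miss it: a mixed second-order finite difference of the model output is non-zero only when two features influence the prediction jointly rather than additively.

```python
def pairwise_interaction(f, x1, x2, eps=1e-3):
    # Mixed second-order finite difference of f with respect to (x1, x2).
    # It is ~0 when f is additive in x1 and x2, and non-zero when the two
    # features interact (their joint effect is not the sum of their
    # individual effects).
    return (f(x1 + eps, x2 + eps) - f(x1 + eps, x2 - eps)
            - f(x1 - eps, x2 + eps) + f(x1 - eps, x2 - eps)) / (4 * eps ** 2)

def toy_model(x1, x2):
    # Toy model with a pure interaction: neither feature matters alone.
    return x1 * x2

if __name__ == "__main__":
    print(pairwise_interaction(toy_model, 0.5, -1.2))            # ~1.0: interaction present
    print(pairwise_interaction(lambda a, b: a + b, 0.5, -1.2))    # ~0.0: purely additive
```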
Related papers
- Data Science Principles for Interpretable and Explainable AI [0.7581664835990121]
Interpretable and interactive machine learning aims to make complex models more transparent and controllable.
This review synthesizes key principles from the growing literature in this field.
arXiv Detail & Related papers (2024-05-17T05:32:27Z)
- Heterogeneous Contrastive Learning for Foundation Models and Beyond [73.74745053250619]
In the era of big data and Artificial Intelligence, an emerging paradigm is to utilize contrastive self-supervised learning to model large-scale heterogeneous data.
This survey critically evaluates the current landscape of heterogeneous contrastive learning for foundation models.
arXiv Detail & Related papers (2024-03-30T02:55:49Z)
- Enhancing HOI Detection with Contextual Cues from Large Vision-Language Models [56.257840490146]
ConCue is a novel approach for improving visual feature extraction in human-object interaction (HOI) detection.
We develop a transformer-based feature extraction module with a multi-tower architecture that integrates contextual cues into both instance and interaction detectors.
arXiv Detail & Related papers (2023-11-26T09:11:32Z)
- DeepSI: Interactive Deep Learning for Semantic Interaction [5.188825486231326]
We propose a framework that integrates deep learning into the human-in-the-loop interactive sensemaking pipeline.
Deep learning extracts meaningful representations from raw data, which improves semantic interaction inference.
In turn, semantic interactions are exploited to fine-tune the deep learning representations, further improving semantic interaction inference.
arXiv Detail & Related papers (2023-05-26T18:05:57Z)
- Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z)
- Relate to Predict: Towards Task-Independent Knowledge Representations for Reinforcement Learning [11.245432408899092]
Reinforcement learning can enable agents to learn complex tasks, but the resulting knowledge is difficult to interpret and reuse across tasks.
In this paper, we introduce an inductive bias for explicit object-centered knowledge separation.
We show that the degree of explicitness in knowledge separation correlates with faster learning, better accuracy, better generalization, and better interpretability.
arXiv Detail & Related papers (2022-12-10T13:33:56Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
The lack of interpretability, robustness, and out-of-distribution generalization is becoming a challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have seen great effort devoted to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Interpreting and improving deep-learning models with reality checks [13.287382944078562]
This chapter covers recent work aiming to interpret models by attributing importance to features and feature groups for a single prediction.
We show how these attributions can be used to directly improve the generalization of a neural network or to distill it into a simple model.
arXiv Detail & Related papers (2021-08-16T00:58:15Z)
- Evaluating the Interpretability of Generative Models by Interactive Reconstruction [30.441247705313575]
We introduce a task to quantify the human-interpretability of generative model representations.
We find performance on this task much more reliably differentiates entangled and disentangled models than baseline approaches.
arXiv Detail & Related papers (2021-02-02T02:38:14Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.