Towards interpretable-by-design deep learning algorithms
- URL: http://arxiv.org/abs/2311.11396v1
- Date: Sun, 19 Nov 2023 18:40:49 GMT
- Title: Towards interpretable-by-design deep learning algorithms
- Authors: Plamen Angelov, Dmitry Kangin, Ziyang Zhang
- Abstract summary: The proposed framework, named IDEAL, recasts the standard supervised classification problem as a function of similarity to a set of prototypes derived from the training data.
We show that one can turn such DL models into conceptually simpler, explainable-through-prototypes ones.
- Score: 11.154826546951414
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The proposed framework named IDEAL (Interpretable-by-design DEep learning
ALgorithms) recasts the standard supervised classification problem into a
function of similarity to a set of prototypes derived from the training data,
while taking advantage of existing latent spaces of large neural networks
forming so-called Foundation Models (FM). This addresses the issue of
explainability (stage B) while retaining the benefits from the tremendous
achievements offered by DL models (e.g., visual transformers, ViT) pre-trained
on huge data sets such as IG-3.6B + ImageNet-1K or LVD-142M (stage A). We show
that one can turn such DL models into conceptually simpler,
explainable-through-prototypes ones.
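As a rough illustration of the prototype formulation (not the authors' exact pipeline), the sketch below classifies queries by cosine similarity to a handful of per-class training embeddings taken from a frozen foundation-model feature space; the feature extractor, prototype-selection rule, and similarity measure are illustrative assumptions.
```python
# Minimal sketch of prototype-based classification over frozen foundation-model
# features, in the spirit of the framework described above. The prototype
# selection (random per-class sampling) and cosine similarity are assumptions
# for illustration only.
import numpy as np

def class_prototypes(features, labels, per_class=5, seed=0):
    """Pick a few training embeddings per class to serve as prototypes."""
    rng = np.random.default_rng(seed)
    prototypes, proto_labels = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        chosen = rng.choice(idx, size=min(per_class, idx.size), replace=False)
        prototypes.append(features[chosen])
        proto_labels.append(np.full(len(chosen), c))
    return np.vstack(prototypes), np.concatenate(proto_labels)

def predict(query_features, prototypes, proto_labels):
    """Classify each query by cosine similarity to its nearest prototype."""
    q = query_features / np.linalg.norm(query_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = q @ p.T                      # shape: (n_queries, n_prototypes)
    return proto_labels[np.argmax(sims, axis=1)]

# Usage: in practice `train_feats` / `test_feats` would be embeddings from a
# frozen ViT foundation model, extracted once with no finetuning. Random data
# stands in here so the sketch runs on its own.
train_feats = np.random.randn(100, 384)
train_labels = np.random.randint(0, 5, 100)
test_feats = np.random.randn(10, 384)
protos, proto_labels = class_prototypes(train_feats, train_labels)
print(predict(test_feats, protos, proto_labels))
```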
The key findings can be summarized as follows: (1) the proposed models are
interpretable through prototypes, mitigating the issue of confounded
interpretations, (2) the proposed IDEAL framework circumvents the issue of
catastrophic forgetting, allowing efficient class-incremental learning, and (3)
the proposed IDEAL approach demonstrates that ViT architectures narrow the gap
between finetuned and non-finetuned models, allowing for transfer learning in a
fraction of the time without finetuning the feature space on a target dataset
with iterative supervised methods.
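Finding (2) follows naturally from the prototype formulation: because the feature extractor stays frozen and classification reduces to a lookup over prototypes, adding a class only appends entries to the prototype memory. The hypothetical helper below, building on the sketch above, illustrates the idea; it is an assumption-level illustration, not the authors' update rule.
```python
# Hypothetical illustration of why a prototype memory sidesteps catastrophic
# forgetting: adding a class only appends rows to the prototype set, while the
# frozen feature space and all earlier prototypes are untouched, so predictions
# for previously learned classes cannot drift.
import numpy as np

def add_class(prototypes, proto_labels, new_feats, new_label, per_class=5, seed=0):
    """Class-incremental update: append a few embeddings of the new class."""
    rng = np.random.default_rng(seed)
    chosen = rng.choice(len(new_feats), size=min(per_class, len(new_feats)),
                        replace=False)
    prototypes = np.vstack([prototypes, new_feats[chosen]])
    proto_labels = np.concatenate([proto_labels, np.full(len(chosen), new_label)])
    return prototypes, proto_labels

# Usage, reusing `protos` / `proto_labels` from the previous sketch and
# embeddings of an unseen class extracted with the same frozen backbone:
#   protos, proto_labels = add_class(protos, proto_labels, new_class_feats, 5)
```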
Related papers
- Idempotent Unsupervised Representation Learning for Skeleton-Based Action Recognition [13.593511876719367]
We propose a novel skeleton-based idempotent generative model (IGM) for unsupervised representation learning.
Our experiments on benchmark datasets, NTU RGB+D and PKUMMD, demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2024-10-27T06:29:04Z) - Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z) - High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z) - Learning Transferable Conceptual Prototypes for Interpretable
Unsupervised Domain Adaptation [79.22678026708134]
In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z) - Generalized Face Forgery Detection via Adaptive Learning for Pre-trained Vision Transformer [54.32283739486781]
We present a Forgery-aware Adaptive Vision Transformer (FA-ViT) under the adaptive learning paradigm.
FA-ViT achieves 93.83% and 78.32% AUC scores on Celeb-DF and DFDC datasets in the cross-dataset evaluation.
arXiv Detail & Related papers (2023-09-20T06:51:11Z) - Unbiased Learning of Deep Generative Models with Structured Discrete
Representations [7.9057320008285945]
We propose novel algorithms for learning structured variational autoencoders (SVAEs)
We are the first to demonstrate the SVAE's ability to handle multimodal uncertainty when data is missing by incorporating discrete latent variables.
Our memory-efficient implicit differentiation scheme makes the SVAE tractable to learn via gradient descent, while demonstrating robustness to incomplete optimization.
arXiv Detail & Related papers (2023-06-14T03:59:21Z) - GSMFlow: Generation Shifts Mitigating Flow for Generalized Zero-Shot
Learning [55.79997930181418]
Generalized Zero-Shot Learning aims to recognize images from both the seen and unseen classes by transferring semantic knowledge from seen to unseen classes.
It is a promising solution to take advantage of generative models to hallucinate realistic unseen samples based on the knowledge learned from the seen classes.
We propose a novel flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation.
arXiv Detail & Related papers (2022-07-05T04:04:37Z) - Prototypical Model with Novel Information-theoretic Loss Function for
Generalized Zero Shot Learning [3.870962269034544]
Generalized zero shot learning (GZSL) remains a technical challenge in deep learning.
We address the quantification of the knowledge transfer and semantic relation from an information-theoretic viewpoint.
We propose three information-theoretic loss functions for a deterministic GZSL model.
arXiv Detail & Related papers (2021-12-06T16:01:46Z) - Towards Robust and Adaptive Motion Forecasting: A Causal Representation
Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z) - Inducing Semantic Grouping of Latent Concepts for Explanations: An
Ante-Hoc Approach [18.170504027784183]
We show that exploiting the latent space and properly modifying different parts of the model can result in better explanations as well as superior predictive performance.
We also propose a technique that uses two different self-supervision methods to extract meaningful concepts related to the type of self-supervision considered.
arXiv Detail & Related papers (2021-08-25T07:09:57Z) - Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)