An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification
- URL: http://arxiv.org/abs/2307.13428v1
- Date: Tue, 25 Jul 2023 11:51:14 GMT
- Title: An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification
- Authors: Fernando Alonso-Fernandez, Kevin Hernandez-Diaz, Jose M. Buades,
Prayag Tiwari, Josef Bigun
- Abstract summary: This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting.
- Score: 55.28171619580959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes an adaptation of the Local Interpretable Model-Agnostic
Explanations (LIME) AI method to operate under a biometric verification
setting. LIME was initially proposed for networks with the same output classes
used for training, and it employs the softmax probability to determine which
regions of the image contribute the most to classification. However, in a
verification setting, the classes to be recognized have not been seen during
training. In addition, instead of using the softmax output, face descriptors
are usually obtained from a layer before the classification layer. LIME is
adapted to achieve explainability via the cosine similarity between feature
vectors of perturbed versions of the input image. The method is showcased for
face biometrics with two CNN models based on MobileNetv2 and ResNet50.
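Below is a minimal, illustrative sketch of the adaptation described in the abstract: segment the image into superpixels, randomly occlude subsets of them, and score each region by the cosine similarity between the perturbed image's descriptor and the original descriptor. The function and parameter names (`embed`, `n_segments`, `n_samples`) are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of the verification-oriented LIME idea, assuming `embed` is
# any CNN feature extractor (e.g., MobileNetv2/ResNet50 before the classifier)
# that maps an image to a descriptor vector. Not the authors' implementation.
import numpy as np
from skimage.segmentation import slic

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def explain_verification(image, embed, n_segments=50, n_samples=200, seed=0):
    """Score each superpixel by the similarity retained when it is kept."""
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments)   # superpixel partition
    seg_ids = np.unique(segments)
    reference = embed(image)                        # descriptor of the clean image
    importance = np.zeros(len(seg_ids))
    counts = np.zeros(len(seg_ids))
    for _ in range(n_samples):
        keep = rng.random(len(seg_ids)) > 0.5       # random on/off mask per segment
        perturbed = image.copy()
        for i, s in enumerate(seg_ids):
            if not keep[i]:
                perturbed[segments == s] = 0        # occlude dropped segments
        sim = cosine(embed(perturbed), reference)   # similarity replaces softmax
        importance[keep] += sim                     # credit the kept segments
        counts[keep] += 1
    return segments, importance / np.maximum(counts, 1)
```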
Related papers
- Hybrid diffusion models: combining supervised and generative pretraining for label-efficient fine-tuning of segmentation models [55.2480439325792]
We propose a new pretext task: simultaneously performing image denoising and mask prediction on the first domain.
We show that fine-tuning a model pretrained with this approach leads to better results than fine-tuning a similar model trained with either supervised or unsupervised pretraining alone (a loss-level sketch follows this entry).
arXiv Detail & Related papers (2024-08-06T20:19:06Z)
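As referenced above, a hedged sketch of a joint denoising-plus-mask-prediction objective; the model interface, noise level, and weighting are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a joint pretext objective: denoise the input while also
# predicting a segmentation mask. `model` is assumed to return both outputs.
import torch
import torch.nn.functional as F

def hybrid_pretext_loss(model, clean_images, masks, noise_std=0.1, mask_weight=1.0):
    noisy = clean_images + noise_std * torch.randn_like(clean_images)
    denoised, mask_logits = model(noisy)
    denoise_loss = F.mse_loss(denoised, clean_images)                    # generative term
    mask_loss = F.binary_cross_entropy_with_logits(mask_logits, masks)   # supervised term
    return denoise_loss + mask_weight * mask_loss
```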
- Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification [17.373054348176932]
In this work, a post-hoc interpretation tool named feature activation map (FAM) is proposed.
FAM can interpret deep learning models that do not use fully connected (FC) layers as the classifier.
Experiments on ten deep learning models covering few-shot image classification, contrastive-learning image classification, and image retrieval demonstrate the effectiveness of the proposed FAM algorithm (an FC-free activation-map sketch follows this entry).
arXiv Detail & Related papers (2023-07-11T05:33:46Z)
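A hedged, gradient-free sketch in the spirit of an activation map for FC-free models (e.g., retrieval): weight each channel of the final convolutional feature map by its contribution to the pooled query-gallery similarity. This illustrates the idea only; it is not necessarily the paper's exact FAM formulation:

```python
# Hedged sketch: channel-weighted activation map for a model whose output is a
# pooled conv embedding rather than FC-layer logits. Shapes are illustrative.
import torch

def similarity_activation_map(feat_q, feat_g):
    """feat_q, feat_g: (C, H, W) final conv feature maps of query and gallery."""
    q = feat_q.flatten(1).mean(dim=1)       # (C,) pooled query descriptor
    g = feat_g.flatten(1).mean(dim=1)       # (C,) pooled gallery descriptor
    weights = q * g                         # per-channel similarity contribution
    cam = torch.relu((weights[:, None, None] * feat_g).sum(dim=0))  # (H, W)
    return cam / (cam.max() + 1e-8)         # normalized heatmap over the gallery image
```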
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating the inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the one implicitly assumed by a standard softmax layer (an ELBO-style sketch follows this entry).
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
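A hedged, ELBO-style sketch of the idea: treat the softmax-layer input z as a latent variable, sample it with the reparameterization trick, and add a KL term pulling q(z|x) toward a chosen prior (here a standard Gaussian). The architecture and prior are illustrative assumptions:

```python
# Hedged sketch: cross-entropy likelihood term plus a KL term that induces a
# chosen latent distribution at the softmax layer's input. Illustrative only.
import torch
import torch.nn.functional as F

def variational_classification_loss(mu, log_var, labels, classifier_head, beta=1.0):
    """mu, log_var: (B, D) encoder outputs; classifier_head maps z to logits."""
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterized sample
    ce = F.cross_entropy(classifier_head(z), labels)        # likelihood term
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1).mean()
    return ce + beta * kl                                   # ELBO-analogous objective
```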
- A Test Statistic Estimation-based Approach for Establishing Self-interpretable CNN-based Binary Classifiers [7.424003880270276]
Post-hoc interpretability methods have the limitation that they can produce plausible but different interpretations.
Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative.
arXiv Detail & Related papers (2023-03-13T05:51:35Z)
- Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems [79.58173794910631]
A trained T1 class Convolutional Neural Network (CNN) model is examined for its ability to identify motor imagery.
In theory, if the model has been trained accurately, it should be able to identify a class and label it accordingly.
The CNN model is then restored and used to identify the same class of motor imagery data from a much smaller data sample (a restore-and-evaluate sketch follows this entry).
arXiv Detail & Related papers (2022-05-31T17:09:46Z)
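A minimal restore-and-evaluate sketch for the step above, assuming a Keras workflow; the file names, shapes, and saved-model format are hypothetical:

```python
# Hedged sketch: restore a trained motor-imagery CNN and evaluate it on a much
# smaller sample of the same class of EEG data. File names are hypothetical.
import numpy as np
from tensorflow import keras

model = keras.models.load_model("motor_imagery_cnn.h5")  # restore the trained CNN
X_small = np.load("eeg_subset.npy")                      # reduced EEG sample set
y_small = np.load("eeg_subset_labels.npy")
loss, acc = model.evaluate(X_small, y_small, verbose=0)
print(f"accuracy on the reduced sample: {acc:.3f}")
```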
- Generalizing Adversarial Explanations with Grad-CAM [7.165984630575092]
We present a novel method that extends Grad-CAM from example-based explanations to a method for explaining global model behaviour.
For our experiment, we study adversarial attacks on deep models such as VGG16, ResNet50, and ResNet101, and wide models such as InceptionNetv3 and XceptionNet.
The proposed method can be used to understand adversarial attacks and to explain the behaviour of black-box CNN models for image analysis (a per-example-to-global averaging sketch follows this entry).
arXiv Detail & Related papers (2022-04-11T22:09:21Z)
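A hedged sketch of one way to move from per-example to global explanations: compute standard hook-based Grad-CAM maps and average them over many (e.g., adversarial) inputs. The averaging step is an illustrative simplification, not necessarily the paper's exact extension:

```python
# Standard hook-based Grad-CAM per batch, then an average over inputs as a
# crude "global" view. `layer` is the target conv layer (an assumption).
import torch
import torch.nn.functional as F

def grad_cam(model, layer, x, class_idx):
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model.zero_grad()
    model(x)[:, class_idx].sum().backward()      # backprop the chosen class score
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)  # per-channel weights
    return F.relu((w * feats[0]).sum(dim=1))     # (B, H, W) per-example maps

def global_map(model, layer, batch, class_idx):
    return grad_cam(model, layer, batch, class_idx).mean(dim=0)  # average over inputs
```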
- Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
The cross-domain few-shot classification task (CD-FSC) combines few-shot classification with the requirement to generalize across domains represented by different datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not behave predictably when the input resolution changes.
We employ meta learners to generate the convolutional weights of the main network for various input scales.
We further apply knowledge distillation on the fly over model predictions at different input resolutions (a hypernetwork-style sketch follows this entry).
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
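A hedged, hypernetwork-style sketch of the meta-learner idea: a small network maps the scalar input scale to the convolution weights of the main network. The layer sizes and single-layer scope are illustrative assumptions:

```python
# Hedged sketch: a meta-learner generates conv weights as a function of scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleConditionedConv(nn.Module):
    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        n = out_ch * in_ch * k * k
        self.meta = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, n))

    def forward(self, x, scale):
        # the meta-learner maps the scalar input scale to a full weight tensor
        s = torch.tensor([[float(scale)]], dtype=x.dtype, device=x.device)
        w = self.meta(s).view(self.shape)
        return F.conv2d(x, w, padding=1)
```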
- Probabilistic Object Classification using CNN ML-MAP layers [0.0]
We introduce a CNN probabilistic approach based on distributions calculated in the network's Logit layer.
The new approach shows promising performance compared to SoftMax.
arXiv Detail & Related papers (2020-05-29T13:34:15Z)
- Self-Learning AI Framework for Skin Lesion Image Segmentation and Classification [0.0]
Medical image segmentation with deep learning models requires training on large annotated image datasets.
To overcome this issue, a self-learning annotation scheme is proposed within a two-stage deep learning algorithm.
The classification results of the proposed AI framework achieved training accuracy of 93.8% and testing accuracy of 82.42%.
arXiv Detail & Related papers (2020-01-04T09:31:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.