Explaining Face Presentation Attack Detection Using Natural Language
- URL: http://arxiv.org/abs/2111.04862v1
- Date: Mon, 8 Nov 2021 22:55:55 GMT
- Title: Explaining Face Presentation Attack Detection Using Natural Language
- Authors: Hengameh Mirzaalian, Mohamed E. Hussein, Leonidas Spinoulas, Jonathan May, Wael Abd-Almageed
- Abstract summary: In this paper, we tackle the problem of explaining face presentation attack predictions through natural language.
Our approach passes feature representations of a deep layer of the PAD model to a language model to generate text describing the reasoning behind the PAD prediction.
We investigate how the quality of the generated explanations is affected by different loss functions, including the commonly used word-wise cross entropy loss, a sentence discriminative loss, and a sentence semantic loss.
- Score: 24.265611015740287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large number of deep neural network-based techniques have been developed to address the challenging problem of face presentation attack detection (PAD).
While such techniques have focused on improving PAD performance in terms of classification accuracy and robustness against unseen attacks and environmental conditions, little attention has been paid to the explainability of PAD predictions. In this paper, we tackle the problem of explaining PAD
predictions through natural language. Our approach passes feature
representations of a deep layer of the PAD model to a language model to
generate text describing the reasoning behind the PAD prediction. Due to the
limited amount of annotated data in our study, we apply a light-weight LSTM
network as our natural language generation model. We investigate how the
quality of the generated explanations is affected by different loss functions,
including the commonly used word-wise cross entropy loss, a sentence
discriminative loss, and a sentence semantic loss. We perform our experiments
using face images from a dataset consisting of 1,105 bona-fide and 924
presentation attack samples. Our quantitative and qualitative results show the
effectiveness of our model for generating proper PAD explanations through text
as well as the power of the sentence-wise losses. To the best of our knowledge,
this is the first introduction of a joint biometrics-NLP task. Our dataset can
be obtained through our GitHub page.
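As a concrete illustration of the pipeline described above, here is a minimal PyTorch sketch (ours, not the authors' released code): a deep-layer feature vector from a frozen PAD backbone initializes a lightweight LSTM decoder, trained with the word-wise cross-entropy loss. All dimensions, the vocabulary size, and the module names are assumptions, and the sentence discriminative/semantic losses are omitted.

```python
# Minimal sketch, assuming a frozen PAD backbone whose deep-layer features
# condition an LSTM explanation decoder. Sizes and names are illustrative;
# only the word-wise cross-entropy loss from the paper is shown.
import torch
import torch.nn as nn

class ExplanationLSTM(nn.Module):
    def __init__(self, feat_dim=2048, vocab=1000, embed=256, hidden=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)  # PAD features -> h0
        self.init_c = nn.Linear(feat_dim, hidden)  # PAD features -> c0
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab)       # per-step word logits

    def forward(self, pad_feats, tokens):
        # pad_feats: (B, feat_dim); tokens: (B, T) ground-truth explanation
        state = (self.init_h(pad_feats).unsqueeze(0),
                 self.init_c(pad_feats).unsqueeze(0))
        out, _ = self.lstm(self.embed(tokens[:, :-1]), state)
        return self.proj(out)                      # (B, T-1, vocab)

model = ExplanationLSTM()
feats = torch.randn(4, 2048)              # stand-in deep-layer PAD features
caps = torch.randint(0, 1000, (4, 12))    # stand-in explanation token ids
logits = model(feats, caps)
# Word-wise cross entropy: predict token t from the features and tokens < t.
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000),
                                   caps[:, 1:].reshape(-1))
loss.backward()
```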
Related papers
- Enhancing adversarial robustness in Natural Language Inference using explanations [41.46494686136601]
We cast the spotlight on the underexplored task of Natural Language Inference (NLI).
We validate the usage of natural language explanation as a model-agnostic defence strategy through extensive experimentation.
We research the correlation of widely used language generation metrics with human perception, in order for them to serve as a proxy towards robust NLI models.
arXiv Detail & Related papers (2024-09-11T17:09:49Z)
- DPP-Based Adversarial Prompt Searching for Language Models [56.73828162194457]
Auto-regressive Selective Replacement Ascent (ASRA) is a discrete optimization algorithm that selects prompts based on both quality and similarity with a determinantal point process (DPP).
Experimental results on six different pre-trained language models demonstrate the efficacy of ASRA for eliciting toxic content.
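As a toy illustration of the quality-plus-diversity trade-off a DPP encodes (our sketch, not the ASRA algorithm itself), a greedy routine can pick the subset with the largest kernel log-determinant; the quality scores and similarity matrix below are invented.

```python
# Toy DPP-style selection sketch: the kernel L = diag(q) S diag(q) trades
# per-item quality q against pairwise similarity S, and a greedy loop
# maximizes the log-determinant of the selected submatrix.
import numpy as np

def greedy_dpp_select(quality, sim, k):
    L = np.outer(quality, quality) * sim
    selected = []
    for _ in range(k):
        gains = {}
        for i in range(len(quality)):
            if i not in selected:
                idx = selected + [i]
                _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                gains[i] = logdet
        selected.append(max(gains, key=gains.get))
    return selected

quality = np.array([0.9, 0.85, 0.8, 0.3])    # made-up per-prompt quality
sim = np.array([[1.0, 0.95, 0.2, 0.2],       # items 0 and 1 are near-duplicates
                [0.95, 1.0, 0.2, 0.2],
                [0.2, 0.2, 1.0, 0.2],
                [0.2, 0.2, 0.2, 1.0]])
print(greedy_dpp_select(quality, sim, k=2))  # picks 0 then 2: diversity wins
```

Note how the second pick skips the slightly higher-quality near-duplicate (item 1) in favor of the dissimilar item 2, which is exactly the behavior the DPP term is meant to produce.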
arXiv Detail & Related papers (2024-03-01T05:28:06Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z) - Naturalistic Causal Probing for Morpho-Syntax [76.83735391276547]
We suggest a naturalistic strategy for input-level intervention on real-world data in Spanish.
Using our approach, we isolate morpho-syntactic features from confounders in sentences.
We apply this methodology to analyze causal effects of gender and number on contextualized representations extracted from pre-trained models.
arXiv Detail & Related papers (2022-05-14T11:47:58Z)
- Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations [11.93653349589025]
Adversarial examples are constructed by adding small non-random perturbations to correctly classified inputs.
Approaches to adversarial attacks in natural language tasks have boomed in the last five years using character-level, word-level, or phrase-level perturbations.
We propose two new reactive methods for NLP to fill this gap.
Adapted LID and MDRE obtain state-of-the-art results on character-level, word-level, and phrase-level attacks on the IMDB dataset.
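For context, here is a hedged NumPy sketch of the maximum-likelihood Local Intrinsic Dimensionality (LID) estimate that methods like "Adapted LID" build on; this is the generic estimator, not the paper's detector, and the representations below are synthetic stand-ins.

```python
# MLE of LID from an example's k nearest-neighbour distances in
# representation space (Amsaleg et al.-style estimator).
import numpy as np

def lid_mle(x, reference, k=20):
    # x: (d,) one example's representation; reference: (n, d) batch reps
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    r = dists[dists > 0][:k]                  # k nearest positive distances
    return -1.0 / np.mean(np.log(r / r[-1]))  # -(1/k * sum log(r_i/r_k))^-1

reps = np.random.randn(500, 64)  # stand-in deep representations of a batch
print(lid_mle(reps[0], reps))    # unusually high LID can flag adversarial inputs
```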
arXiv Detail & Related papers (2022-04-29T02:32:02Z)
- GPT-D: Inducing Dementia-related Linguistic Anomalies by Deliberate Degradation of Artificial Neural Language Models [7.8430387435520625]
We propose a novel method by which a Transformer DL model (GPT-2) pre-trained on general English text is paired with an artificially degraded version of itself (GPT-D).
This technique approaches state-of-the-art performance on text data from a widely used "Cookie Theft" picture description task, and unlike established alternatives also generalizes well to spontaneous conversations.
Our study is a step toward better understanding of the relationships between the inner workings of generative neural language models, the language that they produce, and the deleterious effects of dementia on human speech and language characteristics.
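A rough sketch of the pairing idea (not the authors' code): here the "deliberate degradation" is approximated by randomly zeroing half of each attention weight matrix, and the interaction readout is simplified to a perplexity ratio; both are assumptions made for illustration.

```python
# Pair GPT-2 with a crudely degraded copy ("GPT-D") and compare perplexities.
import copy
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
gptd = copy.deepcopy(gpt2)
with torch.no_grad():
    for name, p in gptd.named_parameters():
        if "attn" in name and p.dim() == 2:        # degrade attention weights
            p *= (torch.rand_like(p) > 0.5).float()

def perplexity(model, text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return torch.exp(model(ids, labels=ids).loss).item()

# A "Cookie Theft"-style description sentence as a toy input.
text = "The boy is taking a cookie from the cookie jar."
print(perplexity(gptd, text) / perplexity(gpt2, text))  # interaction signal
```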
arXiv Detail & Related papers (2022-03-25T00:25:42Z)
- ProsoSpeech: Enhancing Prosody With Quantized Vector Pre-training in Text-to-Speech [96.0009517132463]
We introduce a word-level prosody encoder, which quantizes the low-frequency band of the speech and compresses prosody attributes into the latent prosody vector (LPV).
We then introduce an LPV predictor, which predicts the LPV given the word sequence, and fine-tune it on a high-quality TTS dataset.
Experimental results show that ProsoSpeech can generate speech with richer prosody compared with baseline methods.
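A toy sketch of the vector-quantization idea behind the LPV (ours, not ProsoSpeech's implementation): each word-level prosody feature is snapped to its nearest codebook entry. The codebook size and dimensions are invented for illustration.

```python
# Nearest-neighbour vector quantization of word-level prosody features.
import torch

codebook = torch.randn(128, 32)            # 128 learned prosody codes, 32-dim

def quantize(prosody):
    # prosody: (num_words, 32) word-level prosody features
    idx = torch.cdist(prosody, codebook).argmin(dim=1)  # nearest code per word
    return codebook[idx], idx              # quantized LPVs + code indices

lpv, codes = quantize(torch.randn(7, 32))  # e.g., a 7-word utterance
```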
arXiv Detail & Related papers (2022-02-16T01:42:32Z)
- Phrase-level Adversarial Example Generation for Neural Machine Translation [75.01476479100569]
We propose a phrase-level adversarial example generation (PAEG) method to enhance the robustness of the model.
We verify our method on three benchmarks, including LDC Chinese-English, IWSLT14 German-English, and WMT14 English-German tasks.
arXiv Detail & Related papers (2022-01-06T11:00:49Z)
- Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework [17.17479625646699]
We propose a unified framework to craft textual adversarial samples.
In this paper, we instantiate our framework with an attack algorithm named Textual Projected Gradient Descent (T-PGD).
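An illustrative embedding-space PGD step in the spirit of T-PGD (not the authors' implementation): ascend the victim loss on continuous token embeddings, then project back into an L-inf ball; the real attack additionally decodes the perturbed embeddings back to discrete tokens. The model, shapes, and step sizes are toy assumptions.

```python
# One projected-gradient-ascent step on continuous token embeddings.
import torch

def pgd_step(emb, grad, orig, step=1e-2, eps=0.1):
    perturbed = emb + step * grad.sign()                     # gradient ascent
    return orig + torch.clamp(perturbed - orig, -eps, eps)   # L-inf projection

emb = torch.randn(1, 16, 768, requires_grad=True)  # toy token embeddings
loss = emb.pow(2).sum()                            # stand-in victim loss
loss.backward()
adv_emb = pgd_step(emb.detach(), emb.grad, orig=emb.detach())
```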
arXiv Detail & Related papers (2021-10-28T17:31:51Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)