CLIPC8: Face liveness detection algorithm based on image-text pairs and
contrastive learning
- URL: http://arxiv.org/abs/2311.17583v1
- Date: Wed, 29 Nov 2023 12:21:42 GMT
- Authors: Xu Liu, Shu Zhou, Yurong Song, Wenzhe Luo, Xin Zhang
- Abstract summary: We propose a face liveness detection method based on image-text pairs and contrastive learning.
The proposed method is capable of effectively detecting specific liveness attack behaviors in certain scenarios.
It is also effective in detecting traditional liveness attack methods, such as printing photo attacks and screen remake attacks.
- Score: 3.90443799528247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition technology is widely used in the financial field, and
various types of liveness attack behaviors need to be addressed. Existing
liveness detection algorithms are trained on specific training datasets and
tested on testing datasets, but their performance and robustness in
transferring to unseen datasets are relatively poor. To tackle this issue, we
propose a face liveness detection method based on image-text pairs and
contrastive learning, dividing liveness attack problems in the financial field
into eight categories and using text information to describe the images of
these eight types of attacks. The text encoder and image encoder are used to
extract feature vector representations for the classification description text
and face images, respectively. By maximizing the similarity of positive samples
and minimizing the similarity of negative samples, the model learns shared
representations between images and texts. The proposed method is capable of
effectively detecting specific liveness attack behaviors in certain scenarios,
such as those occurring in dark environments or involving the tampering of ID
card photos. Additionally, it is also effective in detecting traditional
liveness attack methods, such as printing photo attacks and screen remake
attacks. The zero-shot face liveness detection capability on five public
datasets, NUAA, CASIA-FASD, Replay-Attack, OULU-NPU, and MSU-MFSD, also
reaches the level of commercial algorithms. The detection capability of the
proposed algorithm was verified on five types of testing datasets; the results
show that the method outperformed commercial algorithms, with detection rates
reaching 100% on multiple datasets. This demonstrates the effectiveness and
robustness of introducing image-text pairs and contrastive learning into
liveness detection tasks.
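The training objective described in the abstract, matching each face image to one of eight attack-category descriptions by maximizing positive-pair similarity and minimizing negative-pair similarity, can be sketched as a symmetric contrastive (InfoNCE-style) loss. This is a minimal NumPy sketch, not the authors' implementation; the category texts and the temperature value are illustrative assumptions.

```python
import numpy as np

# Hypothetical class-description texts for the eight attack categories;
# the paper's exact wording is not given here, so these are assumptions.
ATTACK_TEXTS = [
    "a live face", "a printed photo attack", "a screen remake attack",
    "a tampered ID card photo", "a face in a dark environment",
    "a 3D mask attack", "a cutout photo attack", "a video replay attack",
]

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE loss over matched image-text pairs.

    image_feats, text_feats: (N, D) encoder outputs where row i of each
    matrix forms a positive pair; all other pairings are negatives.
    """
    img = l2_normalize(image_feats)
    txt = l2_normalize(text_feats)
    logits = img @ txt.T / temperature           # (N, N) similarity matrix
    labels = np.arange(len(logits))              # positives on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In use, `image_feats` would come from the image encoder applied to face crops and `text_feats` from the text encoder applied to the matched category descriptions; well-aligned pairs drive the loss toward zero.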
Related papers
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z) - Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z) - Counterfactual Image Generation for adversarially robust and
interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake".
We show how the model exhibits improved robustness to adversarial attacks, and we show how the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
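The combined classifier-discriminator in this entry can be sketched as a single K+1-way head, where the extra class plays the role of the discriminator's "fake" output and its probability serves as the "fakeness" uncertainty value. A minimal sketch under assumed shapes; the actual method trains this head jointly with an image-to-image translation GAN.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class CombinedHead:
    """K real classes plus one extra 'fake' class, so one linear head
    acts as both classifier and discriminator (weight shapes and the
    random initialization are illustrative, not from the paper)."""

    def __init__(self, dim, num_classes, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.01, size=(dim, num_classes + 1))

    def predict(self, feats):
        probs = softmax(feats @ self.W)
        fakeness = probs[:, -1]        # probability of the "fake" class
        class_probs = probs[:, :-1]    # distribution over real classes
        return class_probs, fakeness
```

The `fakeness` value can then be thresholded or reported directly as a per-prediction uncertainty measure, matching the role the entry describes for the discriminator output.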
arXiv Detail & Related papers (2023-10-01T18:50:29Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - A Study for Universal Adversarial Attacks on Texture Recognition [19.79803434998116]
We show that there exist small image-agnostic/universal perturbations that can fool deep learning models, with testing fooling rates of more than 80% on all tested texture datasets.
The computed perturbations using various attack methods on the tested datasets are generally quasi-imperceptible, containing structured patterns with low, middle and high frequency components.
arXiv Detail & Related papers (2020-10-04T08:11:11Z) - Determining Sequence of Image Processing Technique (IPT) to Detect
Adversarial Attacks [4.431353523758957]
We propose an evolutionary approach to automatically determine Image Processing Techniques Sequence (IPTS) for detecting malicious inputs.
A detection framework based on a genetic algorithm (GA) is developed to find the optimal IPTS.
A set of IPTS, selected dynamically at testing time, works as a filter against the adversarial attack.
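The GA-based search for an optimal IPT sequence can be sketched as follows. The technique names, pool, and fitness function are hypothetical placeholders; in the paper the fitness would measure the detection performance of the filtered pipeline against adversarial inputs.

```python
import random

# Hypothetical pool of image-processing techniques; names are illustrative.
IPT_POOL = ["median_blur", "bit_depth_reduce", "jpeg_compress",
            "gaussian_blur", "histogram_eq", "sharpen"]

def evolve_ipts(fitness, seq_len=3, pop_size=20, generations=30, seed=0):
    """Tiny genetic algorithm: each individual is a sequence of IPT names;
    `fitness` scores a sequence (e.g. adversarial-detection accuracy on a
    validation set, not implemented here)."""
    rng = random.Random(seed)
    pop = [[rng.choice(IPT_POOL) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, seq_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(seq_len)] = rng.choice(IPT_POOL)
            children.append(child)
        pop = parents + children                 # elitist: best survive
    return max(pop, key=fitness)
```

Because the top half of each generation is carried over unchanged, the best fitness found never decreases; the returned sequence is the best IPTS discovered within the generation budget.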
arXiv Detail & Related papers (2020-07-01T08:59:14Z) - Face Anti-Spoofing by Learning Polarization Cues in a Real-World
Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method in a real-world scenario by automatically learning the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.