Is my Data in your AI Model? Membership Inference Test with Application to Face Images
- URL: http://arxiv.org/abs/2402.09225v1
- Date: Wed, 14 Feb 2024 15:09:01 GMT
- Title: Is my Data in your AI Model? Membership Inference Test with Application to Face Images
- Authors: Daniel DeAlcala, Aythami Morales, Gonzalo Mancera, Julian Fierrez, Ruben Tolosana, Javier Ortega-Garcia
- Abstract summary: The Membership Inference Test (MINT) aims to empirically assess if specific data was used during the training of Artificial Intelligence (AI) models.
We propose two novel MINT architectures designed to learn the distinct activation patterns that emerge when an audited model is exposed to data used during its training process.
The proposed MINT architectures are evaluated on a challenging face recognition task, considering three state-of-the-art face recognition models.
- Score: 19.49970318531581
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces the Membership Inference Test (MINT), a novel approach
that aims to empirically assess if specific data was used during the training
of Artificial Intelligence (AI) models. Specifically, we propose two novel MINT
architectures designed to learn the distinct activation patterns that emerge
when an audited model is exposed to data used during its training process. The
first architecture is based on a Multilayer Perceptron (MLP) network and the
second one is based on Convolutional Neural Networks (CNNs). The proposed MINT
architectures are evaluated on a challenging face recognition task, considering
three state-of-the-art face recognition models. Experiments are carried out
using six publicly available databases, comprising over 22 million face images
in total. In addition, different experimental scenarios are considered depending on
the context available about the AI model under test. Promising results, with up to 90%
accuracy, are achieved using our proposed MINT approach, suggesting that it is
possible to recognize whether an AI model has been trained with specific data.
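The core MINT idea described above can be sketched in code: an auxiliary classifier is trained on the audited model's internal activations, labelled as member (seen during training) or non-member, and then used to decide membership for new inputs. The sketch below is a minimal, self-contained illustration under strong assumptions: the "activations" are synthetic Gaussian vectors with a small mean shift for members (a stand-in for real activation maps from a face recognition model), and the tiny hand-rolled MLP is only loosely inspired by the paper's MLP-based MINT architecture. All names and parameters here are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_activations(n, member, dim=32):
    """Stand-in for activations extracted from an audited model.
    Assumption for illustration: member samples produce a slightly
    shifted activation pattern compared to unseen samples."""
    shift = 0.8 if member else 0.0
    return rng.normal(loc=shift, scale=1.0, size=(n, dim))

def train_mint_mlp(X, y, hidden=16, epochs=300, lr=0.1):
    """Train a one-hidden-layer MLP (tanh + sigmoid) with plain
    full-batch gradient descent on the logistic loss."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # membership probability
        g = (p - y) / n                           # d(loss)/d(logit), averaged
        gW2 = h.T @ g; gb2 = g.sum()
        gh = np.outer(g, W2) * (1.0 - h**2)       # backprop through tanh
        gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict_membership(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))) > 0.5

# Train the MINT classifier on labelled member / non-member activations.
X = np.vstack([simulate_activations(200, True), simulate_activations(200, False)])
y = np.concatenate([np.ones(200), np.zeros(200)])
params = train_mint_mlp(X, y)

# Audit fresh, held-out activations.
Xt = np.vstack([simulate_activations(100, True), simulate_activations(100, False)])
yt = np.concatenate([np.ones(100), np.zeros(100)])
accuracy = (predict_membership(params, Xt) == yt).mean()
print(round(accuracy, 2))
```

In the paper's setting the classifier input would be real activation maps from a face recognition model, and the CNN-based MINT variant would replace the MLP when those activations retain spatial structure.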
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- SDFR: Synthetic Data for Face Recognition Competition [51.9134406629509]
Large-scale face recognition datasets are collected by crawling the Internet and without individuals' consent, raising legal, ethical, and privacy concerns.
Recently, several works have proposed generating synthetic face recognition datasets to mitigate the concerns raised by web-crawled ones.
This paper presents the summary of the Synthetic Data for Face Recognition (SDFR) Competition held in conjunction with the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024).
The SDFR competition was split into two tasks, allowing participants to train face recognition systems using new synthetic datasets and/or existing ones.
arXiv Detail & Related papers (2024-04-06T10:30:31Z)
- Data Augmentation and Transfer Learning Approaches Applied to Facial Expressions Recognition [0.3481985817302898]
We propose a novel data augmentation technique that improves performance in the recognition task.
We build GAN models from scratch that generate new synthetic images for each emotion type.
On the augmented datasets, we fine-tune pretrained convolutional neural networks with different architectures.
arXiv Detail & Related papers (2024-02-15T14:46:03Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used for mitigating the greedy needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify memorized atypical samples, and then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- A Robust Framework for Deep Learning Approaches to Facial Emotion Recognition and Evaluation [0.17398560678845074]
We propose a framework in which models developed for FER can be compared and contrasted against one another.
A lightweight convolutional neural network is trained on the AffectNet dataset.
A web application is developed and deployed with our proposed framework as a proof of concept.
arXiv Detail & Related papers (2022-01-30T02:10:01Z)
- An Approach for Combining Multimodal Fusion and Neural Architecture Search Applied to Knowledge Tracing [6.540879944736641]
We propose a sequential model-based optimization approach that combines multimodal fusion and neural architecture search within one framework.
We evaluate our methods on two public real-world datasets, showing that the discovered model achieves superior performance.
arXiv Detail & Related papers (2021-11-08T13:43:46Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Application of Facial Recognition using Convolutional Neural Networks for Entry Access Control [0.0]
The paper focuses on solving the supervised classification problem of taking images of people as input and classifying the person in the image as one of the authors or not.
Two approaches are proposed: (1) building and training a neural network called WoodNet from scratch and (2) leveraging transfer learning by utilizing a network pre-trained on the ImageNet database.
The results are two models classifying the individuals in the dataset with high accuracy, achieving over 99% accuracy on held-out test data.
arXiv Detail & Related papers (2020-11-23T07:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.