Towards Privacy-Preserving Affect Recognition: A Two-Level Deep Learning
Architecture
- URL: http://arxiv.org/abs/2111.07344v1
- Date: Sun, 14 Nov 2021 13:52:57 GMT
- Title: Towards Privacy-Preserving Affect Recognition: A Two-Level Deep Learning
Architecture
- Authors: Jimiama M. Mase, Natalie Leesakul, Fan Yang, Grazziela P. Figueredo,
Mercedes Torres Torres
- Abstract summary: We propose a two-level deep learning architecture for affect recognition.
The architecture consists of recurrent neural networks that capture the temporal relationships amongst the features and predict valence and arousal affective states.
- Score: 2.9392867898439006
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatically understanding and recognising human affective states using
images and computer vision can improve human-computer and human-robot
interaction. However, privacy has become an issue of great concern, as the
identities of people used to train affective models can be exposed in the
process. For instance, malicious individuals could exploit images from users
and assume their identities. In addition, affect recognition using images can
lead to discriminatory and algorithmic bias, as certain information such as
race, gender, and age could be assumed based on facial features. Possible
solutions to protect users' privacy and avoid misuse of their identities are
to: (1) extract anonymised facial features, namely action units (AUs), from a
database of images, discard the images, and use the AUs for processing and
training; and (2) use federated learning (FL), i.e. process raw images on
users' local machines (local processing) and send the locally trained models
to the main processing machine for aggregation (central processing). In this
paper, we
propose a two-level deep learning architecture for affect recognition that uses
AUs in level 1 and FL in level 2 to protect users' identities. The architecture
consists of recurrent neural networks to capture the temporal relationships
amongst the features and predict valence and arousal affective states. In our
experiments, we evaluate the performance of our privacy-preserving architecture
using different variations of recurrent neural networks on RECOLA, a
comprehensive multimodal affective database. Our results show state-of-the-art
performance of $0.426$ for valence and $0.401$ for arousal using the
Concordance Correlation Coefficient evaluation metric, demonstrating the
feasibility of developing affect recognition models that are both accurate and
privacy-preserving.
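To make the two levels concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' released code): a GRU regressor over sequences of action-unit features that predicts per-frame valence and arousal (level 1), the Concordance Correlation Coefficient used as the evaluation metric, and a FedAvg-style weight-averaging step standing in for the federated aggregation (level 2). The AU count, hidden size, and the use of plain FedAvg are assumptions made for illustration.

```python
# Hypothetical sketch of the two-level setup described in the abstract.
import copy
from typing import List

import torch
import torch.nn as nn


class AUAffectRNN(nn.Module):
    """Level 1: recurrent regressor over sequences of AU intensities."""

    def __init__(self, num_aus: int = 17, hidden_size: int = 64):  # assumed sizes
        super().__init__()
        self.rnn = nn.GRU(num_aus, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # [valence, arousal] per frame

    def forward(self, au_seq: torch.Tensor) -> torch.Tensor:
        # au_seq: (batch, time, num_aus) -> (batch, time, 2)
        out, _ = self.rnn(au_seq)
        return self.head(out)


def ccc(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Concordance Correlation Coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    pred, gold = pred.flatten().float(), gold.flatten().float()
    pred_mean, gold_mean = pred.mean(), gold.mean()
    cov = ((pred - pred_mean) * (gold - gold_mean)).mean()
    pred_var = ((pred - pred_mean) ** 2).mean()
    gold_var = ((gold - gold_mean) ** 2).mean()
    return 2 * cov / (pred_var + gold_var + (pred_mean - gold_mean) ** 2)


def fedavg(local_models: List[nn.Module]) -> nn.Module:
    """Level 2 (assumed FedAvg-style aggregation): average locally trained
    weights on the central machine, so raw images never leave users' devices."""
    global_model = copy.deepcopy(local_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in local_models]).mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model


if __name__ == "__main__":
    # Toy check with random sequences standing in for RECOLA AU features.
    clients = [AUAffectRNN() for _ in range(3)]
    merged = fedavg(clients)
    x = torch.randn(4, 100, 17)                      # 4 sequences, 100 frames
    preds = merged(x)
    print(preds.shape)                               # torch.Size([4, 100, 2])
    print(ccc(preds[..., 0], torch.randn(4, 100)))   # CCC vs. random targets
```

In this arrangement, only the weights produced by each client's local training travel to the central machine, which matches the local/central processing split described in the abstract; the AU extraction step that would precede level 1 is omitted here.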
Related papers
- CLIP Unreasonable Potential in Single-Shot Face Recognition [0.0]
Face recognition is a core task in computer vision designed to identify and authenticate individuals by analyzing facial patterns and features.
Contrastive Language-Image Pretraining (CLIP), a model recently developed by OpenAI, has shown promising advancements.
CLIP links natural language processing with vision tasks, allowing it to generalize across modalities.
arXiv Detail & Related papers (2024-11-19T08:23:52Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Introducing Representations of Facial Affect in Automated Multimodal Deception Detection [18.16596562087374]
Automated deception detection systems can enhance health, justice, and security in society.
This paper presents a novel analysis of the power of dimensional representations of facial affect for automated deception detection.
We used a video dataset of people communicating truthfully or deceptively in real-world, high-stakes courtroom situations.
arXiv Detail & Related papers (2020-08-31T05:12:57Z)
- CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks [12.20367903755194]
CIAGAN is a model for image and video anonymization based on conditional generative adversarial networks.
Our model is able to remove the identifying characteristics of faces and bodies while producing high-quality images and videos.
arXiv Detail & Related papers (2020-05-19T15:56:08Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- An adversarial learning framework for preserving users' anonymity in face-based emotion recognition [6.9581841997309475]
This paper proposes an adversarial learning framework which relies on a convolutional neural network (CNN) architecture trained through an iterative procedure.
Results indicate that the proposed approach can learn a convolutional transformation for preserving emotion recognition accuracy and degrading face identity recognition.
arXiv Detail & Related papers (2020-01-16T22:45:52Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)