Attribute Inference Attack of Speech Emotion Recognition in Federated
Learning Settings
- URL: http://arxiv.org/abs/2112.13416v1
- Date: Sun, 26 Dec 2021 16:50:42 GMT
- Title: Attribute Inference Attack of Speech Emotion Recognition in Federated
Learning Settings
- Authors: Tiantian Feng and Hanieh Hashemi and Rajat Hebbar and Murali Annavaram
and Shrikanth S. Narayanan
- Abstract summary: Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
- Score: 56.93025161787725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speech emotion recognition (SER) processes speech signals to detect and
characterize expressed perceived emotions. Many SER application systems
acquire and transmit speech data collected at the client side to remote cloud
platforms for inference and decision making. However, speech data carry rich
information not only about emotions conveyed in vocal expressions, but also
other sensitive demographic traits such as gender, age and language background.
Consequently, it is desirable for SER systems to have the ability to classify
emotion constructs while preventing unintended/improper inferences of sensitive
and demographic information. Federated learning (FL) is a distributed machine
learning paradigm that coordinates clients to train a model collaboratively
without sharing their local data. This training approach appears secure and can
improve privacy for SER. However, recent works have demonstrated that FL
approaches are still vulnerable to various privacy attacks like reconstruction
attacks and membership inference attacks. Although most of these studies have
focused on computer vision applications, similar information leakage exists in
SER systems trained using FL. To assess the information leakage of
SER systems trained using FL, we propose an attribute inference attack
framework that infers sensitive attribute information of the clients from
shared gradients or model parameters, corresponding to the FedSGD and the
FedAvg training algorithms, respectively. As a use case, we empirically
evaluate our approach for predicting the client's gender information using
three SER benchmark datasets: IEMOCAP, CREMA-D, and MSP-Improv. We show that
the attribute inference attack is achievable for SER systems trained using FL.
We further identify that most information leakage possibly comes from the first
layer in the SER model.
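To make the attack concrete: under FedSGD the server observes each client's gradients directly, while under FedAvg it observes updated model parameters, from which a pseudo-gradient can be reconstructed by differencing consecutive snapshots. Either way, the shared first-layer updates become features for an ordinary supervised classifier. Below is a minimal Python sketch of that pattern, not the authors' exact pipeline; the layer shape, client learning rate, logistic-regression attacker, and synthetic stand-in data are all assumptions.
```python
# Minimal sketch of the attribute inference pattern described above. The
# first-layer shape, client learning rate, logistic-regression attacker, and
# synthetic stand-in data are assumptions, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def pseudo_gradient(w_global, w_client, lr=0.01):
    """FedAvg clients share updated parameters rather than gradients, so the
    attacker differences consecutive snapshots to recover a pseudo-gradient.
    Under FedSGD the shared gradient itself would be used as the feature."""
    return (w_global - w_client) / lr

# Stand-in for flattened first-layer updates observed from 200 clients; in the
# real attack these come from the FL training rounds.
first_layer_updates = rng.normal(size=(200, 40 * 128))
gender_labels = rng.integers(0, 2, size=200)  # synthetic binary labels

X_tr, X_te, y_tr, y_te = train_test_split(
    first_layer_updates, gender_labels, test_size=0.25, random_state=0)

attacker = LogisticRegression(max_iter=1000)  # the attribute classifier
attacker.fit(X_tr, y_tr)
print("attack accuracy:", accuracy_score(y_te, attacker.predict(X_te)))
```
On these random placeholders the accuracy sits near chance; the paper's finding is that on real SER updates, particularly those of the first layer, such a classifier predicts gender well above chance.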
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing data for training.
The paper proposes a novel federated face forgery detection framework that learns personalized representations.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition [17.568724398229232]
Speech emotion recognition (SER) has gained significant attention owing to its applications in fields such as mental health, education, and human-computer interaction.
This study proposes an iterative feature boosting approach for SER that emphasizes feature relevance and explainability to enhance machine learning model performance.
The effectiveness of the proposed method is validated on the SER benchmarks of the Toronto emotional speech set (TESS), Berlin Database of Emotional Speech (EMO-DB), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Surrey Audio-Visual Expressed Emotion (SAVEE) datasets.
arXiv Detail & Related papers (2024-06-01T00:39:55Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without directly sharing data with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples; a minimal gradient-inversion sketch appears after this list.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Semi-FedSER: Semi-supervised Learning for Speech Emotion Recognition On Federated Learning using Multiview Pseudo-Labeling [43.17379040854574]
Speech Emotion Recognition (SER) applications are frequently associated with privacy concerns.
Federated learning (FL) is a distributed machine learning algorithm that coordinates clients to train a model collaboratively without sharing local data.
In this work, we propose a semi-supervised federated learning framework, Semi-FedSER, that utilizes both labeled and unlabeled data samples to address the challenge of limited data samples in FL.
arXiv Detail & Related papers (2022-03-15T21:50:43Z)
- Privacy-preserving Speech Emotion Recognition through Semi-Supervised Federated Learning [0.8508198765617195]
Speech Emotion Recognition (SER) refers to the recognition of human emotions from natural speech.
Existing SER approaches are largely centralized, without considering users' privacy.
We present a privacy-preserving and data-efficient SER approach by utilizing the concept of Federated Learning.
arXiv Detail & Related papers (2022-02-05T18:30:23Z)
- An Attribute-Aligned Strategy for Learning Speech Representation [57.891727280493015]
We propose an attribute-aligned learning strategy to derive speech representations that can flexibly address these issues via an attribute-selection mechanism.
Specifically, we propose a layered-representation variational autoencoder (LR-VAE), which factorizes speech representation into attribute-sensitive nodes.
Our proposed method achieves competitive performance on identity-free SER and better performance on emotionless speaker verification (SV).
arXiv Detail & Related papers (2021-06-05T06:19:14Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even the local parameter updates shared from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
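As a companion to the Client-side Gradient Inversion entry above, here is a minimal DLG-style gradient inversion sketch: the attacker optimizes dummy inputs and labels until their gradients match the gradients the victim shared. The tiny model, data shapes, and optimizer settings are illustrative assumptions; this shows the generic GIA pattern, not that paper's client-side poisoning variant.
```python
# Minimal DLG-style gradient inversion sketch. The tiny model, random data,
# and LBFGS settings are illustrative assumptions, not the CGI paper's method.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
loss_fn = torch.nn.CrossEntropyLoss()

# The victim's private sample and the gradient it shares in FL.
x_true, y_true = torch.randn(1, 16), torch.tensor([1])
shared_grads = torch.autograd.grad(
    loss_fn(model(x_true), y_true), model.parameters())

# The attacker optimizes dummy data and labels to match the shared gradient.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Soft-label cross entropy so the unknown label is optimizable too.
    dummy_loss = torch.mean(torch.sum(
        -torch.softmax(y_dummy, -1) * torch.log_softmax(model(x_dummy), -1), -1))
    dummy_grads = torch.autograd.grad(
        dummy_loss, model.parameters(), create_graph=True)
    diff = sum(((dg - sg) ** 2).sum()
               for dg, sg in zip(dummy_grads, shared_grads))
    diff.backward()
    return diff

for _ in range(30):
    opt.step(closure)
print("input reconstruction error:", torch.dist(x_dummy.detach(), x_true).item())
```
On a small model like this, x_dummy often converges toward the private sample, which is exactly the leakage that the defenses surveyed in these papers aim to prevent.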
This list is automatically generated from the titles and abstracts of the papers on this site.