A GAN-based Approach for Mitigating Inference Attacks in Smart Home Environment
- URL: http://arxiv.org/abs/2011.06725v1
- Date: Fri, 13 Nov 2020 02:14:32 GMT
- Title: A GAN-based Approach for Mitigating Inference Attacks in Smart Home Environment
- Authors: Olakunle Ibitoye, Ashraf Matrawy, and M. Omair Shafiq
- Abstract summary: In this study, we explore the problem of adversaries spying on smart home users to infer sensitive information with the aid of machine learning techniques.
We propose a Generative Adversarial Network (GAN) based approach for privacy preservation in smart homes.
- Score: 3.785123406103385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of smart, connected, always-listening devices has
introduced significant privacy risks to users in a smart home environment.
Beyond the notable risk of eavesdropping, intruders can adopt machine learning
techniques to infer sensitive information from audio recordings on these
devices, resulting in a new dimension of privacy concerns and attack vectors for
smart home users. Techniques such as sound masking and microphone jamming
have been used effectively to prevent eavesdroppers from listening in on
private conversations. In this study, we explore the problem of adversaries
spying on smart home users to infer sensitive information with the aid of
machine learning techniques. We then analyze the role of randomness in the
effectiveness of sound masking for mitigating sensitive information leakage. We
propose a Generative Adversarial Network (GAN) based approach for privacy
preservation in smart homes which generates random noise to distort the
unwanted machine learning-based inference. Our experimental results demonstrate
that GANs can be used to generate more effective sound masking noise signals
which exhibit more randomness and effectively mitigate deep learning-based
inference attacks while preserving the semantics of the audio samples.
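The paper does not include code, but its core idea, training a GAN whose generator produces masking noise that is mixed into the microphone signal, can be sketched as below. Everything in the sketch (architecture, latent size, the use of white-noise segments as the "real" class, the mixing coefficient) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a small GAN whose generator
# learns to produce one-second masking-noise segments. The "real" examples are
# white-noise segments, so the generator is pushed toward output that is hard
# to distinguish from truly random noise.
import torch
import torch.nn as nn

SAMPLE_RATE = 16000          # assumed audio sample rate
SEG_LEN = SAMPLE_RATE        # one-second noise segments
LATENT_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, 2048), nn.ReLU(),
            nn.Linear(2048, SEG_LEN), nn.Tanh(),   # waveform in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEG_LEN, 2048), nn.LeakyReLU(0.2),
            nn.Linear(2048, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # "Real" samples: uniformly random white-noise segments.
    real = torch.empty(32, SEG_LEN).uniform_(-1, 1)
    z = torch.randn(32, LATENT_DIM)
    fake = G(z)

    # Discriminator: separate white noise from generated noise.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

# At deployment the generated noise would be mixed into the microphone signal, e.g.
# masked_audio = clean_audio + 0.1 * G(torch.randn(1, LATENT_DIM))
```

In this reading, the discriminator pushes the generator toward noise it cannot tell apart from truly random noise, which is one way to interpret the paper's claim that GAN-generated masking signals exhibit more randomness.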
Related papers
- Safeguarding Voice Privacy: Harnessing Near-Ultrasonic Interference To Protect Against Unauthorized Audio Recording [0.0]
This paper investigates the susceptibility of automatic speech recognition (ASR) algorithms to interference from near-ultrasonic noise.
We expose a critical vulnerability in the most common microphones used in modern voice-activated devices, which inadvertently demodulate near-ultrasonic frequencies into the audible spectrum.
Our findings highlight the need to develop robust countermeasures to protect voice-activated systems from malicious exploitation of this vulnerability.
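For intuition only, a near-ultrasonic interference signal of the kind this paper studies can be synthesized as a high-frequency sine tone. The sketch below uses assumed values (frequency, amplitude, sample rate) that are not taken from the paper, and it only shows the signal construction; the demodulation into the audible band depends on the microphone hardware.

```python
# Hypothetical illustration (not from the paper): synthesizing a near-ultrasonic
# tone that is inaudible to most listeners but which common MEMS microphones can
# demodulate into the audible band, degrading ASR.
import numpy as np

SAMPLE_RATE = 48000            # must exceed 2 x tone frequency (Nyquist)
DURATION_S = 2.0
TONE_HZ = 21000                # near-ultrasonic carrier, assumed value

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * TONE_HZ * t)

# Mix the interference with a (placeholder) speech signal before playback/recording.
speech = np.zeros_like(tone)   # stand-in for a real recording
protected = np.clip(speech + tone, -1.0, 1.0)
```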
arXiv Detail & Related papers (2024-04-07T00:49:19Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Adversarial Representation Learning for Robust Privacy Preservation in Audio [11.409577482625053]
Sound event detection systems may inadvertently reveal sensitive information about users or their surroundings.
We propose a novel adversarial training method for learning representations of audio recordings.
The proposed method is evaluated against a baseline approach with no privacy measures and a prior adversarial training method.
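The summary does not spell out the training procedure, so the following is a generic adversarial representation learning sketch under assumed names and dimensions: one common way such a method can be set up, not necessarily this paper's exact formulation.

```python
# Generic adversarial representation learning sketch (assumed formulation): an
# encoder serves a sound-event classifier while an adversary tries to recover a
# private attribute (e.g. speaker identity) from the same representation; the
# encoder is penalized whenever the adversary succeeds.
import torch
import torch.nn as nn

FEAT_DIM, REPR_DIM, N_EVENTS, N_SPEAKERS = 64, 32, 10, 20

encoder = nn.Sequential(nn.Linear(FEAT_DIM, REPR_DIM), nn.ReLU())
task_head = nn.Linear(REPR_DIM, N_EVENTS)        # utility: sound-event detection
adversary = nn.Linear(REPR_DIM, N_SPEAKERS)      # privacy: speaker identification

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y_event, y_speaker, lam=1.0):
    # 1) Train the adversary to predict the private attribute from the representation.
    opt_adv.zero_grad()
    adv_loss = ce(adversary(encoder(x).detach()), y_speaker)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train encoder + task head: good event prediction, poor speaker prediction.
    opt_main.zero_grad()
    z = encoder(x)
    loss = ce(task_head(z), y_event) - lam * ce(adversary(z), y_speaker)
    loss.backward()
    opt_main.step()

# Example call with random tensors:
x = torch.randn(8, FEAT_DIM)
train_step(x, torch.randint(0, N_EVENTS, (8,)), torch.randint(0, N_SPEAKERS, (8,)))
```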
arXiv Detail & Related papers (2023-04-29T08:39:55Z)
- SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks [18.968402215723]
A system for detecting a user's unvoiced utterances is proposed.
The proposed system recognizes the utterance content without the user voicing it.
The authors also observed that users can learn to adjust their oral movements to improve recognition accuracy.
arXiv Detail & Related papers (2023-03-03T07:46:35Z)
- Defense Against Adversarial Attacks on Audio DeepFake Detection [0.4511923587827302]
Audio DeepFakes (DF) are artificially generated utterances created using deep learning.
Multiple neural network-based methods for detecting generated speech have been proposed to counter these threats.
arXiv Detail & Related papers (2022-12-30T08:41:06Z)
- Privacy against Real-Time Speech Emotion Detection via Acoustic Adversarial Evasion of Machine Learning [7.387631194438338]
DARE-GP is a solution that creates additive noise to mask users' emotional information while preserving the transcription-relevant portions of their speech.
Unlike existing works, DARE-GP provides (a) real-time protection of previously unheard utterances, (b) protection against previously unseen black-box SER classifiers, (c) preservation of speech transcription, and (d) operation in a realistic acoustic environment.
arXiv Detail & Related papers (2022-11-17T00:25:05Z)
- Open-set Adversarial Defense with Clean-Adversarial Mutual Learning [93.25058425356694]
This paper demonstrates that open-set recognition systems are vulnerable to adversarial samples.
Motivated by these observations, we emphasize the necessity of an Open-Set Adversarial Defense (OSAD) mechanism.
This paper proposes an Open-Set Defense Network with Clean-Adversarial Mutual Learning (OSDN-CAML) as a solution to the OSAD problem.
arXiv Detail & Related papers (2022-02-12T02:13:55Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
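As a rough, hypothetical illustration of what an attribute inference attack on shared updates can look like (not this paper's specific framework): an attacker trains a meta-classifier on updates from shadow clients whose sensitive attribute is known, then applies it to a victim client's shared update.

```python
# Hypothetical attribute inference sketch for federated learning (a generic
# formulation, not the paper's framework): shared model updates from shadow
# clients with a known sensitive attribute are flattened into feature vectors
# and used to train a meta-classifier that labels unseen clients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def flatten_update(update):
    """Concatenate all parameter deltas of one client update into a vector."""
    return np.concatenate([p.ravel() for p in update])

# Shadow data: (client update, known attribute) pairs. The updates here are
# synthetic random arrays standing in for real gradients/parameter deltas.
shadow_updates = [[rng.normal(size=(16, 8)), rng.normal(size=8)] for _ in range(200)]
shadow_attrs = rng.integers(0, 2, size=200)

X = np.stack([flatten_update(u) for u in shadow_updates])
meta_clf = LogisticRegression(max_iter=1000).fit(X, shadow_attrs)

# Attack: predict the sensitive attribute of a victim from its shared update alone.
victim_update = [rng.normal(size=(16, 8)), rng.normal(size=8)]
print(meta_clf.predict([flatten_update(victim_update)]))
```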
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Speaker De-identification System using Autoencoders and Adversarial Training [58.720142291102135]
We propose a speaker de-identification system based on adversarial training and autoencoders.
Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system.
arXiv Detail & Related papers (2020-11-09T19:22:05Z)
- An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning [82.80836918594231]
Federated learning improves the privacy of training data by exchanging local gradients or parameters rather than raw data.
However, an adversary can leverage local gradients and parameters to recover local training data by launching reconstruction and membership inference attacks.
To defend against such privacy attacks, many noise perturbation methods have been designed.
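The paper's own method is described as accuracy-lossless; purely as an illustration of the broader defense family it belongs to (and explicitly not the proposed technique), the simplest generic form of noise perturbation clips each shared gradient and adds Gaussian noise, in the style of differentially private SGD.

```python
# Generic illustration of noise-perturbation defenses in federated learning
# (NOT the paper's accuracy-lossless method): each client clips its gradient
# and adds Gaussian noise before sharing it with the server.
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=np.random.default_rng()):
    """Clip the gradient to a maximum L2 norm, then add Gaussian noise."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_std, size=grad.shape)

local_grad = np.random.randn(1000)          # stand-in for a client's true gradient
shared_grad = perturb_gradient(local_grad)  # what actually leaves the device
```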
arXiv Detail & Related papers (2020-02-23T06:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.