Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition
- URL: http://arxiv.org/abs/2411.02099v2
- Date: Thu, 07 Nov 2024 10:53:14 GMT
- Title: Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition
- Authors: Idris Zakariyya, Linda Tran, Kaushik Bhargav Sivangi, Paul Henderson, Fani Deligianni
- Abstract summary: Recent research has shown high accuracy in recognizing subjects or gender from radar gait patterns, raising privacy concerns.
This study addresses these issues by investigating privacy vulnerabilities in radar-based Human Activity Recognition (HAR) systems.
We propose a novel method for privacy preservation using Differential Privacy (DP) driven by attributions derived with the Integrated Decision Gradients (IDG) algorithm.
- Score: 5.955900146668931
- Abstract: Human motion analysis offers significant potential for healthcare monitoring and early detection of diseases. Radar-based sensing systems have captured the spotlight because they operate without physical contact and can integrate with pre-existing Wi-Fi networks; they are also seen as less privacy-invasive than camera-based systems. However, recent research has shown high accuracy in recognizing subjects or gender from radar gait patterns, raising privacy concerns. This study addresses these issues by investigating privacy vulnerabilities in radar-based Human Activity Recognition (HAR) systems and proposing a novel method for privacy preservation using Differential Privacy (DP) driven by attributions derived with the Integrated Decision Gradients (IDG) algorithm. We investigate black-box Membership Inference Attack (MIA) models in HAR settings across various levels of attacker-accessible information. We extensively evaluate the effectiveness of the proposed IDG-DP method by designing a CNN-based HAR model and rigorously assessing its resilience against MIAs. Experimental results demonstrate the potential of IDG-DP in mitigating privacy attacks while maintaining utility across all settings, particularly excelling against label-only and shadow-model black-box MIAs. This work represents a crucial step towards balancing the need for effective radar-based HAR with robust privacy protection in healthcare environments.
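The abstract gives only a high-level description of the IDG-DP mechanism. Below is a minimal, hypothetical Python sketch of the general idea of attribution-guided differential privacy, in which per-feature Laplace noise is scaled by normalized attribution scores; the function names, noise schedule, and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def attribution_guided_dp_noise(x, attributions, epsilon, sensitivity=1.0, rng=None):
    """Hypothetical sketch of attribution-driven DP: perturb radar features with
    Laplace noise whose per-feature scale is modulated by normalized attribution
    scores, so less decision-relevant features receive more noise. This is an
    illustration of the idea only, not the paper's exact mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.abs(attributions)
    a = (a - a.min()) / (a.max() - a.min() + 1e-12)   # normalize attributions to [0, 1]
    base_scale = sensitivity / epsilon                # standard Laplace-mechanism scale
    per_feature_scale = base_scale * (2.0 - a)        # assumption: simple linear schedule
    noise = rng.laplace(loc=0.0, scale=per_feature_scale, size=x.shape)
    return x + noise

# Toy usage on a fake radar spectrogram with placeholder attributions.
spectrogram = np.random.rand(64, 64)
idg_attributions = np.random.rand(64, 64)   # stand-in for real IDG attributions
private_spectrogram = attribution_guided_dp_noise(spectrogram, idg_attributions, epsilon=1.0)
```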
Related papers
- Federated Anomaly Detection for Early-Stage Diagnosis of Autism Spectrum Disorders using Serious Game Data [0.0]
This study presents a novel semi-supervised approach for ASD detection using AutoEncoder-based Machine Learning (ML) methods.
Our approach utilizes data collected manually through a serious game specifically designed for this purpose.
Since the sensitive data collected by the gamified application are susceptible to privacy leakage, we developed a Federated Learning framework.
arXiv Detail & Related papers (2024-10-25T23:00:12Z) - POMDP-Driven Cognitive Massive MIMO Radar: Joint Target Detection-Tracking In Unknown Disturbances [42.99053410696693]
This work explores the application of a Partially Observable Markov Decision Process framework to enhance the tracking and detection tasks.
The proposed approach employs an online algorithm that does not require any a priori knowledge of the noise statistics.
arXiv Detail & Related papers (2024-10-23T15:34:11Z) - Privacy-Preserving Heterogeneous Federated Learning for Sensitive Healthcare Data [12.30620268528346]
We propose a new framework termed Abstention-Aware Federated Voting (AAFV).
AAFV can collaboratively and confidentially train heterogeneous local models while simultaneously protecting data privacy.
In particular, the proposed abstention-aware voting mechanism exploits a threshold-based abstention method to select high-confidence votes from heterogeneous local models.
arXiv Detail & Related papers (2024-06-15T08:43:40Z) - Privacy-Preserving State Estimation in the Presence of Eavesdroppers: A Survey [10.366696004684822]
Networked systems are increasingly the target of cyberattacks.
Eavesdropping attacks aim to infer information by collecting system data and exploiting it for malicious purposes.
It is crucial to protect disclosed system data to avoid an accurate state estimation by eavesdroppers.
arXiv Detail & Related papers (2024-02-24T06:32:07Z) - AdvGPS: Adversarial GPS for Multi-Agent Perception Attack [47.59938285740803]
This study investigates whether specific GPS signals can easily mislead the multi-agent perception system.
We introduce AdvGPS, a method capable of generating adversarial GPS signals which are also stealthy for individual agents within the system.
Our experiments on the OPV2V dataset demonstrate that these attacks substantially undermine the performance of state-of-the-art methods.
arXiv Detail & Related papers (2024-01-30T23:13:41Z) - GaitGuard: Towards Private Gait in Mixed Reality [3.2392550445029396]
GaitGuard is the first real-time framework designed to protect the privacy of gait features within the camera view of AR/MR devices.
GaitGuard reduces the risk of identification by up to 68%, while maintaining a minimal latency of merely 118.77 ms.
arXiv Detail & Related papers (2023-12-07T17:42:04Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy might, in some settings, be even more vulnerable than their non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training (see the sketch after this list).
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
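As a rough illustration of the loss-perturbation idea summarized for RDP-GAN above (not the authors' exact algorithm), the sketch below clips a per-batch discriminator loss value and adds Gaussian noise before backpropagation; the clipping bound and noise scale are hypothetical and not calibrated for a formal privacy guarantee.

```python
import torch

def noised_loss(loss, clip_bound=1.0, noise_std=0.5):
    """Sketch only: bound the scalar loss value and perturb it with Gaussian
    noise, so gradients are computed from a noised loss rather than the raw
    one. clip_bound and noise_std are illustrative values."""
    clipped = torch.clamp(loss, max=clip_bound)
    return clipped + noise_std * torch.randn_like(clipped)

# Example: perturb a discriminator loss inside a training step.
raw_loss = torch.tensor(0.73, requires_grad=True)   # stand-in for a real GAN loss
perturbed = noised_loss(raw_loss)
perturbed.backward()
```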
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.