Privacy-Preserving Video Classification with Convolutional Neural Networks
- URL: http://arxiv.org/abs/2102.03513v1
- Date: Sat, 6 Feb 2021 05:05:31 GMT
- Title: Privacy-Preserving Video Classification with Convolutional Neural Networks
- Authors: Sikha Pentyala and Rafael Dowsley and Martine De Cock
- Abstract summary: We propose a privacy-preserving implementation of single-frame-method-based video classification with convolutional neural networks.
We evaluate our proposed solution in an application for private human emotion recognition.
- Score: 8.51142156817993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many video classification applications require access to personal data,
thereby posing an invasive threat to users' privacy. We propose a
privacy-preserving implementation of single-frame-method-based video
classification with convolutional neural networks that allows a party to infer
a label from a video without requiring the video owner to disclose their
video to other entities in an unencrypted manner. Likewise, our approach
removes the requirement that the classifier owner reveal their model
parameters to outside entities in plaintext. To this end, we combine existing
Secure Multi-Party Computation (MPC) protocols for private image classification
with our novel MPC protocols for oblivious single-frame selection and secure
label aggregation across frames. The result is an end-to-end privacy-preserving
video classification pipeline. We evaluate our proposed solution in an
application for private human emotion recognition. Our results across a variety
of security settings, spanning honest and dishonest majority configurations of
the computing parties, and for both passive and active adversaries, demonstrate
that videos can be classified with state-of-the-art accuracy, and without
leaking sensitive user information.
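
As a concrete illustration of the two new building blocks, here is a minimal Python mock-up of 3-party additive secret sharing and secure label aggregation across frames. The field size, party count, and per-frame predictions are illustrative assumptions; the paper's actual MPC protocols (including oblivious frame selection and a secure argmax) are substantially more involved.

```python
# A minimal, illustrative mock-up of additive secret sharing and secure
# label aggregation (NOT the paper's actual MPC protocols).
import numpy as np

P = 2**31 - 1                      # prime modulus of the secret-sharing field
rng = np.random.default_rng(0)

def share(x, n_parties=3):
    """Split an integer array into n additive shares modulo P."""
    shares = [rng.integers(0, P, size=x.shape) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Pretend the MPC image classifier already produced a secret-shared one-hot
# label per selected frame (7 emotion classes; predictions are made up).
frame_labels = np.array([2, 2, 5, 2])
one_hot = np.eye(7, dtype=np.int64)[frame_labels]
shared_frames = [share(v) for v in one_hot]

# Secure label aggregation: each party sums its own shares locally, so the
# per-frame labels are never revealed; only the final tally is opened here.
party_sums = [sum(s[p] for s in shared_frames) % P for p in range(3)]
votes = reconstruct(party_sums)
print("aggregated votes:", votes)                  # [0 0 3 0 0 1 0]
print("predicted label :", int(np.argmax(votes)))  # 2
# In the real pipeline even the argmax is computed obliviously, so the
# computing parties learn nothing beyond the final label.
```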
Related papers
- PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation [5.0923114224599555]
We present PV-VTT (Privacy Violation Video To Text), a unique multimodal dataset aimed at identifying privacy violations.
PV-VTT provides detailed annotations for both video and text in each scenario.
This privacy-focused approach allows researchers to use the dataset while protecting participant confidentiality.
arXiv Detail & Related papers (2024-10-30T01:02:20Z)
- CausalVE: Face Video Privacy Encryption via Causal Video Prediction [13.577971999457164]
With the proliferation of video and live-streaming websites, public face-video distribution and interactions pose greater privacy risks.
We propose a neural network framework, CausalVE, to address these risks.
Our framework has good security in public video dissemination and outperforms state-of-the-art methods from a qualitative, quantitative, and visual point of view.
arXiv Detail & Related papers (2024-09-28T10:34:22Z)
- PPVF: An Efficient Privacy-Preserving Online Video Fetching Framework with Correlated Differential Privacy [24.407782529925615]
We introduce a novel Privacy-Preserving Video Fetching (PPVF) framework to preserve user request privacy while maintaining high-quality online video services.
We use trusted edge devices to pre-fetch and cache videos, ensuring the privacy of users' requests while optimizing the efficiency of edge caching.
The results demonstrate that PPVF effectively safeguards user request privacy while upholding high video caching performance; a generic noisy-selection sketch follows this entry.
arXiv Detail & Related papers (2024-08-27T02:03:36Z)
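
A hedged sketch of the kind of noisy decision an edge node could make: a one-shot exponential mechanism (via the Gumbel trick) picks which videos to pre-fetch from synthetic popularity scores. This is a generic DP primitive, not PPVF's correlated-differential-privacy scheme; all names and numbers below are assumptions.

```python
# Generic DP selection of videos to pre-fetch at the edge (illustrative;
# PPVF's correlated-DP mechanism is different and more sophisticated).
import numpy as np

def dp_prefetch(utilities, k=3, epsilon=1.0, sensitivity=1.0, rng=None):
    """Pick k videos to cache via the exponential mechanism (Gumbel trick)."""
    rng = rng or np.random.default_rng(0)
    u = np.asarray(utilities, dtype=float)
    # Split epsilon across the k picks; adding Gumbel noise and taking the
    # top-k samples the exponential mechanism without replacement.
    noisy = epsilon * u / (2 * k * sensitivity) + rng.gumbel(size=u.shape)
    return np.argsort(noisy)[-k:][::-1]

popularity = [40, 35, 5, 3, 2, 1, 1, 1]           # synthetic request counts
print(dp_prefetch(popularity, k=3, epsilon=2.0))  # popular videos, noisily
```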
- NeR-VCP: A Video Content Protection Method Based on Implicit Neural Representation [7.726354287366925]
We propose an automatic encryption technique for video content protection based on implicit neural representation.
NeR-VCP first pre-distributes a key-controllable module, trained by the sender, to the recipients.
We experimentally find that it has superior performance in terms of visual representation, imperceptibility to unauthorized users, and security from a cryptographic viewpoint; a generic implicit-representation sketch follows this entry.
arXiv Detail & Related papers (2024-08-20T16:23:51Z)
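
For intuition, here is a generic implicit neural representation of video: an MLP maps (x, y, t) coordinates to RGB. Deriving the weights from a numeric key is only a stand-in for NeR-VCP's key-controllable module, which is trained rather than random.

```python
# Generic implicit neural video representation: an MLP maps normalized
# (x, y, t) coordinates to RGB. The key-seeded random weights below merely
# stand in for NeR-VCP's trained, key-controllable module.
import numpy as np

def make_mlp(key, sizes=(3, 64, 64, 3)):
    rng = np.random.default_rng(key)      # the "key" determines the weights
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def decode(params, coords):
    """coords: (N, 3) in [-1, 1] -> RGB values in (0, 1)."""
    h = coords
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)    # ReLU hidden layers
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))  # sigmoid output

# A recipient holding the right key rebuilds the right decoder; a wrong
# key yields a different network and hence unusable frames.
grid = np.linspace(-1, 1, 4)
coords = np.stack(np.meshgrid(grid, grid, [0.0]), axis=-1).reshape(-1, 3)
print(decode(make_mlp(key=42), coords).shape)   # (16, 3): RGB per pixel
```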
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users; a minimal user-level clipping sketch follows this entry.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
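
A minimal sketch of what "user" as the privacy unit means in practice: each user's whole contribution is clipped to a fixed norm before noise is added, so the guarantee covers all of a user's records at once. Function names and parameters are illustrative, not the paper's fine-tuning recipe.

```python
# User-level Gaussian mechanism: bound each USER's total contribution
# (not each example's), then add noise scaled to that per-user bound.
import numpy as np

def user_level_mean(per_user_vecs, clip_norm=1.0, noise_mult=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = [v * min(1.0, clip_norm / max(np.linalg.norm(v), 1e-12))
               for v in per_user_vecs]            # one vector per user
    total = np.sum(clipped, axis=0)
    total += rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return total / len(per_user_vecs)             # DP estimate of the mean

users = [np.random.default_rng(i).normal(size=8) for i in range(100)]
print(user_level_mean(users)[:3])   # noisy, user-level-private statistic
```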
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection framework with personalized representations; a plain federated-averaging round is sketched after this entry.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
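
For context, the sketch below shows one plain federated-averaging (FedAvg) round: clients fit a toy logistic-regression detector on local data and only exchange weights. The paper's personalized-representation mechanism sits on top of something like this and is not reproduced here.

```python
# One round of plain federated averaging (FedAvg) for a toy logistic-
# regression forgery detector; raw data never leaves the clients.
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """Client-side gradient descent on binary cross-entropy."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted fake-probability
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    """Average locally trained weights, weighted by client dataset size."""
    sizes = [len(y) for _, y in clients]
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    return sum(n * w for n, w in zip(sizes, local_ws)) / sum(sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 16)), rng.integers(0, 2, 50))
           for _ in range(4)]                     # four clients' private data
w = np.zeros(16)
for _ in range(10):                               # ten communication rounds
    w = fedavg_round(w, clients)
print("global model norm:", round(float(np.linalg.norm(w)), 3))
```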
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
- Differentially Private Video Activity Recognition [79.36113764129092]
We propose Multi-Clip DP-SGD, a novel framework for enforcing video-level differential privacy through clip-based classification models.
Our approach achieves 81% accuracy with a privacy budget of epsilon=5 on UCF-101, a 76% improvement over a direct application of DP-SGD; a schematic clip-averaged DP-SGD step follows this entry.
arXiv Detail & Related papers (2023-06-27T18:47:09Z)
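
A schematic of the clip-averaged step: gradients from several clips of the same video are averaged first, then clipped and noised per video, so each video's total influence is bounded. The toy gradient and all constants below are assumptions, not the paper's models or hyperparameters.

```python
# Schematic DP-SGD step with video-level privacy: average gradients over
# several clips of the SAME video, then clip and noise per video.
import numpy as np

def dp_step(w, videos, grad_fn, clip=1.0, noise_mult=1.0, lr=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    per_video = []
    for clips in videos:                                     # clips per video
        g = np.mean([grad_fn(w, c) for c in clips], axis=0)  # multi-clip avg
        g *= min(1.0, clip / max(np.linalg.norm(g), 1e-12))  # per-video clip
        per_video.append(g)
    noisy = (np.sum(per_video, axis=0)
             + rng.normal(0.0, noise_mult * clip, size=w.shape)) / len(videos)
    return w - lr * noisy

grad_fn = lambda w, c: w - c          # toy quadratic loss: pull w toward c
videos = [[np.full(4, v + 0.1 * k) for k in range(3)] for v in range(5)]
w = np.zeros(4)
for _ in range(20):
    w = dp_step(w, videos, grad_fn)
print(w)                              # near the (noisily estimated) mean
```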
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technological standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets; a generic universal-mask sketch follows this entry.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
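
To convey the idea of a person-specific universal mask, the sketch below runs FGSM-style ascent on one shared perturbation that pushes a toy linear feature extractor away from a person's identity prototype. OPOM's actual objective, face model, and constraints differ.

```python
# Illustrative person-specific universal mask: one shared perturbation,
# optimized over ALL of one person's images, pushes a toy linear feature
# extractor away from that person's identity prototype.
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 64))              # stand-in face-feature extractor
images = rng.normal(size=(5, 64))         # one person's flattened face crops
prototype = (images @ F.T).mean(axis=0)   # identity prototype in feature space

mask, eps, lr = np.zeros(64), 0.05, 0.5
for _ in range(100):
    feats = (images + mask) @ F.T
    # gradient of sum ||f - prototype||^2 w.r.t. the shared mask
    grad = 2.0 * (feats - prototype).sum(axis=0) @ F
    mask = np.clip(mask + lr * np.sign(grad), -eps, eps)  # ascend, stay small

d0 = np.linalg.norm(images @ F.T - prototype, axis=1).mean()
d1 = np.linalg.norm((images + mask) @ F.T - prototype, axis=1).mean()
print(f"mean feature distance: {d0:.2f} -> {d1:.2f}")   # distance grows
```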
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including all content) and is not responsible for any consequences of its use.