Privid: Practical, Privacy-Preserving Video Analytics Queries
- URL: http://arxiv.org/abs/2106.12083v1
- Date: Tue, 22 Jun 2021 22:25:08 GMT
- Title: Privid: Practical, Privacy-Preserving Video Analytics Queries
- Authors: Frank Cangialosi, Neil Agarwal, Venkat Arun, Junchen Jiang, Srinivas
Narayana, Anand Sarwate and Ravi Netravali
- Abstract summary: This paper presents a new notion of differential privacy (DP) for video analytics, $(\rho,K,\epsilon)$-event-duration privacy.
We show that Privid achieves accuracies within 79-99% of a non-private system.
- Score: 6.7897713298300335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analytics on video recorded by cameras in public areas have the potential to
fuel many exciting applications, but also pose the risk of intruding on
individuals' privacy. Unfortunately, existing solutions fail to practically
resolve this tension between utility and privacy, relying on perfect detection
of all private information in each video frame--an elusive requirement. This
paper presents: (1) a new notion of differential privacy (DP) for video
analytics, $(\rho,K,\epsilon)$-event-duration privacy, which protects all
private information visible for less than a particular duration, rather than
relying on perfect detections of that information, and (2) a practical system
called Privid that enforces duration-based privacy even with the (untrusted)
analyst-provided deep neural networks that are commonplace for video analytics
today. Across a variety of videos and queries, we show that Privid achieves
accuracies within 79-99% of a non-private system.
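As a concrete illustration of the guarantee, the following is a minimal sketch of how an event-duration notion of DP can be stated. The neighboring relation below is only one plausible reading of the abstract (videos that differ in a single short-lived private event); the formal definition is given in the paper itself. Let $V$ and $V'$ be two videos that are identical except for one private event that is visible for at most $\rho$ seconds and appears in at most $K$ segments (an assumed relation for illustration). A query mechanism $M$ would then satisfy $(\rho,K,\epsilon)$-event-duration privacy if, for every set of outputs $S$,
$$\Pr[M(V) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(V') \in S],$$
i.e., the standard $\epsilon$-DP inequality applied to this duration-bounded neighboring relation. Under such a definition, anything that stays on camera for less than the protected duration changes the query's output distribution by at most a factor of $e^{\epsilon}$, regardless of whether the analyst's (untrusted) model detects it.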
Related papers
- PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation [5.0923114224599555]
We present PV-VTT (Privacy Violation Video To Text), a unique multimodal dataset aimed at identifying privacy violations.
PV-VTT provides detailed annotations for both video and text in scenarios.
This privacy-focused approach allows researchers to use the dataset while protecting participant confidentiality.
arXiv Detail & Related papers (2024-10-30T01:02:20Z) - Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - PPVF: An Efficient Privacy-Preserving Online Video Fetching Framework with Correlated Differential Privacy [24.407782529925615]
We introduce a novel Privacy-Preserving Video Fetching framework to preserve user request privacy while maintaining high-quality online video services.
We use trusted edge devices to pre-fetch and cache videos, ensuring the privacy of users' requests while optimizing the efficiency of edge caching.
The results demonstrate that PPVF effectively safeguards user request privacy while upholding high video caching performance.
arXiv Detail & Related papers (2024-08-27T02:03:36Z) - Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
arXiv Detail & Related papers (2023-06-24T00:33:34Z) - Privacy Protectability: An Information-theoretical Approach [4.14084373472438]
We propose a new metric, privacy protectability, to characterize to what degree a video stream can be protected.
Our definition of privacy protectability is rooted in information theory and we develop efficient algorithms to estimate the metric.
arXiv Detail & Related papers (2023-05-25T04:06:55Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to address end users' privacy concerns.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - "I need a better description": An Investigation Into User Expectations
For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in-the-wild haphazardly set users' privacy expectations.
arXiv Detail & Related papers (2021-10-13T02:36:37Z) - Deep Learning Approach Protecting Privacy in Camera-Based Critical
Applications [57.93313928219855]
We propose a deep learning approach towards protecting privacy in camera-based systems.
Our technique distinguishes between salient (visually prominent) and non-salient objects based on the intuition that the latter is unlikely to be needed by the application.
arXiv Detail & Related papers (2021-10-04T19:16:27Z) - Auditing Differentially Private Machine Learning: How Private is Private
SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.