Privid: Practical, Privacy-Preserving Video Analytics Queries
- URL: http://arxiv.org/abs/2106.12083v1
- Date: Tue, 22 Jun 2021 22:25:08 GMT
- Title: Privid: Practical, Privacy-Preserving Video Analytics Queries
- Authors: Frank Cangialosi, Neil Agarwal, Venkat Arun, Junchen Jiang, Srinivas
Narayana, Anand Sarwate and Ravi Netravali
- Abstract summary: This paper presents a new notion of differential privacy (DP) for video analytics, $(\rho,K,\epsilon)$-event-duration privacy.
We show that Privid achieves accuracies within 79-99% of a non-private system.
- Score: 6.7897713298300335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analytics on video recorded by cameras in public areas have the potential to
fuel many exciting applications, but also pose the risk of intruding on
individuals' privacy. Unfortunately, existing solutions fail to practically
resolve this tension between utility and privacy, relying on perfect detection
of all private information in each video frame--an elusive requirement. This
paper presents: (1) a new notion of differential privacy (DP) for video
analytics, $(\rho,K,\epsilon)$-event-duration privacy, which protects all
private information visible for less than a particular duration, rather than
relying on perfect detections of that information, and (2) a practical system
called Privid that enforces duration-based privacy even with the (untrusted)
analyst-provided deep neural networks that are commonplace for video analytics
today. Across a variety of videos and queries, we show that Privid achieves
accuracies within 79-99% of a non-private system.
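The core idea in the abstract is that queries over video are answered with calibrated noise, so that any event visible for less than a bounded duration is protected without requiring perfect detection. A minimal sketch of that spirit, assuming the video is split into fixed-length chunks and a single short-lived event can affect at most a bounded number of chunks (the chunking scheme, function names, and parameters here are illustrative, not Privid's actual interface):

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(per_chunk_counts, max_chunks_affected, epsilon):
    """Release a noisy total count over all video chunks.

    max_chunks_affected bounds how many chunks any single short-lived
    event can appear in; since each affected chunk changes the count by
    at most 1, this bounds the query's sensitivity.
    """
    true_total = sum(per_chunk_counts)
    sensitivity = max_chunks_affected
    return true_total + laplace_noise(sensitivity / epsilon)
```

With a very large epsilon the noise is negligible and the release approaches the true total; smaller epsilon (stronger privacy) adds proportionally more noise.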
Related papers
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539]
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
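A privacy filter of the kind described above can be pictured as a gatekeeper that tracks cumulative privacy loss and halts before a global budget is exceeded. The following toy sketch uses plain sequential composition (summed epsilons) only; the class name and interface are illustrative, and the paper's actual filters handle the subtler ex-post/adaptive case:

```python
class PrivacyFilter:
    """Toy sequential-composition filter: lets mechanisms run until their
    summed epsilon cost would exceed a fixed global budget."""

    def __init__(self, epsilon_budget):
        self.budget = epsilon_budget
        self.spent = 0.0

    def try_spend(self, epsilon):
        # Refuse (halt) if running this mechanism would exceed the budget.
        if self.spent + epsilon > self.budget:
            return False
        self.spent += epsilon
        return True
```

An analyst would call `try_spend` before each mechanism and stop querying once it returns False.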
arXiv Detail & Related papers (2023-06-24T00:33:34Z) - Privacy Protectability: An Information-theoretical Approach [4.14084373472438]
We propose a new metric, privacy protectability, to characterize to what degree a video stream can be protected.
Our definition of privacy protectability is rooted in information theory and we develop efficient algorithms to estimate the metric.
arXiv Detail & Related papers (2023-05-25T04:06:55Z) - Fairly Private: Investigating The Fairness of Visual Privacy
Preservation Algorithms [1.5293427903448025]
This paper investigates the fairness of commonly used visual privacy preservation algorithms.
Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
arXiv Detail & Related papers (2023-01-12T13:40:38Z) - STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for
Privacy-preserving Action Recognition [28.002605566359676]
We present a privacy-preserving action recognition (PPAR) paradigm, performing privacy preservation from both spatial and temporal perspectives, and propose the STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
arXiv Detail & Related papers (2023-01-08T14:07:54Z) - Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
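The intuition behind a smaller per-attribute parameter can be seen in a simple noisy-sum release: changing one attribute of one record perturbs the sum far less than changing the whole record, so the same noise buys a tighter per-attribute guarantee. This sketch is illustrative only (the function names and the Laplace-sum setting are assumptions, not the paper's algorithms):

```python
import math
import random

def laplace(scale):
    # Sample Laplace(0, scale) via inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_sum(records, attr_ranges, noise_scale):
    """Release a noisy sum over all attributes of all records, and report
    the epsilon implied for two notions of neighboring datasets:
    one attribute changed vs. the entire record changed."""
    total = sum(sum(r) for r in records)
    release = total + laplace(noise_scale)
    eps_per_attribute = max(attr_ranges) / noise_scale   # one attribute differs
    eps_whole_record = sum(attr_ranges) / noise_scale    # whole record differs
    return release, eps_per_attribute, eps_whole_record
```

For three attributes each with range 1 and noise scale 3, the per-attribute epsilon is 1/3 while the whole-record epsilon is 1, matching the paper's qualitative claim that per-attribute guarantees can be strictly stronger at the same noise level.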
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - "I need a better description": An Investigation Into User Expectations
For Differential Privacy [31.352325485393074]
We explore users' privacy expectations related to differential privacy.
We find that users care about the kinds of information leaks against which differential privacy protects.
We find that the ways in which differential privacy is described in the wild haphazardly set users' privacy expectations.
arXiv Detail & Related papers (2021-10-13T02:36:37Z) - Deep Learning Approach Protecting Privacy in Camera-Based Critical
Applications [57.93313928219855]
We propose a deep learning approach towards protecting privacy in camera-based systems.
Our technique distinguishes between salient (visually prominent) and non-salient objects based on the intuition that the latter are unlikely to be needed by the application.
arXiv Detail & Related papers (2021-10-04T19:16:27Z) - Applications of Differential Privacy in Social Network Analysis: A
Survey [60.696428840516724]
Differential privacy is effective in sharing information and preserving privacy with a strong guarantee.
Social network analysis has been extensively adopted in many applications, opening a new arena for the application of differential privacy.
arXiv Detail & Related papers (2020-10-06T19:06:03Z) - Auditing Differentially Private Machine Learning: How Private is Private
SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.