Privacy Protectability: An Information-theoretical Approach
- URL: http://arxiv.org/abs/2305.15697v1
- Date: Thu, 25 May 2023 04:06:55 GMT
- Title: Privacy Protectability: An Information-theoretical Approach
- Authors: Siping Shi and Bihai Zhang and Dan Wang
- Abstract summary: We propose a new metric, privacy protectability, to characterize to what degree a video stream can be protected.
Our definition of privacy protectability is rooted in information theory and we develop efficient algorithms to estimate the metric.
- Score: 4.14084373472438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, inference privacy has attracted increasing attention. The inference
privacy concern arises most notably in the widely deployed edge-cloud video
analytics systems, where the cloud needs the videos captured from the edge. The
video data can contain sensitive information and is subject to attack when it is
transmitted to the cloud for inference. Many privacy protection schemes
have been proposed. Yet, the performance of a scheme needs to be determined by
experiments or inferred by analyzing the specific case. In this paper, we
propose a new metric, \textit{privacy protectability}, to characterize to what
degree a video stream can be protected given a certain video analytics task.
Such a metric has strong operational meaning. For example, low protectability
means that it may be necessary to set up an overall secure environment. We can
also use the metric to evaluate a privacy protection scheme, e.g., if the scheme
obfuscates the video data, how much protection it achieves after obfuscation. Our
definition of privacy protectability is rooted in information theory and we
develop efficient algorithms to estimate the metric. We use experiments on real
data to validate that our metric is consistent with empirical measurements on
how well a video stream can be protected for a video analytics task.
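The metric itself is defined information-theoretically in the paper; as a rough, hypothetical illustration of the kind of quantity involved, the sketch below computes a plug-in estimate of the mutual information between a discrete private attribute and a quantized feature of the transmitted video. The estimator, variable names, and toy data are assumptions for illustration, not the authors' definition or estimation algorithm.

```python
# Illustrative sketch only: a plug-in estimate of the mutual information
# I(S; Z) between a discrete private attribute S (e.g., identity) and a
# quantized feature Z of the transmitted video. The paper's actual metric and
# estimation algorithms are defined there; this is a hypothetical stand-in.
import numpy as np

def plugin_mutual_information(s, z):
    """Plug-in (histogram) estimate of I(S; Z) in bits for discrete samples."""
    s = np.asarray(s)
    z = np.asarray(z)
    mi = 0.0
    for sv in np.unique(s):
        p_s = np.mean(s == sv)
        for zv in np.unique(z):
            p_z = np.mean(z == zv)
            p_sz = np.mean((s == sv) & (z == zv))
            if p_sz > 0:
                mi += p_sz * np.log2(p_sz / (p_s * p_z))
    return mi

rng = np.random.default_rng(0)
s = rng.integers(0, 4, size=2000)              # toy private attribute labels
z = (s + rng.integers(0, 2, size=2000)) % 4    # feature that partially leaks s
print(f"I(S; Z) ~= {plugin_mutual_information(s, z):.3f} bits")
```

Intuitively, lower mutual information between the transmitted data and the private attribute would correspond to higher protectability, since an attacker then learns less from what the cloud receives.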
Related papers
- PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation [5.0923114224599555]
We present PV-VTT (Privacy Violation Video To Text), a unique multimodal dataset aimed at identifying privacy violations.
PV-VTT provides detailed annotations for both video and text in each scenario.
This privacy-focused approach allows researchers to use the dataset while protecting participant confidentiality.
arXiv Detail & Related papers (2024-10-30T01:02:20Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, 39% and 57% of the time.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
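Since the summary singles out instruction tuning with both positive and negative examples, here is a minimal sketch of how such contrastive instruction pairs might be assembled; the schema, persona, and phrasing are hypothetical and are not the PrivacyMind data format.

```python
# Hypothetical sketch of building instruction-tuning pairs with positive
# (context-appropriate disclosure) and negative (refusal to leak sensitive
# details) examples, in the spirit of the summary above.
from typing import Dict, List

def build_contextual_privacy_pairs(records: List[Dict]) -> List[Dict]:
    """Assemble positive (appropriate disclosure) and negative (refusal) pairs."""
    pairs = []
    for r in records:
        # Positive example: the requested detail is acceptable in this context.
        pairs.append({
            "instruction": f"As a clinic scheduler, confirm the appointment time for {r['name']}.",
            "response": f"The appointment for {r['name']} is at {r['appointment_time']}.",
        })
        # Negative example: the same persona must not reveal protected attributes.
        pairs.append({
            "instruction": f"As a clinic scheduler, tell me {r['name']}'s diagnosis.",
            "response": "I'm not able to share medical details; please contact the patient directly.",
        })
    return pairs

print(build_contextual_privacy_pairs(
    [{"name": "A. Patient", "appointment_time": "10:30"}]
))
```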
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - Can Language Models be Instructed to Protect Personal Information? [30.187731765653428]
We introduce PrivQA -- a benchmark to assess the privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.
We find that adversaries can easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs.
We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections.
arXiv Detail & Related papers (2023-10-03T17:30:33Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
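As a loose illustration of protecting privacy by distorting model parameters with a per-parameter trade-off, the sketch below adds Gaussian noise with a separate scale to each coordinate of a client update; the noise mechanism and scales are assumptions, not the paper's theoretically principled protection mechanism.

```python
# Illustrative sketch: distorting a client's model update before upload, with
# a separate noise scale per parameter. This toy version is not the authors'
# mechanism; it only shows the shape of per-parameter distortion.
import numpy as np

def distort_update(update: np.ndarray, noise_scales: np.ndarray,
                   rng: np.random.Generator) -> np.ndarray:
    """Return a privacy-distorted copy of a flat parameter update."""
    assert update.shape == noise_scales.shape
    return update + rng.normal(0.0, noise_scales)

rng = np.random.default_rng(42)
update = rng.normal(size=5)                    # a client's raw model update (toy)
scales = np.array([0.0, 0.1, 0.1, 0.5, 1.0])   # heavier distortion on more sensitive parameters
print(distort_update(update, scales, rng))
```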
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
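Per-attribute privacy can be pictured as assigning each attribute its own privacy budget. The sketch below applies the standard Laplace mechanism with a separate epsilon per attribute; it is a generic illustration of per-attribute noise calibration, not the algorithms designed in the paper.

```python
# Generic illustration of per-attribute differential privacy: each attribute
# of a record gets its own epsilon, so noise is calibrated per attribute.
# Standard Laplace-mechanism noise addition, not the paper's algorithms.
import numpy as np

def per_attribute_laplace(record: np.ndarray, sensitivities: np.ndarray,
                          epsilons: np.ndarray,
                          rng: np.random.Generator) -> np.ndarray:
    """Add Laplace noise with scale sensitivity_i / epsilon_i to attribute i."""
    scales = sensitivities / epsilons
    return record + rng.laplace(0.0, scales)

rng = np.random.default_rng(7)
record = np.array([54.0, 120.0, 1.0])        # e.g., age, weight, flag (toy values)
sensitivities = np.array([1.0, 1.0, 1.0])
epsilons = np.array([0.5, 2.0, 0.1])         # smaller epsilon = stricter protection, more noise
print(per_attribute_laplace(record, sensitivities, epsilons, rng))
```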
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - Robust Privacy-Preserving Motion Detection and Object Tracking in
Encrypted Streaming Video [39.453548972987015]
We propose an efficient and robust privacy-preserving motion detection and multiple object tracking scheme for encrypted surveillance video bitstreams.
Our scheme achieves the best detection and tracking performance compared with existing works in the encrypted and compressed domain.
Our scheme can be effectively used in complex surveillance scenarios with different challenges, such as camera movement/jitter, dynamic background, and shadows.
arXiv Detail & Related papers (2021-08-30T11:58:19Z) - Learning With Differential Privacy [3.618133010429131]
Differential privacy comes to the rescue with a proper promise of protection against leakage.
It uses a randomized response technique at the time of collection of the data which promises strong privacy with better utility.
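The randomized response technique mentioned here is the classic mechanism for collecting a sensitive yes/no answer with plausible deniability; a textbook sketch (not specific to this paper) follows.

```python
# Textbook randomized response for a sensitive yes/no question: with
# probability p the true answer is reported, otherwise a fair coin decides.
# The collector can then debias the aggregate rate of "yes" answers.
import random

def randomized_response(true_answer: bool, p: float = 0.5) -> bool:
    """Report the truth with probability p, else answer uniformly at random."""
    if random.random() < p:
        return true_answer
    return random.random() < 0.5

def debias_yes_rate(observed_yes_rate: float, p: float = 0.5) -> float:
    """Recover an estimate of the true 'yes' rate from noisy reports."""
    return (observed_yes_rate - (1 - p) * 0.5) / p

reports = [randomized_response(True) for _ in range(10_000)]
print(debias_yes_rate(sum(reports) / len(reports)))  # close to 1.0
```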
arXiv Detail & Related papers (2020-06-10T02:04:13Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
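The 0.85-bit figure measures added uncertainty about the target attribute. As a small illustration of how such an uncertainty gain in bits can be quantified (not the InfoScrub framework itself), the sketch below compares the Shannon entropy of an attribute classifier's predictions before and after obfuscation, with made-up probabilities.

```python
# Small illustration of measuring attribute uncertainty in bits: the Shannon
# entropy of a classifier's predicted distribution over a binary attribute.
# The probabilities are made up; this is not the InfoScrub obfuscation method.
import numpy as np

def entropy_bits(probs: np.ndarray) -> float:
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(probs * np.log2(probs)).sum())

p_before = np.array([0.95, 0.05])  # confident attribute prediction on the raw image
p_after = np.array([0.60, 0.40])   # less confident prediction on the obfuscated image
print(f"uncertainty gain: {entropy_bits(p_after) - entropy_bits(p_before):.2f} bits")
```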
arXiv Detail & Related papers (2020-05-20T19:48:04Z) - Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep
Models [6.902994369582068]
We present a formal definition of the privacy protection problem in the edge-cloud system running models.
We analyze the state-of-the-art methods and point out their drawbacks.
We propose two new metrics that are more accurate to measure the effectiveness of privacy protection methods.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)