Information Laundering for Model Privacy
- URL: http://arxiv.org/abs/2009.06112v1
- Date: Sun, 13 Sep 2020 23:24:08 GMT
- Title: Information Laundering for Model Privacy
- Authors: Xinran Wang, Yu Xiang, Jun Gao, Jie Ding
- Abstract summary: We propose information laundering, a novel framework for enhancing model privacy.
Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use.
- Score: 34.66708766179596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose information laundering, a novel framework for
enhancing model privacy. Unlike data privacy that concerns the protection of
raw data information, model privacy aims to protect an already-learned model
that is to be deployed for public use. The private model can be obtained from
general learning methods, and its deployment means that it will return a
deterministic or random response for a given input query. An
information-laundered model consists of probabilistic components that
deliberately maneuver the intended input and output for queries to the model,
so the model's adversarial acquisition is less likely. Under the proposed
framework, we develop an information-theoretic principle to quantify the
fundamental tradeoffs between model utility and privacy leakage and derive the
optimal design.
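The abstract describes an information-laundered deployment as the already-learned private model sandwiched between probabilistic input and output components. The sketch below is a rough illustration of that deployment pattern only, not the paper's kernels or optimal design: the Gaussian input perturbation, the temperature-smoothed sampled output, and the names `input_kernel`, `output_kernel`, `laundered_query`, and `toy_model` are all hypothetical choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_kernel(x, noise_scale=0.1):
    # Hypothetical randomized input component: perturb the query
    # before it reaches the private model.
    return np.asarray(x, dtype=float) + rng.normal(scale=noise_scale, size=np.shape(x))

def output_kernel(p, temperature=2.0):
    # Hypothetical randomized output component: flatten the class
    # probabilities so each released answer reveals less about the model.
    logits = np.log(np.clip(p, 1e-12, None)) / temperature
    q = np.exp(logits - logits.max())
    return q / q.sum()

def laundered_query(private_model, x):
    # Deployment path: query -> input kernel -> private model
    # -> output kernel -> released response.
    x_tilde = input_kernel(x)
    p = np.asarray(private_model(x_tilde), dtype=float)
    q = output_kernel(p)
    return rng.choice(len(q), p=q)  # release a sampled label, not p itself

def toy_model(x):
    # Toy stand-in for the already-learned private model: a fixed
    # 3-class softmax over a linear score.
    w = np.array([[1.0, -0.5], [0.2, 0.8], [-1.0, 0.3]])
    s = w @ x
    e = np.exp(s - s.max())
    return e / e.sum()

print(laundered_query(toy_model, np.array([0.4, -1.2])))
```

Stronger perturbations make adversarial reconstruction of the private model harder but degrade the usefulness of the released responses; quantifying and optimizing that tradeoff is what the paper's information-theoretic principle addresses.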
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvement in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Assessing Privacy Risks in Language Models: A Case Study on
Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
arXiv Detail & Related papers (2023-10-20T05:44:39Z) - Model Transparency and Interpretability : Survey and Application to the
Insurance Industry [1.6058099298620423]
This paper introduces the importance of model interpretation and tackles the notion of model transparency.
Within an insurance context, it illustrates how some tools can be used to enforce the control of actuarial models.
arXiv Detail & Related papers (2022-09-01T16:12:54Z) - Differentially Private Counterfactuals via Functional Mechanism [47.606474009932825]
We propose a novel framework to generate differentially private counterfactual (DPC) without touching the deployed model or explanation set.
In particular, we train an autoencoder with the functional mechanism to construct noisy class prototypes, and then derive the DPC from the latent prototypes.
arXiv Detail & Related papers (2022-08-04T20:31:22Z) - Just Fine-tune Twice: Selective Differential Privacy for Large Language
Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve selective differential privacy (SDP) for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z) - Privacy-preserving Generative Framework Against Membership Inference
Attacks [10.791983671720882]
We design a privacy-preserving generative framework against membership inference attacks.
We first map the source data to the latent space with the VAE to obtain the latent code, then apply a noise process satisfying metric privacy to the latent code, and finally use the VAE to reconstruct the synthetic data (a toy sketch of this pipeline appears after this list).
Our experimental evaluation demonstrates that the machine learning model trained with newly generated synthetic data can effectively resist membership inference attacks and still maintain high utility.
arXiv Detail & Related papers (2022-02-11T06:13:30Z) - Why Should I Trust a Model is Private? Using Shifts in Model Explanation
for Evaluating Privacy-Preserving Emotion Recognition Model [35.016050900061]
We focus on using interpretable methods to evaluate a model's efficacy in preserving privacy with respect to sensitive variables.
We show how certain commonly-used methods that seek to preserve privacy might not align with human perception of privacy preservation.
We conduct crowdsourcing experiments to evaluate the inclination of the evaluators to choose a particular model for a given task.
arXiv Detail & Related papers (2021-04-18T09:56:41Z) - The Influence of Dropout on Membership Inference in Differentially
Private Models [0.0]
Differentially private models seek to protect the privacy of data the model is trained on.
We conduct membership inference attacks against models with and without differential privacy.
arXiv Detail & Related papers (2021-03-16T12:09:51Z)
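The privacy-preserving generative pipeline summarized earlier in this list (encode source data with a VAE, perturb the latent code under metric privacy, decode to synthetic data) can be sketched as follows. This is a toy illustration under stated assumptions: the linear `encode`/`decode` maps stand in for a trained VAE's encoder and decoder, coordinate-wise Laplace noise is one common way to satisfy metric privacy on the latent space, and all names and parameters here are hypothetical rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained VAE's encoder/decoder.
D, K = 8, 3                      # data dimension, latent dimension
W_enc = rng.normal(size=(K, D)) / np.sqrt(D)
W_dec = rng.normal(size=(D, K)) / np.sqrt(K)

def encode(x):
    return W_enc @ x             # latent code z

def decode(z):
    return W_dec @ z             # reconstruction from the latent code

def metric_private_noise(z, eps=1.0):
    # Coordinate-wise Laplace noise with scale 1/eps is one common
    # instantiation of metric (d_1) privacy on the latent space.
    return z + rng.laplace(scale=1.0 / eps, size=z.shape)

def synthesize(x, eps=1.0):
    # source record -> latent code -> noisy code -> synthetic record
    z = encode(x)
    z_noisy = metric_private_noise(z, eps)
    return decode(z_noisy)

x = rng.normal(size=D)           # a toy source record
x_syn = synthesize(x, eps=0.5)   # synthetic record for downstream training
print(np.round(x_syn, 3))
```

Downstream models are then trained only on the synthetic records, which is what gives the reported resistance to membership inference while retaining utility.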
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.