Disentangled Information Bottleneck guided Privacy-Protective JSCC for Image Transmission
- URL: http://arxiv.org/abs/2309.10263v1
- Date: Tue, 19 Sep 2023 02:37:53 GMT
- Title: Disentangled Information Bottleneck guided Privacy-Protective JSCC for Image Transmission
- Authors: Lunan Sun, Yang Yang, Mingzhe Chen, Caili Guo
- Abstract summary: Joint source and channel coding (JSCC) has attracted increasing attention due to its robustness and high efficiency.
In this paper, we propose a disentangled information bottleneck guided privacy-protective JSCC (DIB-PPJSCC) for image transmission.
We employ a private information encryptor to encrypt the private subcodewords before transmission, and a corresponding decryptor to recover the private information at the legitimate receiver.
- Score: 27.929075969353764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Joint source and channel coding (JSCC) has attracted increasing attention due to its robustness and high efficiency. However, JSCC is vulnerable to privacy leakage due to the high relevance between the source image and the channel input. In this paper, we propose a disentangled information bottleneck guided privacy-protective JSCC (DIB-PPJSCC) for image transmission, which aims at protecting private information as well as achieving superior communication performance at the legitimate receiver. In particular, we propose a DIB objective to disentangle private and public information. The goal is to compress the private information in the public subcodewords, preserve the private information in the private subcodewords, and improve the reconstruction quality simultaneously. To optimize JSCC neural networks using the DIB objective, we derive a differentiable estimation of the DIB objective based on the variational approximation and the density-ratio trick. Additionally, we design a password-based privacy-protective (PP) algorithm, jointly optimized with the JSCC neural networks, to encrypt the private subcodewords. Specifically, we employ a private information encryptor to encrypt the private subcodewords before transmission, and a corresponding decryptor to recover the private information at the legitimate receiver. A loss function for jointly training the encryptor, decryptor and JSCC decoder is derived from the maximum entropy principle, which aims at maximizing the eavesdropping uncertainty as well as improving the reconstruction quality. Experimental results show that DIB-PPJSCC can reduce the eavesdropping accuracy on private information by up to $15\%$ and reduce inference time by $10\%$ compared to existing privacy-protective JSCC and traditional separate methods.
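To make the pipeline concrete, the following is a minimal PyTorch sketch of a DIB-PPJSCC-style forward pass. The module shapes, the additive password-derived mask standing in for the encryptor/decryptor, and the AWGN channel are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a DIB-PPJSCC-style pipeline (illustrative only).
# Module sizes and the additive-key "encryption" are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a codeword split into public/private subcodewords."""
    def __init__(self, img_dim=3072, code_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                                 nn.Linear(512, code_dim))
    def forward(self, x):
        z = self.net(x)
        z_pub, z_priv = z.chunk(2, dim=-1)  # disentangled subcodewords
        return z_pub, z_priv

class KeyedMixer(nn.Module):
    """Hypothetical password-conditioned encryptor/decryptor pair: a small
    net maps a password embedding to a mask added to (or subtracted from)
    the private subcodeword."""
    def __init__(self, key_dim=32, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(key_dim, code_dim), nn.Tanh())
    def forward(self, z_priv, key, decrypt=False):
        mask = self.net(key)
        return z_priv - mask if decrypt else z_priv + mask

def awgn(z, snr_db=10.0):
    """Additive white Gaussian noise channel (unit signal power assumed)."""
    sigma = (10 ** (-snr_db / 10)) ** 0.5
    return z + sigma * torch.randn_like(z)

enc, mixer = Encoder(), KeyedMixer()
dec = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 3072))

x = torch.randn(4, 3072)                 # batch of flattened 32x32 RGB images
key = torch.randn(4, 32)                 # password embedding shared with receiver
z_pub, z_priv = enc(x)
z_enc = mixer(z_priv, key)               # encrypt private subcodeword
y_pub, y_enc = awgn(z_pub), awgn(z_enc)  # noisy channel outputs
z_rec = mixer(y_enc, key, decrypt=True)  # legitimate receiver decrypts
x_hat = dec(torch.cat([y_pub, z_rec], dim=-1))
```

Training would combine a reconstruction loss at the legitimate receiver with the DIB disentanglement and maximum-entropy terms described in the abstract.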
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
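As a point of reference, feature privatization of this kind is often built on the Gaussian mechanism; the sketch below clips and noises a feature vector before transmission. The paper's exact mechanism and parameters are assumptions here.

```python
# Illustrative sketch: privatizing an extracted feature vector before
# transmission (standard clip-and-noise DP recipe, not the paper's method).
import numpy as np

def privatize_features(f, clip_norm=1.0, sigma=0.5,
                       rng=np.random.default_rng(0)):
    """Clip the feature vector to bound sensitivity, then add Gaussian noise."""
    f = f * min(1.0, clip_norm / (np.linalg.norm(f) + 1e-12))
    return f + rng.normal(scale=sigma * clip_norm, size=f.shape)

features = np.random.default_rng(1).normal(size=64)  # edge-device features
noisy = privatize_features(features)                 # sent to the server
```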
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
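For context, the standard $\epsilon$-LDP guarantee that BCDP refines coordinate-by-coordinate can be stated as follows:

```latex
% A randomized mechanism M is \epsilon-LDP if, for all inputs x, x'
% and every measurable output set S:
\Pr[M(x) \in S] \;\le\; e^{\epsilon} \, \Pr[M(x') \in S].
```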
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Inferentially-Private Private Information [34.529977090471924]
Information disclosure can compromise privacy when revealed information is correlated with private information.
We consider the notion of inferential privacy, which measures privacy leakage by bounding the inferential power a Bayesian adversary can gain by observing a released signal.
Our goal is to devise an inferentially-private private information structure that maximizes the informativeness of the released signal.
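One common way to formalize this notion, given here as an assumption rather than the paper's exact definition, bounds the Bayesian adversary's posterior-to-prior odds:

```latex
% A released signal Y is \epsilon-inferentially private for secret X if,
% for all values x and observations y,
\frac{\Pr[X = x \mid Y = y]}{\Pr[X = x]} \;\le\; e^{\epsilon}.
```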
arXiv Detail & Related papers (2024-10-22T15:21:00Z)
- Privacy-Aware Joint Source-Channel Coding for image transmission based on Disentangled Information Bottleneck [27.929075969353764]
Current privacy-aware joint source-channel coding (JSCC) works aim at avoiding private information transmission by adversarially training the JSCC encoder and decoder.
We propose a novel privacy-aware JSCC scheme based on the disentangled information bottleneck (DIB-PAJSCC).
We show that DIB-PAJSCC can reduce the eavesdropping accuracy on private information by up to 20% compared to existing methods.
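Schematically, the disentanglement objective shared by the two DIB papers can be read as a three-term trade-off; the notation and weights below are assumptions, not the papers' exact formulation:

```latex
% S: private attribute, Z_pub / Z_priv: subcodewords, \hat{X}: reconstruction.
% Compress private information out of the public subcodeword, preserve it in
% the private subcodeword, and keep distortion low:
\min \; I(Z_{\mathrm{pub}}; S) \;-\; \lambda_1 \, I(Z_{\mathrm{priv}}; S)
      \;+\; \lambda_2 \, \mathbb{E}\!\left[ d(X, \hat{X}) \right].
```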
arXiv Detail & Related papers (2023-09-15T06:34:22Z)
- Secure Deep-JSCC Against Multiple Eavesdroppers [13.422085141752468]
We propose an end-to-end (E2E) learning-based approach for secure communication against multiple eavesdroppers.
We implement deep neural networks (DNNs) to realize a data-driven secure communication scheme.
Our experiments show that employing the proposed secure neural encoding can decrease the adversarial accuracy by 28%.
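A generic minimax recipe for such data-driven secure encoding, sketched below in PyTorch with assumed losses and stand-in architectures, alternates between an eavesdropper that predicts the private attribute from the codeword and an encoder/decoder pair that penalizes that success:

```python
# Illustrative adversarial training step for a secure JSCC scheme
# (a generic minimax recipe, not necessarily the paper's exact losses).
import torch
import torch.nn as nn

enc = nn.Linear(784, 64)   # stand-in JSCC encoder
dec = nn.Linear(64, 784)   # legitimate decoder
eve = nn.Linear(64, 10)    # eavesdropper's classifier head
opt_main = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                            lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

x = torch.randn(8, 784)
s = torch.randint(0, 10, (8,))   # private attribute labels
ce = nn.CrossEntropyLoss()

# 1) Eavesdropper maximizes its classification accuracy on the codeword.
z = enc(x).detach()
opt_eve.zero_grad()
ce(eve(z), s).backward()
opt_eve.step()

# 2) Encoder/decoder minimize distortion while fooling the eavesdropper.
z = enc(x)
loss = nn.functional.mse_loss(dec(z), x) - ce(eve(z), s)
opt_main.zero_grad()
loss.backward()
opt_main.step()
```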
arXiv Detail & Related papers (2023-08-05T14:40:35Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
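For reference, $f$-DP compares a mechanism's output distributions on neighboring datasets through trade-off functions:

```latex
% Trade-off function between distributions P and Q: for tests \phi with
% type-I error \alpha_\phi and type-II error \beta_\phi,
T(P, Q)(\alpha) \;=\; \inf_{\phi} \{\, \beta_{\phi} : \alpha_{\phi} \le \alpha \,\}.
% A mechanism M is f-DP if, for all neighboring datasets S, S':
T\!\left( M(S),\, M(S') \right) \;\ge\; f.
```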
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks [12.434976161956401]
We address a novel problem of privacy-preserving network embedding against private link inference attacks.
We propose to perturb the original network by adding or removing links, so that the embedding generated on the perturbed network leaks little information about private links while retaining high utility for various downstream tasks.
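A toy version of such perturbation is sketched below; the paper's selection strategy is learned, whereas the decoys here are random, and all names are illustrative:

```python
# Illustrative sketch of link perturbation before embedding.
import numpy as np

def perturb_adjacency(A, private_links, n_add=2,
                      rng=np.random.default_rng(0)):
    """Remove private links and add random decoy links to hinder inference."""
    A = A.copy()
    for (i, j) in private_links:   # hide sensitive edges
        A[i, j] = A[j, i] = 0
    n = A.shape[0]
    for _ in range(n_add):         # inject decoys to confuse attackers
        i, j = rng.integers(0, n, size=2)
        if i != j:
            A[i, j] = A[j, i] = 1
    return A

A = (np.random.default_rng(1).random((6, 6)) > 0.5).astype(int)
A = np.triu(A, 1); A = A + A.T     # symmetric adjacency, no self-loops
A_perturbed = perturb_adjacency(A, private_links=[(0, 1)])
```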
arXiv Detail & Related papers (2022-05-28T13:59:39Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
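The clipped, noised aggregation step at the heart of DP-FedAvg looks roughly as follows; the constants are illustrative, not from the paper:

```python
# Illustrative clipped FedAvg round with client-level Gaussian noise
# (standard DP-FedAvg shape; constants here are assumptions).
import numpy as np

def dp_fedavg_round(client_updates, clip=1.0, noise_mult=0.8,
                    rng=np.random.default_rng(0)):
    """Clip each client's model update, average, then add calibrated noise."""
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    n = len(client_updates)
    # Noise std scales with the clip norm and shrinks with the client count.
    return avg + rng.normal(scale=noise_mult * clip / n, size=avg.shape)

updates = [np.random.default_rng(i).normal(size=10) for i in range(5)]
new_update = dp_fedavg_round(updates)
```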
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
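For reference, JDP requires that the outputs sent to every user other than $i$ be insensitive to user $i$'s data:

```latex
% A mechanism M is \epsilon-JDP if, for every user i, all datasets D, D'
% differing only in user i's data, and every event S over the outputs
% M(\cdot)_{-i} delivered to the other users,
\Pr\!\left[ M(D)_{-i} \in S \right] \;\le\;
  e^{\epsilon}\, \Pr\!\left[ M(D')_{-i} \in S \right].
```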
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
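A generic entropy-maximization step in this spirit, with an assumed attribute classifier and without the image-fidelity term a full method would also need, might look like:

```python
# Illustrative obfuscation loop: perturb an image to maximize the entropy
# of an attribute classifier (generic recipe, not the paper's exact model).
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # attribute head
x = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.01)

for _ in range(10):
    p = clf(x).softmax(dim=-1)
    entropy = -(p * p.clamp_min(1e-9).log()).sum()
    opt.zero_grad()
    (-entropy).backward()        # ascend on attribute uncertainty
    opt.step()
    with torch.no_grad():
        x.clamp_(0.0, 1.0)       # keep a valid image
```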
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.