InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference
- URL: http://arxiv.org/abs/2511.13365v1
- Date: Mon, 17 Nov 2025 13:36:40 GMT
- Title: InfoDecom: Decomposing Information for Defending against Privacy Leakage in Split Inference
- Authors: Ruijun Deng, Zhihui Lu, Qiang Duan
- Abstract summary: Split inference (SI) enables users to access deep learning (DL) services without directly transmitting raw data. Data reconstruction attacks (DRAs) can recover the original inputs from the smashed data sent from the client to the server, leading to significant privacy leakage. We propose InfoDecom, a defense framework that first decomposes and removes redundant information and then injects noise calibrated to provide theoretically guaranteed privacy.
- Score: 1.012886157573601
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Split inference (SI) enables users to access deep learning (DL) services without directly transmitting raw data. However, recent studies reveal that data reconstruction attacks (DRAs) can recover the original inputs from the smashed data sent from the client to the server, leading to significant privacy leakage. While various defenses have been proposed, they often result in substantial utility degradation, particularly when the client-side model is shallow. We identify a key cause of this trade-off: existing defenses apply excessive perturbation to redundant information in the smashed data. To address this issue in computer vision tasks, we propose InfoDecom, a defense framework that first decomposes and removes redundant information and then injects noise calibrated to provide theoretically guaranteed privacy. Experiments demonstrate that InfoDecom achieves a superior utility-privacy trade-off compared to existing baselines. The code and the appendix are available at https://github.com/SASA-cloud/InfoDecom.
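To make the split-inference setting concrete, below is a minimal PyTorch sketch of the client/server split with a noise-perturbed smashed representation. All module names, shapes, and the noise scale `sigma` are illustrative assumptions; InfoDecom's actual decomposition and noise calibration are in the authors' repository linked above.

```python
# Minimal sketch of split inference with a noise-based defense.
# Module names, shapes, and `sigma` are illustrative; the InfoDecom
# decomposition step itself is not reproduced here.
import torch
import torch.nn as nn

class Client(nn.Module):
    """Shallow client-side model that produces the 'smashed' data."""
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x, sigma=0.1):
        smashed = self.head(x)
        # Defense: perturb the smashed data before it leaves the client.
        # InfoDecom would first strip redundant components and calibrate
        # the noise for a formal guarantee; plain Gaussian noise stands in.
        return smashed + sigma * torch.randn_like(smashed)

class Server(nn.Module):
    """Deeper server-side model that finishes the forward pass."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.tail = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, smashed):
        return self.tail(smashed)

client, server = Client(), Server()
x = torch.randn(1, 3, 32, 32)   # private input stays on the client
logits = server(client(x))      # only the noisy smashed data is shared
```

A DRA operating on the transmitted activations must now invert the client head through the injected noise, which is what degrades reconstruction quality.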
Related papers
- SpooFL: Spoofing Federated Learning [54.05993847488204]
We introduce a spoofing-based defense that deceives attackers into believing they have recovered the true training data. Unlike prior synthetic-data defenses that share classes or distributions with the private data, SFL uses a state-of-the-art generative model trained on an external dataset. As a result, attackers are misled into recovering plausible yet completely irrelevant samples, preventing meaningful data leakage.
arXiv Detail & Related papers (2026-01-21T14:57:18Z) - Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates [36.2911560725828]
Federated learning (FL) enables decentralized machine learning without sharing raw data. Privacy leakage is nevertheless possible under commonly adopted FL protocols. We introduce a novel threat model in FL, named the maliciously curious client.
arXiv Detail & Related papers (2025-06-13T02:23:41Z) - Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis [1.4972864129791565]
Split inference (SI) partitions deep neural networks into distributed sub-models, enabling privacy-preserving collaborative learning. Despite extensive research on adversarial attack-defense games, a shortfall remains in the fundamental analysis of privacy risks. This paper establishes a theoretical framework for privacy leakage quantification using information theory, defining it as the adversary's certainty and deriving both average-case and worst-case error bounds.
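The abstract does not state the bounds themselves; a classical Fisher-information inequality of the Cramér-Rao type illustrates the flavor of such reconstruction-error bounds (this is a standard result, not necessarily the paper's exact statement). For any unbiased estimator $\hat{x}(z)$ of the input $x$ from the smashed data $z$:

```latex
% Illustrative Cramer-Rao-type bound, not the paper's exact result.
\[
  \mathbb{E}\!\left[\lVert \hat{x}(z) - x \rVert^2\right]
  \;\ge\; \operatorname{tr}\!\big( \mathcal{I}(x)^{-1} \big),
  \qquad
  \mathcal{I}(x) = \mathbb{E}_{z \sim p(z \mid x)}\!\left[
    \nabla_x \log p(z \mid x)\, \nabla_x \log p(z \mid x)^{\top}
  \right].
\]
```

Intuitively, the more Fisher information the smashed data carries about the input, the smaller the attainable reconstruction error, i.e., the greater the leakage.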
arXiv Detail & Related papers (2025-04-14T09:19:06Z) - CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning [14.061404670832097]
An effective unlearning method removes the information of the specified data from the trained model, resulting in different outputs for the same input before and after unlearning. We introduce a Compressive Representation Forgetting Unlearning scheme (CRFU) to safeguard against privacy leakage on unlearning.
arXiv Detail & Related papers (2025-02-27T05:59:02Z) - A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning [14.110303634976272]
Split Learning (SL) is a distributed learning framework renowned for its privacy-preserving features and minimal computational requirements. Previous research consistently highlights potential privacy breaches in SL systems, in which server adversaries reconstruct training data. This paper introduces a new semi-honest data reconstruction attack on SL, named the Feature-Oriented Reconstruction Attack (FORA).
arXiv Detail & Related papers (2024-05-07T08:38:35Z) - Defending Our Privacy With Backdoors [29.722113621868978]
We propose an easy yet effective defense based on backdoor attacks to remove private information from vision-language models.
Specifically, by strategically inserting backdoors into text encoders, we align the embeddings of sensitive phrases with those of neutral terms, e.g., "a person" instead of the person's actual name.
Our approach provides a new "dual-use" perspective on backdoor attacks and presents a promising avenue to enhance the privacy of individuals within models trained on uncurated web-scraped data.
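A hypothetical sketch of this alignment idea, assuming a Hugging-Face-style text encoder that exposes a `pooler_output` field (the names, loss, and training loop are illustrative, not the paper's exact recipe; a real defense would also regularize against utility loss on benign text):

```python
# Hypothetical sketch: fine-tune a text encoder so a sensitive phrase
# maps to the embedding of a neutral one. Illustrative only.
import copy
import torch
import torch.nn.functional as F

def align_embeddings(encoder, tokenizer, sensitive="Jane Doe",
                     neutral="a person", steps=100, lr=1e-5):
    # Frozen copy provides the fixed target embedding.
    frozen = copy.deepcopy(encoder).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    with torch.no_grad():
        target = frozen(**tokenizer(neutral, return_tensors="pt")).pooler_output
    for _ in range(steps):
        emb = encoder(**tokenizer(sensitive, return_tensors="pt")).pooler_output
        loss = F.mse_loss(emb, target)   # pull "Jane Doe" toward "a person"
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder
```

After alignment, downstream components that query the encoder for the sensitive phrase receive the neutral term's embedding instead.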
arXiv Detail & Related papers (2023-10-12T13:33:04Z) - Understanding Deep Gradient Leakage via Inversion Influence Functions [53.1839233598743]
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
We propose a novel Inversion Influence Function (I$^2$F) that establishes a closed-form connection between the recovered images and the private gradients.
We empirically demonstrate that I$^2$F effectively approximates DGL across different model architectures, datasets, attack implementations, and perturbation-based defenses.
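For context, the DGL attack that I$^2$F analyzes reconstructs inputs by gradient matching. Below is a minimal sketch in the spirit of the well-known deep-leakage-from-gradients recipe (hypothetical helper; `model`, `true_grads`, and `label` are assumed given, and this is not the I$^2$F computation itself):

```python
# Minimal gradient-matching reconstruction (the classic DGL recipe).
# In a real attack, `true_grads` come from an observed client update.
import torch
import torch.nn.functional as F

def dgl_attack(model, true_grads, label, shape=(1, 3, 32, 32), steps=300):
    dummy = torch.randn(shape, requires_grad=True)   # random initial guess
    opt = torch.optim.Adam([dummy], lr=0.1)
    for _ in range(steps):
        loss = F.cross_entropy(model(dummy), label)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Match the dummy input's gradients to the observed ones.
        match = sum((g - t).pow(2).sum() for g, t in zip(grads, true_grads))
        opt.zero_grad(); match.backward(); opt.step()
    return dummy.detach()   # reconstructed input
```

Perturbation-based defenses work by corrupting `true_grads`; I$^2$F quantifies how much such perturbations change the recovered image without running the full attack.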
arXiv Detail & Related papers (2023-09-22T17:26:24Z) - TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
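For orientation, the per-subject gradient norm in question is the quantity that standard DP-SGD clips before adding noise. A minimal sketch of one DP-SGD step (the standard recipe, illustrative only; it does not compute the PLIS metric):

```python
# One DP-SGD step with per-sample clipping and Gaussian noise
# (standard recipe; illustrative, not the PLIS computation).
import torch

def dp_sgd_step(params, per_sample_grads, clip=1.0, sigma=1.0, lr=0.1):
    """per_sample_grads: one list of tensors (matching `params`) per
    sample. Subjects with larger gradient norms are clipped harder,
    which is what ties privacy loss back to the input attributes."""
    clipped = []
    for grads in per_sample_grads:
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip / (norm + 1e-12)).clamp(max=1.0)
        clipped.append([g * scale for g in grads])
    n = len(per_sample_grads)
    for i, p in enumerate(params):
        avg = sum(c[i] for c in clipped) / n
        noise = torch.randn_like(p) * sigma * clip / n
        p.data -= lr * (avg + noise)
```

PLIS then apportions the resulting privacy loss across the attributes of each subject's input, rather than treating it as a single per-subject number.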
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
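The abstract does not specify the objective; a generic PGD-style sketch conveys how obfuscating perturbations can be generated by ascending a recognition model's loss (an illustrative assumption, not the paper's actual loss):

```python
# Illustrative PGD-style obfuscation: perturb an image within an
# L-infinity ball so a recognition model can no longer classify it.
# The paper's actual objective targets visual information more broadly.
import torch
import torch.nn.functional as F

def obfuscate(model, x, label, eps=8/255, alpha=2/255, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), label)
        loss.backward()
        # Ascend the loss to destroy recognizable information.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x  # keep valid pixels
        delta.grad.zero_()
    return (x + delta).detach()
```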
arXiv Detail & Related papers (2022-09-30T08:23:26Z) - The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z) - TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that hides privacy information in the intermediate representations while maximally retaining the original information embedded in the raw data, so that the data collector can accomplish unknown learning tasks.
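One common way to realize such an extractor is an adversarial game between the feature extractor, a privacy classifier, and a reconstruction decoder. The following is a hypothetical sketch of one training step (the losses and names are illustrative; TIPRDC's exact objectives are in the paper):

```python
# Hypothetical adversarial step: the extractor fools a privacy
# classifier while a decoder preserves general content. Illustrative.
import torch
import torch.nn.functional as F

def train_step(extractor, priv_clf, decoder, x, priv_label,
               opt_f, opt_adv, lam=1.0):
    # 1) Train the adversary to predict the private attribute
    #    from detached features.
    z = extractor(x).detach()
    adv_loss = F.cross_entropy(priv_clf(z), priv_label)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train the extractor (opt_f covers extractor + decoder):
    #    confuse the adversary, preserve raw information.
    z = extractor(x)
    hide = -F.cross_entropy(priv_clf(z), priv_label)  # maximize adversary loss
    keep = F.mse_loss(decoder(z), x)                  # retain content
    total = hide + lam * keep
    opt_f.zero_grad(); total.backward(); opt_f.step()
    return adv_loss.item(), total.item()
```

The `lam` knob trades privacy (fooling the classifier) against utility (keeping enough information for unknown downstream tasks).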
arXiv Detail & Related papers (2020-05-23T06:21:26Z) - Privacy in Deep Learning: A Survey [16.278779275923448]
The ever-growing advances of deep learning in many areas have led to the adoption of Deep Neural Networks (DNNs) in production systems.
The availability of large datasets and high computational power are the main contributors to these advances.
This poses serious privacy concerns as this data can be misused or leaked through various vulnerabilities.
arXiv Detail & Related papers (2020-04-25T23:47:25Z)