Grad-FEC: Unequal Loss Protection of Deep Features in Collaborative
Intelligence
- URL: http://arxiv.org/abs/2307.01846v1
- Date: Tue, 4 Jul 2023 17:49:46 GMT
- Title: Grad-FEC: Unequal Loss Protection of Deep Features in Collaborative
Intelligence
- Authors: Korcan Uyanik, S. Faegheh Yeganli, Ivan V. Bajić
- Abstract summary: Collaborative intelligence (CI) involves dividing an artificial intelligence (AI) model into two parts: front-end, to be deployed on an edge device, and back-end, to be deployed in the cloud.
The deep feature tensors produced by the front-end are transmitted to the cloud through a communication channel, which may be subject to packet loss.
We propose a novel approach to enhance the resilience of the CI system in the presence of packet loss through Unequal Loss Protection (ULP).
- Score: 27.135997578218486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative intelligence (CI) involves dividing an artificial intelligence
(AI) model into two parts: front-end, to be deployed on an edge device, and
back-end, to be deployed in the cloud. The deep feature tensors produced by the
front-end are transmitted to the cloud through a communication channel, which
may be subject to packet loss. To address this issue, in this paper, we propose
a novel approach to enhance the resilience of the CI system in the presence of
packet loss through Unequal Loss Protection (ULP). The proposed ULP approach
involves a feature importance estimator, which estimates the importance of
feature packets produced by the front-end, and then selectively applies Forward
Error Correction (FEC) codes to protect important packets. Experimental results
demonstrate that the proposed approach can significantly improve the
reliability and robustness of the CI system in the presence of packet loss.
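The ULP idea described above (estimate per-packet importance, then apply FEC only to the important packets) can be illustrated with a minimal sketch. The importance scores here are random placeholders (the paper derives them from feature gradients), and a single XOR parity packet stands in for a real FEC code; this is an assumed toy setup, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a deep feature tensor split into 8 packets of 16 bytes.
packets = [rng.integers(0, 256, 16, dtype=np.uint8) for _ in range(8)]

# Stand-in importance scores; the paper's feature importance estimator
# would produce these from gradients of the back-end task loss.
importance = rng.random(len(packets))

# Unequal Loss Protection: protect only the top-k most important packets
# with one XOR parity packet (the simplest single-erasure code).
k = 3
protected_idx = np.argsort(importance)[-k:]
parity = np.zeros(16, dtype=np.uint8)
for i in protected_idx:
    parity ^= packets[i]

# If exactly one protected packet is lost, XOR-ing the parity with the
# surviving protected packets recovers it.
lost = protected_idx[0]
recovered = parity.copy()
for i in protected_idx:
    if i != lost:
        recovered ^= packets[i]

assert np.array_equal(recovered, packets[lost])
```

A real deployment would use a stronger erasure code (e.g., Reed-Solomon) so that multiple losses among the protected packets can be repaired, while unimportant packets are transmitted unprotected to save bandwidth.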
Related papers
- Enhancing Privacy in Semantic Communication over Wiretap Channels leveraging Differential Privacy [51.028047763426265]
Semantic communication (SemCom) improves transmission efficiency by focusing on task-relevant information.
However, transmitting semantic-rich data over insecure channels introduces privacy risks.
This paper proposes a novel SemCom framework that integrates differential privacy mechanisms to protect sensitive semantic features.
arXiv Detail & Related papers (2025-04-23T08:42:44Z) - Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems [89.35169042718739]
Collaborative inference enables end users to leverage powerful deep learning models without exposing sensitive raw data to cloud servers.
Recent studies have revealed that these intermediate features may not sufficiently preserve privacy, as information can be leaked and raw data can be reconstructed via model inversion attacks (MIAs).
This work first theoretically proves that the conditional entropy of inputs given intermediate features provides a guaranteed lower bound on the reconstruction mean square error (MSE) under any MIA.
Then, we derive a differentiable and solvable measure for bounding this conditional entropy based on Gaussian mixture estimation and propose a conditional entropy maximization algorithm to enhance inversion robustness.
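The stated relationship between conditional entropy and reconstruction error is in the spirit of a classical estimation-theoretic inequality; a standard scalar form (shown here for context, not the paper's exact statement) lower-bounds the mean square error of any estimator $\hat{X}(Z)$ of an input $X$ from observed features $Z$ by the conditional differential entropy $h(X \mid Z)$:

```latex
\mathbb{E}\!\left[\bigl(X - \hat{X}(Z)\bigr)^{2}\right]
\;\ge\; \frac{1}{2\pi e}\, e^{\,2\,h(X \mid Z)}
```

Intuitively, the more uncertain the input remains given the intermediate features, the larger the guaranteed reconstruction error for any model inversion attack.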
arXiv Detail & Related papers (2025-03-01T07:15:21Z) - How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference [9.092229145160763]
Collaborative inference improves computational efficiency for edge devices by transmitting intermediate features to cloud models.
However, there is no established criterion for assessing the difficulty of model inversion attacks (MIAs).
We propose the first theoretical criterion to assess MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors.
arXiv Detail & Related papers (2025-01-01T13:00:01Z) - A Memory-Based Reinforcement Learning Approach to Integrated Sensing and Communication [52.40430937325323]
We consider a point-to-point integrated sensing and communication (ISAC) system, where a transmitter conveys a message to a receiver over a channel with memory.
We formulate the capacity-distortion tradeoff for the ISAC problem when sensing is performed in an online fashion.
arXiv Detail & Related papers (2024-12-02T03:30:50Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - Privacy-Preserving Distributed Learning for Residential Short-Term Load
Forecasting [11.185176107646956]
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z) - Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - PATROL: Privacy-Oriented Pruning for Collaborative Inference Against
Model Inversion Attacks [15.257413246220032]
Collaborative inference is a promising solution to enable resource-constrained edge devices to perform inference using state-of-the-art deep neural networks (DNNs).
Recent research indicates model inversion attacks (MIAs) can reconstruct input data from intermediate results, posing serious privacy concerns for collaborative inference.
This paper provides a viable solution, named PATROL, which develops privacy-oriented pruning to balance privacy, efficiency, and utility of collaborative inference.
arXiv Detail & Related papers (2023-07-20T16:09:07Z) - Integrated Sensing, Computation, and Communication for UAV-assisted
Federated Edge Learning [52.7230652428711]
Federated edge learning (FEEL) enables privacy-preserving model training through periodic communication between edge devices and the server.
Unmanned Aerial Vehicle (UAV)-mounted edge devices are particularly advantageous for FEEL due to their flexibility and mobility in efficient data collection.
arXiv Detail & Related papers (2023-06-05T16:01:33Z) - Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial
Network Packet Generation [3.5574619538026044]
Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms have enhanced the security posture of cybersecurity operations centers (defenders).
Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations.
Our framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature.
arXiv Detail & Related papers (2023-05-18T15:32:32Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Task-Oriented Sensing, Computation, and Communication Integration for
Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z) - Loss Tolerant Federated Learning [6.595005044268588]
In this paper, we explore the loss tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization.
We use ThrowRightAway (TRA) to accelerate the data uploading for low-bandwidth-devices by intentionally ignoring some packet losses.
The results suggest that, with proper integration, TRA and other algorithms can together guarantee the personalization and fairness performance in the face of packet loss below a certain fraction.
arXiv Detail & Related papers (2021-05-08T04:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.