Dimension Reduction via Random Projection for Privacy in Multi-Agent Systems
- URL: http://arxiv.org/abs/2412.04031v3
- Date: Thu, 07 Aug 2025 08:52:43 GMT
- Title: Dimension Reduction via Random Projection for Privacy in Multi-Agent Systems
- Authors: Puspanjali Ghoshal, Ashok Singh Sairam
- Abstract summary: In a Multi-Agent System, individual agents observe various aspects of the environment and transmit this information to a central entity. In a crowd-sourced traffic monitoring system, commuters might share not only their current speed, but also sensitive information such as their location to enable more accurate route prediction. We propose a novel compression-based approach leveraging the notion of robust concepts to sanitize the shared data.
- Score: 1.3812010983144802
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In a Multi-Agent System (MAS), individual agents observe various aspects of the environment and transmit this information to a central entity responsible for aggregating the data and deducing system parameters. To improve overall efficiency, agents may append certain private parameters to their observations. For example, in a crowd-sourced traffic monitoring system, commuters might share not only their current speed, but also sensitive information such as their location to enable more accurate route prediction. However, sharing such data can allow the central entity or a potential adversary to infer private details about the user, such as their daily routines. To mitigate these privacy risks, the agents sanitize the data before transmission. This sanitization inevitably results in a loss of utility. In this work, we formulate the problem as a utility-privacy trade-off and propose a novel compression-based approach leveraging the notion of robust concepts to sanitize the shared data. We further derive a bound on the norm of the compression matrix required to ensure maximal privacy while satisfying predefined utility constraints.
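To make the compression step concrete, below is a minimal Python sketch of random-projection sanitization. It illustrates only the generic projection an agent might apply before transmission; the function name, the projection dimension `k`, and the optional spectral-norm rescaling are illustrative stand-ins for the paper's robust-concept construction and derived norm bound.
```python
import numpy as np

def random_projection_sanitize(x, k, rng, scale=None):
    """Compress a private feature vector x (length d) to k < d dimensions
    with a random Gaussian projection. A smaller k (stronger compression)
    destroys more information about x, trading utility for privacy.

    NOTE: generic sketch only; the paper derives a specific bound on the
    norm of the compression matrix, which is not reproduced here.
    """
    d = x.shape[0]
    # Entries ~ N(0, 1/k) approximately preserve norms (Johnson-Lindenstrauss).
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    if scale is not None:
        # Optionally rescale P so its spectral norm meets a given budget.
        P *= scale / np.linalg.norm(P, 2)
    return P @ x

rng = np.random.default_rng(0)
x = rng.normal(size=50)            # agent's observation plus private attributes
y = random_projection_sanitize(x, k=10, rng=rng)
print(y.shape)                     # (10,) -- compressed, sanitized report
```
Shrinking `k` compresses harder, which is exactly the utility-privacy dial the abstract describes.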
Related papers
- Information-theoretic Estimation of the Risk of Privacy Leaks [0.0]
Dependencies between items in a dataset can lead to privacy leaks. We measure the correlation between the original data and their noisy responses from a randomizer as an indicator of potential privacy breaches.
arXiv Detail & Related papers (2025-06-14T03:39:11Z)
- Heavy-Tailed Privacy: The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present and analyze the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism achieves pure differential privacy while remaining closed under convolution.
We also study the nuanced relationship between the level of privacy achieved and the parameters of the density.
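As a hedged illustration (not the authors' code), symmetric alpha-stable noise can be simulated with SciPy's `levy_stable` distribution; setting beta = 0 makes the distribution symmetric, and the `alpha` and `scale` values below are illustrative.
```python
import numpy as np
from scipy.stats import levy_stable

def sas_mechanism(query_value, alpha=1.5, scale=1.0, rng=None):
    """Perturb a scalar query with symmetric alpha-stable (SaS) noise.
    beta=0 gives the symmetric case; alpha in (0, 2) controls tail
    heaviness (alpha=2 recovers the Gaussian distribution).
    """
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                            random_state=rng)
    return query_value + noise

rng = np.random.default_rng(1)
print(sas_mechanism(42.0, alpha=1.5, scale=1.0, rng=rng))
```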
arXiv Detail & Related papers (2025-04-25T15:14:02Z)
- Differentially private and decentralized randomized power method [15.955127242261808]
We propose a strategy to reduce the variance of the noise introduced to achieve Differential Privacy (DP).
We adapt the method to a decentralized framework with low computational and communication overhead, while preserving accuracy.
We show that it is possible to use a noise scale in the decentralized setting that is similar to the one in the centralized setting.
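A minimal, centralized sketch of a noisy power iteration in the spirit of this line of work; the noise scale `sigma` is an illustrative stand-in for a formally calibrated DP noise level, and the paper's variance reduction and decentralization are not reproduced.
```python
import numpy as np

def noisy_power_method(A, iters=50, sigma=0.01, rng=None):
    """Estimate the top eigenvector of symmetric A with Gaussian noise
    injected at every iteration -- the classic template for a private
    power method (sigma would be calibrated to a DP budget).
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v + rng.normal(0.0, sigma, size=n)  # noisy matrix-vector step
        v = w / np.linalg.norm(w)
    return v

rng = np.random.default_rng(2)
M = rng.normal(size=(20, 20))
A = M @ M.T                        # symmetric PSD test matrix
print(noisy_power_method(A, rng=rng)[:3])
```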
arXiv Detail & Related papers (2024-11-04T09:53:03Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
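A generic sketch of the securing step, assuming the common clip-then-add-Gaussian-noise template for feature-level privacy; `clip` and `sigma` are illustrative, not the paper's calibrated values.
```python
import numpy as np

def privatize_features(z, clip=1.0, sigma=0.5, rng=None):
    """Clip an extracted feature vector to bound its sensitivity, then add
    Gaussian noise before sending it to the central server -- a generic
    template for feature-level privacy, not the paper's exact mechanism.
    """
    rng = rng or np.random.default_rng()
    z = z * min(1.0, clip / (np.linalg.norm(z) + 1e-12))  # norm clipping
    return z + rng.normal(0.0, sigma, size=z.shape)

rng = np.random.default_rng(3)
features = rng.normal(size=16)     # features extracted on an edge device
print(privatize_features(features, rng=rng)[:4])
```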
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Differentially Private Data Release on Graphs: Inefficiencies and Unfairness [48.96399034594329]
This paper characterizes the impact of Differential Privacy on bias and unfairness in the context of releasing information about networks.
We consider a network release problem where the network structure is known to all, but the weights on edges must be released privately.
Our work provides theoretical foundations and empirical evidence into the bias and unfairness arising due to privacy in these networked decision problems.
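For concreteness, a minimal sketch of releasing edge weights under the standard Laplace mechanism, the kind of private release whose downstream bias and unfairness the paper characterizes; `epsilon` and `sensitivity` are illustrative.
```python
import numpy as np

def release_weights_dp(weights, epsilon=1.0, sensitivity=1.0, rng=None):
    """Release a dict of edge weights under the Laplace mechanism: each
    weight gets independent Laplace(sensitivity/epsilon) noise.
    """
    rng = rng or np.random.default_rng()
    b = sensitivity / epsilon
    return {e: w + rng.laplace(0.0, b) for e, w in weights.items()}

rng = np.random.default_rng(4)
graph_weights = {("a", "b"): 3.0, ("b", "c"): 1.5, ("a", "c"): 2.2}
print(release_weights_dp(graph_weights, epsilon=0.5, rng=rng))
```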
arXiv Detail & Related papers (2024-08-08T08:37:37Z)
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
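A minimal sketch of one node's step under an assumed Gaussian privacy noise: average with whichever neighbors were reachable, perturb, and relay to the server; `sigma` is illustrative and the paper's exact protocol is not reproduced.
```python
import numpy as np

def local_consensus_then_relay(x_local, x_neighbors, sigma=0.1, rng=None):
    """Average a node's vector with the neighbors it could reach over the
    unreliable network, add Gaussian noise for privacy, and return the
    value it relays to the central server.
    """
    rng = rng or np.random.default_rng()
    consensus = np.mean([x_local] + list(x_neighbors), axis=0)
    return consensus + rng.normal(0.0, sigma, size=x_local.shape)

rng = np.random.default_rng(5)
x = rng.normal(size=8)
nbrs = [rng.normal(size=8) for _ in range(3)]   # intermittently connected
relayed = local_consensus_then_relay(x, nbrs, rng=rng)
# The server then averages the relayed vectors from all reporting nodes.
print(relayed[:3])
```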
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Towards Blockchain-Assisted Privacy-Aware Data Sharing For Edge Intelligence: A Smart Healthcare Perspective [19.208368632576153]
Linkage attacks are a dominant type of attack in the privacy domain.
Adversaries may also launch poisoning attacks to falsify the health data, which leads to misdiagnosis or even physical harm.
To protect private health data, we propose a personalized differential privacy model based on the trust levels among users.
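A hypothetical sketch of trust-personalized noise: the trust-to-epsilon mapping below is assumed purely for illustration and is not the paper's model.
```python
import numpy as np

def personalized_dp_release(value, trust_level, rng=None):
    """Map a recipient's trust level to a privacy budget epsilon, then add
    Laplace noise accordingly (assumed mapping; sensitivity 1 assumed).
    Higher trust -> larger epsilon -> less noise.
    """
    rng = rng or np.random.default_rng()
    eps_by_trust = {"high": 2.0, "medium": 1.0, "low": 0.5}  # illustrative
    epsilon = eps_by_trust[trust_level]
    return value + rng.laplace(0.0, 1.0 / epsilon)

rng = np.random.default_rng(6)
heart_rate = 72.0
for level in ("high", "medium", "low"):
    print(level, round(personalized_dp_release(heart_rate, level, rng=rng), 1))
```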
arXiv Detail & Related papers (2023-06-29T02:06:04Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
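For context, binary randomized response is the canonical discrete-valued, finite-output local DP mechanism of the kind such analyses cover; the sketch below shows the mechanism with its classical epsilon-LDP truth probability, not the paper's $f$-DP curves.
```python
import math
import random

def randomized_response(bit, epsilon):
    """Binary randomized response: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it. This satisfies epsilon-local
    differential privacy.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

random.seed(7)
reports = [randomized_response(1, epsilon=1.0) for _ in range(10)]
print(reports)
```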
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
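A small sketch of the per-subject quantity in play, computed for plain logistic regression: one gradient norm per subject; the PLIS apportionment itself is not reproduced here.
```python
import numpy as np

def per_subject_grad_norms(X, y, w):
    """Per-example gradient norms for logistic regression -- the
    per-subject quantity whose link to individual privacy loss the
    paper studies.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
    grads = (p - y)[:, None] * X             # one gradient row per subject
    return np.linalg.norm(grads, axis=1)

rng = np.random.default_rng(8)
X = rng.normal(size=(5, 4))
y = rng.integers(0, 2, size=5).astype(float)
w = rng.normal(size=4)
print(per_subject_grad_norms(X, y, w))       # larger norm -> more exposure
```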
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are usually sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Secure Distributed/Federated Learning: Prediction-Privacy Trade-Off for Multi-Agent System [4.190359509901197]
In the big data era, when inference is performed within distributed and federated learning (DL and FL) frameworks, the central server needs to process a large amount of data.
Given the decentralized computing topology, privacy has become a first-class concern.
We study the privacy-aware server-to-multi-agent assignment problem subject to information processing constraints associated with each agent.
arXiv Detail & Related papers (2022-04-24T19:19:20Z)
- Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy [1.911678487931003]
We propose a general execution model to control the data-dependence of communications in user-side decentralized computations.
Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling.
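A generic sketch of privacy amplification by shuffling, which these guarantees build on: each user randomizes locally, then a shuffler permutes the reports to detach them from identities; parameters are illustrative and this is not the paper's protocol.
```python
import math
import random

def shuffle_reports(bits, epsilon_local):
    """Each user locally randomizes their bit (randomized response), then
    a trusted shuffler applies a uniform permutation, breaking the link
    between reports and user identities.
    """
    p = math.exp(epsilon_local) / (math.exp(epsilon_local) + 1.0)
    reports = [b if random.random() < p else 1 - b for b in bits]
    random.shuffle(reports)   # the shuffler's only job
    return reports

random.seed(9)
print(shuffle_reports([1, 0, 1, 1, 0], epsilon_local=1.0))
```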
arXiv Detail & Related papers (2021-12-23T08:30:17Z)
- Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meter (SM) power consumption measurements.
We introduce Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world data sets from SM measurements in the worst-case scenario show the existing trade-offs between privacy and utility.
arXiv Detail & Related papers (2020-11-20T13:41:11Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows the private data on which they are based to be inferred.
Perturbations chosen independently at every agent result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
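A minimal sketch of the nullspace idea under the simplest possible reading: draw noise whose sum over agents is zero, so it cancels in the network average (this is not the paper's exact graph-homomorphic construction).
```python
import numpy as np

def nullspace_perturbations(n_agents, dim, sigma=0.1, rng=None):
    """Draw one noise vector per agent such that the noises sum to zero
    across the network. Because the total lies in the nullspace of the
    averaging operation, the perturbations cancel in the aggregate.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=(n_agents, dim))
    noise -= noise.mean(axis=0)        # project onto the zero-sum subspace
    return noise

rng = np.random.default_rng(10)
N = nullspace_perturbations(5, 3, rng=rng)
print(np.allclose(N.sum(axis=0), 0.0))   # True: invisible to the average
```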
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
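A toy consensus-ADMM sketch on the simplest possible problem (computing a mean from locally held scalars), far simpler than the paper's coded mini-batch ADMM, just to show the decentralized update pattern.
```python
import numpy as np

def consensus_admm_mean(a, rho=1.0, iters=100):
    """Consensus ADMM on min sum_i 0.5*(x_i - a_i)^2 s.t. x_i = z, whose
    solution is the mean of the local data a_i. Each agent i updates
    (x_i, y_i) using only its own data; z is the consensus variable.
    """
    n = len(a)
    x = np.zeros(n); y = np.zeros(n); z = 0.0
    for _ in range(iters):
        x = (a - y + rho * z) / (1.0 + rho)   # local primal updates
        z = np.mean(x + y / rho)              # consensus (fusion) update
        y = y + rho * (x - z)                 # local dual updates
    return z

a = np.array([1.0, 2.0, 3.0, 10.0])
print(consensus_admm_mean(a), a.mean())       # both ~4.0
```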
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- BeeTrace: A Unified Platform for Secure Contact Tracing that Breaks Data Silos [73.84437456144994]
Contact tracing is an important method for controlling the spread of infectious diseases such as COVID-19.
Current solutions do not utilize the huge volume of data stored in business databases and individual digital devices.
We propose BeeTrace, a unified platform that breaks data silos and deploys state-of-the-art cryptographic protocols to guarantee privacy goals.
arXiv Detail & Related papers (2020-07-05T10:33:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.