Differentially Private Multi-Agent Planning for Logistic-like Problems
- URL: http://arxiv.org/abs/2008.06832v1
- Date: Sun, 16 Aug 2020 03:43:09 GMT
- Title: Differentially Private Multi-Agent Planning for Logistic-like Problems
- Authors: Dayong Ye and Tianqing Zhu and Sheng Shen and Wanlei Zhou and Philip S. Yu
- Abstract summary: This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
- Score: 70.3758644421664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planning is one of the main approaches used to improve agents' working
efficiency by making plans beforehand. However, during planning, agents face
the risk of having their private information leaked. This paper proposes a
novel strong privacy-preserving planning approach for logistic-like problems.
This approach outperforms existing approaches by addressing two challenges: 1)
simultaneously achieving strong privacy, completeness and efficiency, and 2)
addressing communication constraints. These two challenges are prevalent in
many real-world applications including logistics in military environments and
packet routing in networks. To tackle these two challenges, our approach adopts
the differential privacy technique, which can both guarantee strong privacy and
control communication overhead. To the best of our knowledge, this paper is the
first to apply differential privacy to the field of multi-agent planning as a
means of preserving the privacy of agents for logistic-like problems. We
theoretically prove the strong privacy and completeness of our approach and
empirically demonstrate its efficiency. We also theoretically analyze the
communication overhead of our approach and illustrate how differential privacy
can be used to control it.
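The paper's own planning mechanism is not reproduced here, but the core primitive it builds on — an agent perturbing a numeric value (e.g., a route cost) with calibrated noise before communicating it — can be sketched with the standard Laplace mechanism. The function names and the cost-sharing scenario below are illustrative, not taken from the paper.

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two Exp(1) draws."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))


def privatize_cost(true_cost: float, sensitivity: float, epsilon: float) -> float:
    """Release a value under epsilon-differential privacy via the Laplace
    mechanism: the noise scale is sensitivity / epsilon, so smaller epsilon
    (stronger privacy) means more noise."""
    return true_cost + laplace_noise(sensitivity / epsilon)
```

A smaller `epsilon` hides more about the true value at the cost of accuracy; in a planning setting this trade-off also shapes how much an agent's messages reveal about its private state.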
Related papers
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Killing Two Birds with One Stone: Quantization Achieves Privacy in Distributed Learning [18.824571167583432]
Communication efficiency and privacy protection are critical issues in distributed machine learning.
We propose a comprehensive quantization-based solution that could simultaneously achieve communication efficiency and privacy protection.
We theoretically capture the new trade-offs between communication, privacy, and learning performance.
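That paper's specific quantizer is not reproduced here; as a generic sketch of the underlying idea, unbiased stochastic quantization compresses a real value onto a coarse grid (saving communication) while the rounding randomness itself acts as noise. The function name and parameters are illustrative assumptions.

```python
import math
import random


def stochastic_quantize(x: float, step: float) -> float:
    """Randomly round x to one of the two nearest grid multiples of `step`.

    Rounding up with probability proportional to the fractional offset makes
    the quantizer unbiased: E[output] == x.
    """
    lower = math.floor(x / step) * step
    p = (x - lower) / step  # probability of rounding up
    return lower + step if random.random() < p else lower
```

A coarser `step` transmits fewer distinct values (fewer bits) but injects more rounding randomness — the same knob trades off communication, privacy, and accuracy.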
arXiv Detail & Related papers (2023-04-26T13:13:04Z)
- Privacy-Preserving Communication-Efficient Federated Multi-Armed Bandits [17.039484057126337]
Communication bottleneck and data privacy are two critical concerns in federated multi-armed bandit (MAB) problems.
We design the privacy-preserving communication-efficient algorithm in such problems and study the interactions among privacy, communication and learning performance in terms of the regret.
arXiv Detail & Related papers (2021-11-02T12:56:12Z)
- On Privacy and Confidentiality of Communications in Organizational Graphs [3.5270468102327004]
This work shows how confidentiality is distinct from privacy in an enterprise context.
It aims to formulate an approach to preserving confidentiality while leveraging principles from differential privacy.
arXiv Detail & Related papers (2021-05-27T19:45:56Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.