Differentially Private Federated Combinatorial Bandits with Constraints
- URL: http://arxiv.org/abs/2206.13192v2
- Date: Sun, 28 May 2023 11:23:15 GMT
- Title: Differentially Private Federated Combinatorial Bandits with Constraints
- Authors: Sambhav Solanki, Samhita Kanaparthy, Sankarshan Damle, Sujit Gujar
- Abstract summary: This work investigates a group of agents working concurrently to solve similar bandit problems while maintaining quality constraints.
We show that our algorithm improves regret while upholding the quality threshold and providing meaningful privacy guarantees.
- Score: 8.390356883529172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The cooperative learning paradigm, i.e., federated learning (FL), is
rapidly gaining adoption in online learning settings. Unlike most FL settings,
there are many situations where the agents are competitive. Each agent would
like to learn from others, but the information it shares for others to learn
from could be sensitive; thus, it desires privacy. This work
investigates a group of agents working concurrently to solve similar
combinatorial bandit problems while maintaining quality constraints. Can these
agents collectively learn while keeping their sensitive information
confidential by employing differential privacy? We observe that communicating
can reduce the regret. However, differential privacy techniques for protecting
sensitive information make the shared data noisy and may worsen regret rather
than improve it. Hence, it is essential to decide when to communicate and which
shared data to learn from in order to strike a functional balance between
regret and privacy. For such a federated combinatorial MAB setting, we
propose a Privacy-preserving Federated Combinatorial Bandit algorithm, P-FCB.
We illustrate the efficacy of P-FCB through simulations. We further show that
our algorithm improves regret while upholding the quality threshold and
providing meaningful privacy guarantees.
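As a toy illustration of the communicate-and-filter idea in the abstract, the sketch below pairs a Laplace-mechanism release of local arm estimates with a plausibility check on incoming ones. All names (`PrivateBanditAgent`, `trust_width`) are hypothetical; this is a generic sketch, not the P-FCB algorithm itself.

```python
import math
import random

class PrivateBanditAgent:
    """Illustrative agent: learns arm qualities locally and shares
    noise-perturbed estimates with peers. Hypothetical sketch, not the
    paper's P-FCB algorithm."""

    def __init__(self, n_arms, epsilon=1.0, sensitivity=1.0):
        self.n_arms = n_arms
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.epsilon = epsilon          # per-message privacy budget
        self.sensitivity = sensitivity  # max change one reward can cause

    def update(self, arm, reward):
        """Incorporate a locally observed reward (running mean)."""
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

    def share(self, arm):
        """Laplace mechanism: perturb the local mean before sending it."""
        scale = self.sensitivity / self.epsilon
        u = random.random() - 0.5  # Uniform(-0.5, 0.5)
        noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
        return self.means[arm] + noise

    def absorb(self, arm, shared_mean, peer_count, trust_width=0.2):
        """Learn from a peer only if its (noisy) estimate is plausible,
        i.e. near our own estimate -- the 'which shared data to learn
        from' filter the abstract alludes to."""
        if self.counts[arm] and abs(shared_mean - self.means[arm]) > trust_width:
            return False  # too far off: noise (or a bad peer) would hurt regret
        total = self.counts[arm] + peer_count
        self.means[arm] = (self.means[arm] * self.counts[arm]
                           + shared_mean * peer_count) / total
        self.counts[arm] = total
        return True
```

With a large `epsilon` the shared value is nearly exact; with a small one, the `absorb` filter rejects estimates that drift too far from local experience.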
Related papers
- Group Decision-Making among Privacy-Aware Agents [2.4401219403555814]
Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other.
We do so by controlling information leakage using rigorous statistical guarantees that are based on differential privacy (DP).
Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
arXiv Detail & Related papers (2024-02-13T01:38:01Z)
- On Differentially Private Federated Linear Contextual Bandits [9.51828574518325]
We consider cross-silo federated linear contextual bandit (LCB) problem under differential privacy.
We identify issues in the state-of-the-art, including (i) failure of claimed privacy protection and (ii) an incorrect regret bound due to noise miscalculation.
We show that our algorithm can achieve nearly optimal regret without a trusted server.
arXiv Detail & Related papers (2023-02-27T16:47:49Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- Social-Aware Clustered Federated Learning with Customized Privacy Preservation [38.00035804720786]
We propose a novel Social-aware Clustered Federated Learning scheme, where mutually trusted individuals can freely form a social cluster.
By mixing model updates within a social group, adversaries can eavesdrop only on the combined social-layer results, not on any individual's private updates.
Experiments on Facebook network and MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
arXiv Detail & Related papers (2022-12-25T10:16:36Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Towards Differential Relational Privacy and its use in Question Answering [109.4452196071872]
Memorization of relation between entities in a dataset can lead to privacy issues when using a trained question answering model.
We quantify this phenomenon and provide a possible definition of Differential Relational Privacy (DPRP).
We illustrate the concepts in experiments with large-scale models for Question Answering.
arXiv Detail & Related papers (2022-03-30T22:59:24Z)
- Privatized Graph Federated Learning [57.14673504239551]
We introduce graph federated learning, which consists of multiple units connected by a graph.
We show how graph homomorphic perturbations can be used to ensure the algorithm is differentially private.
arXiv Detail & Related papers (2022-03-14T13:48:23Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy.
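The shuffle model mentioned above can be illustrated with a toy one-bit example: each client applies a local randomizer satisfying eps-LDP, and a shuffler strips identities (ordering) before the analyst aggregates. This is a generic sketch of the shuffle model, not the paper's linear-bandit construction.

```python
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Local randomizer: keep the client's private bit with probability
    e^eps / (1 + e^eps), else flip it -- a standard eps-LDP mechanism."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_aggregate(private_bits, epsilon=1.0):
    """Shuffle-model pipeline: clients randomize locally, the shuffler
    discards ordering (identities), and the analyst sees only the
    anonymized multiset."""
    messages = [randomized_response(b, epsilon) for b in private_bits]
    random.shuffle(messages)  # the shuffler: breaks the client-message link
    return sum(messages)      # analyst computes an aggregate statistic
```

Shuffling amplifies privacy: the analyst's view is invariant to which client sent which message, so the effective central guarantee is stronger than each client's local one.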
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Privacy-Preserving Communication-Efficient Federated Multi-Armed Bandits [17.039484057126337]
Communication bottleneck and data privacy are two critical concerns in federated multi-armed bandit (MAB) problems.
We design a privacy-preserving, communication-efficient algorithm for such problems and study the interplay among privacy, communication, and learning performance in terms of regret.
arXiv Detail & Related papers (2021-11-02T12:56:12Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Federated $f$-Differential Privacy [19.499120576896228]
Federated learning (FL) is a training paradigm where the clients collaboratively learn models by repeatedly sharing information.
We introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting.
We then propose a generic private federated learning framework PriFedSync that accommodates a large family of state-of-the-art FL algorithms.
arXiv Detail & Related papers (2021-02-22T16:28:21Z)
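Several entries above (clipped DP FedAvg, federated $f$-differential privacy) share a common mechanism: clip each client's update to bound its sensitivity, then add Gaussian noise calibrated to that bound. A minimal sketch with hypothetical parameter names, not any single paper's exact algorithm:

```python
import math
import random

def clip_update(update, clip_norm):
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(x * x for x in update))
    factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [x * factor for x in update]

def private_aggregate(client_updates, clip_norm, noise_multiplier):
    """Average clipped updates and add Gaussian noise calibrated to the
    clipping bound, so no single raw update dominates what the server
    sees."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    n = len(clipped)
    dim = len(clipped[0])
    sigma = noise_multiplier * clip_norm / n  # per-coordinate noise std
    return [sum(u[i] for u in clipped) / n + random.gauss(0.0, sigma)
            for i in range(dim)]
```

Clipping bounds each client's influence (its sensitivity); the `noise_multiplier` then trades privacy against accuracy, echoing the clipping-bias discussion in the entry above.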
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.