Faithful Edge Federated Learning: Scalability and Privacy
- URL: http://arxiv.org/abs/2106.15905v1
- Date: Wed, 30 Jun 2021 08:46:40 GMT
- Title: Faithful Edge Federated Learning: Scalability and Privacy
- Authors: Meng Zhang, Ermin Wei, and Randall Berry
- Abstract summary: Federated learning enables machine learning algorithms to be trained over a network of decentralized edge devices without requiring the exchange of local datasets.
We analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate.
We design two faithful federated learning mechanisms which satisfy economic properties, scalability, and privacy.
- Score: 4.8534377897519105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning enables machine learning algorithms to be trained over a
network of multiple decentralized edge devices without requiring the exchange
of local datasets. Successfully deploying federated learning requires ensuring
that agents (e.g., mobile devices) faithfully execute the intended algorithm,
which has been largely overlooked in the literature. In this study, we first
use risk bounds to analyze how the key feature of federated learning,
unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily
participate and obediently follow traditional federated learning algorithms.
To be more specific, our analysis reveals that agents with less typical data
distributions and relatively more samples are more likely to opt out of or
tamper with federated learning algorithms. To this end, we formulate the first
faithful implementation problem of federated learning and design two faithful
federated learning mechanisms which satisfy economic properties, scalability,
and privacy. Further, the time complexity of computing all agents' payments is
$\mathcal{O}(1)$ in the number of agents. First, we design a Faithful Federated
Learning (FFL) mechanism which approximates the Vickrey-Clarke-Groves (VCG)
payments via an incremental computation. We show that it achieves (probably
approximate) optimality, faithful implementation, voluntary participation, and
some other economic properties (such as budget balance). Second, by
partitioning agents into several subsets, we present a scalable VCG mechanism
approximation. We further design a scalable and Differentially Private FFL
(DP-FFL) mechanism, the first differentially private faithful mechanism, that
maintains the economic properties. Our mechanism enables one to make three-way
performance tradeoffs among privacy, the iterations needed, and payment
accuracy loss.
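As a rough illustration of why cached aggregates make per-agent payment computation cheap, the sketch below computes VCG-style (Clarke pivot) payments in a toy setting. The concave quality curve `q`, the welfare weights, and the additive welfare model are illustrative assumptions only, not the paper's actual FFL payment rule.

```python
import math

def vcg_style_payments(samples, weights):
    """Toy Clarke-pivot payments for a federated-learning setting.

    Assumed (not from the paper): agent j's welfare is
    weights[j] * q(total samples), with q a concave model-quality
    curve. Payment of agent i = (others' welfare without i) minus
    (others' welfare with i); a negative payment means agent i is
    paid for contributing data.
    """
    q = math.sqrt                 # toy concave quality curve
    S = sum(samples)              # cached totals: one O(n) pass...
    W = sum(weights)
    payments = []
    for s_i, w_i in zip(samples, weights):
        others_with_i = (W - w_i) * q(S)          # ...then O(1) per agent
        others_without_i = (W - w_i) * q(S - s_i)
        payments.append(others_without_i - others_with_i)
    return payments
```

In this toy model, an agent holding more samples imposes a larger positive externality on the others and is therefore paid more (a more negative payment); once the totals are cached, each additional agent's payment costs constant time, echoing the abstract's scalability claim.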
Related papers
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated Learning as a Service [15.94482624965024]
Federated learning (FL) has emerged as a prevalent distributed machine learning scheme.
We propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes efficiency and fairness.
We show that DPBalance achieves an average efficiency improvement of $1.44\times \sim 3.49\times$ and an average fairness improvement of $1.37\times \sim 24.32\times$.
arXiv Detail & Related papers (2024-02-15T05:19:53Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Evaluation and comparison of federated learning algorithms for Human Activity Recognition on smartphones [0.5039813366558306]
Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices.
In this paper, we propose a new FL algorithm, termed FedDist, which can modify models during training by identifying dissimilarities between neurons among the clients.
Results have shown the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
arXiv Detail & Related papers (2022-10-30T18:47:23Z)
- Multi-Agent Reinforcement Learning for Long-Term Network Resource Allocation through Auction: a V2X Application [7.326507804995567]
We formulate offloading of computational tasks from a dynamic group of mobile agents (e.g., cars) as decentralized decision making among autonomous agents.
We design an interaction mechanism that incentivizes such agents to align private and system goals by balancing between competition and cooperation.
We propose a novel multi-agent online learning algorithm that learns with partial, delayed and noisy state information.
arXiv Detail & Related papers (2022-07-29T10:29:06Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Reward-Based 1-bit Compressed Federated Distillation on Blockchain [14.365210947456209]
The recent advent of various forms of Federated Knowledge Distillation (FD) paves the way for a new generation of robust and communication-efficient Federated Learning (FL).
We introduce a novel decentralized federated learning framework where heavily compressed 1-bit soft-labels are aggregated on a smart contract.
In a context where workers' contributions are now easily comparable, we modify the Peer Truth Serum for Crowdsourcing mechanism (PTSC) for FD to reward honest participation.
arXiv Detail & Related papers (2021-06-27T15:51:04Z) - Online Learning of Competitive Equilibria in Exchange Economies [94.24357018178867]
In economics, the sharing of scarce resources among multiple rational agents is a classical problem.
We propose an online learning mechanism to learn agent preferences.
We demonstrate the effectiveness of this mechanism through numerical simulations.
arXiv Detail & Related papers (2021-06-11T21:32:17Z) - Additively Homomorphical Encryption based Deep Neural Network for
Asymmetrically Collaborative Machine Learning [12.689643742151516]
Privacy-preserving machine learning creates a constraint which limits further applications in the finance sector.
We propose a new practical scheme of collaborative machine learning that one party owns data, but another party owns labels only.
Our experiments on different datasets demonstrate not only stable training without accuracy loss, but also a more than 100 times speedup.
arXiv Detail & Related papers (2020-07-14T06:43:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.