Incentives for Federated Learning: a Hypothesis Elicitation Approach
- URL: http://arxiv.org/abs/2007.10596v1
- Date: Tue, 21 Jul 2020 04:55:31 GMT
- Title: Incentives for Federated Learning: a Hypothesis Elicitation Approach
- Authors: Yang Liu and Jiaheng Wei
- Abstract summary: Federated learning provides a promising paradigm for collecting machine learning models from distributed data sources.
The success of a credible federated learning system builds on the assumption that the decentralized and self-interested users will be willing to participate.
This paper introduces solutions to incentivize truthful reporting of a local, user-side machine learning model.
- Score: 10.452709936265274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning provides a promising paradigm for collecting machine
learning models from distributed data sources without compromising users' data
privacy. The success of a credible federated learning system builds on the
assumption that the decentralized and self-interested users will be willing to
participate by contributing their local models in a trustworthy way. However,
without proper incentives, users might simply opt out of the contribution
cycle, or be mis-incentivized to contribute spam or false information. This
paper introduces solutions to incentivize truthful reporting of a local, user-side
machine learning model for federated learning. Our results build on the
literature on information elicitation, but focus on the question of eliciting
hypotheses (rather than eliciting human predictions). We provide a
scoring-rule-based framework that incentivizes truthful reporting of local
hypotheses at a Bayesian Nash Equilibrium. We also study the market
implementation, accuracy, and robustness properties of our proposed solution.
We verify the effectiveness of our methods on the MNIST and CIFAR-10 datasets.
In particular, we show that by reporting low-quality hypotheses, users will
receive decreasing scores (rewards, or payments).
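
To make the scoring-rule idea concrete, here is a minimal Python sketch of paying for a reported hypothesis with a strictly proper scoring rule. The Brier-score payment, the verification-set setup, and all names below are illustrative assumptions, not the paper's exact mechanism (which establishes truthfulness at a Bayesian Nash Equilibrium in a more general elicitation setting):

```python
import numpy as np

def brier_payment(predict_proba, X_val, y_val, num_classes, base=1.0, scale=1.0):
    """Pay for a reported hypothesis via a (strictly proper) Brier scoring
    rule on verification samples: a more accurate, truthfully reported
    hypothesis earns a higher expected payment."""
    probs = predict_proba(X_val)                             # (n, num_classes)
    onehot = np.eye(num_classes)[y_val]                      # (n, num_classes)
    brier = np.mean(np.sum((probs - onehot) ** 2, axis=1))   # in [0, 2]
    return base + scale * (2.0 - brier) / 2.0                # lower Brier -> higher pay

# Toy demonstration that degrading the reported hypothesis lowers the score,
# mirroring the "decreasing scores for low-quality hypotheses" claim above.
rng = np.random.default_rng(0)
num_classes = 10
y_val = rng.integers(0, num_classes, size=1000)

def make_hypothesis(quality):
    # A hypothetical stand-in for a local model: mixes the true label
    # distribution with uniform noise, controlled by a `quality` knob.
    def predict_proba(_X):
        onehot = np.eye(num_classes)[y_val]
        uniform = np.full_like(onehot, 1.0 / num_classes)
        return quality * onehot + (1.0 - quality) * uniform
    return predict_proba

for q in (0.9, 0.5, 0.1):
    pay = brier_payment(make_hypothesis(q), None, y_val, num_classes)
    print(f"quality={q:.1f} -> payment={pay:.3f}")
```

With any strictly proper rule, honestly reporting the best available hypothesis maximizes the expected payment, which is the intuition behind the truthfulness guarantee stated in the abstract.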
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment in a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Incentivizing Federated Learning [2.420324724613074]
This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that, under certain conditions, clients will use all the data they possess to participate in federated learning.
arXiv Detail & Related papers (2022-05-22T23:02:43Z)
- Blockchain-based Trustworthy Federated Learning Architecture [16.062545221270337]
We present a blockchain-based trustworthy federated learning architecture.
We first design a smart contract-based data-model provenance registry to enable accountability.
We also propose a weighted fair data sampler algorithm to enhance fairness in training data.
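
The entry above does not spell out its weighted fair data sampler, but as a hedged illustration of the general idea, one simple form weights each training example inversely to its class frequency so that under-represented classes are sampled more often. The function name and the inverse-frequency choice below are our assumptions, not the paper's algorithm:

```python
import numpy as np

def fair_sample(labels, n_samples, rng=None):
    """Draw sample indices with probability inversely proportional to class
    frequency, so under-represented classes appear more often in training."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    _, inverse, counts = np.unique(labels, return_inverse=True, return_counts=True)
    weights = 1.0 / counts[inverse]          # rare class -> large weight
    return rng.choice(len(labels), size=n_samples, p=weights / weights.sum())

# e.g. a 9:1 class imbalance is sampled at roughly 1:1 under these weights
idx = fair_sample([0] * 90 + [1] * 10, n_samples=1000)
```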
arXiv Detail & Related papers (2021-08-16T06:13:58Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) has emerged as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
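
As a rough sketch of what transferring robustness through batch-normalization statistics can look like (assuming PyTorch; the paper's "carefully designed" statistics are more involved than the plain copy shown here), one might move the BN running statistics of an adversarially trained client into another client's model of the same architecture:

```python
import copy
import torch.nn as nn

def propagate_bn_stats(robust_model: nn.Module, user_model: nn.Module) -> nn.Module:
    """Copy batch-norm running statistics from an adversarially trained model
    into another user's model, leaving all learned weights untouched.
    Assumes both models share the same architecture/module ordering."""
    out = copy.deepcopy(user_model)
    for src, dst in zip(robust_model.modules(), out.modules()):
        if isinstance(src, nn.BatchNorm2d) and isinstance(dst, nn.BatchNorm2d):
            dst.running_mean.copy_(src.running_mean)
            dst.running_var.copy_(src.running_var)
    return out
```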
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Learning Models for Actionable Recourse [31.30850378503406]
We propose an algorithm that theoretically guarantees recourse to affected individuals with high probability without sacrificing accuracy.
We demonstrate the efficacy of our approach via extensive experiments on real data.
arXiv Detail & Related papers (2020-11-12T01:15:18Z)
- Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint.
The built model can be directly applied to local sites, as it guarantees fairness on local data distributions.
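
To sketch what such kernel reweighing might look like (the Gaussian kernel choice, the centers, and the penalty form below are our assumptions, not the paper's exact formulation), each sample's weight can be a positive function of kernel features, with the same weights entering both the loss and a fairness term:

```python
import numpy as np

def kernel_reweights(X, centers, coef, bandwidth=1.0):
    """Per-sample reweighing values from a linear combination of Gaussian
    kernels, exponentiated so every weight stays positive."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n, m)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return np.exp(K @ coef)                                         # (n,)

def reweighted_objective(sample_loss, sample_fairness, weights, lam=1.0):
    # The same per-sample weights multiply both the loss terms and the
    # fairness terms, here folded into one objective as a penalty.
    return np.mean(weights * sample_loss) + lam * abs(np.mean(weights * sample_fairness))
```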
arXiv Detail & Related papers (2020-10-10T17:58:20Z)
- Trading Data For Learning: Incentive Mechanism For On-Device Federated Learning [25.368195601622688]
Federated Learning rests on the notion of training a global model in a distributed fashion across various devices.
Under this setting, users' devices perform computations on their own data and then share the results with the cloud server to update the global model.
The users suffer from privacy leakage of their local data during the federated model training process.
We propose an effective incentive mechanism, which selects users that are most likely to provide reliable data and compensates for their costs of privacy leakage.
arXiv Detail & Related papers (2020-09-11T18:37:58Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
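
A heavily simplified sketch of the weight-factorization idea follows; the stick-breaking Indian Buffet Process draw and the shapes are our assumptions. Each user activates a sparse, personal subset of a shared dictionary of weight factors, so raw weights tied to local data are never shared directly:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d_in, d_out = 8, 16, 4                         # factor count, layer shape
dictionary = rng.normal(size=(K, d_in, d_out))    # shared weight factors

def user_layer_weights(alpha=2.0):
    """One user's layer weights: a sparse binary combination of shared
    factors, with assignments drawn via a stick-breaking sketch of the
    Indian Buffet Process."""
    probs = np.cumprod(rng.beta(alpha, 1.0, size=K))  # decreasing activation probs
    z = rng.random(K) < probs                         # binary factor assignments
    return np.tensordot(z.astype(float), dictionary, axes=1)  # (d_in, d_out)
```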
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.