Estimation of Individual Device Contributions for Incentivizing
Federated Learning
- URL: http://arxiv.org/abs/2009.09371v1
- Date: Sun, 20 Sep 2020 07:03:27 GMT
- Title: Estimation of Individual Device Contributions for Incentivizing
Federated Learning
- Authors: Takayuki Nishio, Ryoichi Shinkuma, Narayan B. Mandayam
- Abstract summary: Federated learning (FL) is an emerging technique used to train a machine-learning model collaboratively using the data and computation resources of mobile devices.
This paper proposes a computation- and communication-efficient method of estimating a participating device's contribution level.
- Score: 8.426678774799859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging technique used to train a
machine-learning model collaboratively using the data and computation resources
of mobile devices without exposing privacy-sensitive user data.
Appropriate incentive mechanisms that motivate the data and mobile-device
owners to participate in FL are key to building a sustainable FL platform.
However, it is difficult to evaluate the contribution level of the
devices/owners to determine appropriate rewards without large computation and
communication overhead.
This paper proposes a computation- and communication-efficient method of
estimating a participating device's contribution level. The proposed method
enables such estimation during a single FL training process, thereby reducing
traffic and computation overhead. Performance evaluations using the MNIST
dataset show that the proposed method estimates individual participants'
contributions accurately with 46-49% less computation overhead, and no
communication overhead, compared with a naive estimation method.
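The abstract does not detail the estimation algorithm itself, but the general idea of scoring each device's contribution within a single training process can be illustrated with a leave-one-out sketch. Everything below (the toy linear model, `val_loss`, the noise levels) is an assumption for illustration, not the authors' method:

```python
# Illustrative leave-one-out contribution estimate within one FedAvg-style
# aggregation round. This is a hedged sketch, NOT the paper's algorithm:
# the toy linear model, validation set, and noise levels are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

def val_loss(w, X, y):
    """Mean squared error of a linear model on a held-out validation set."""
    return float(np.mean((X @ w - y) ** 2))

# Ground-truth weights and a server-side validation set.
w_true = np.array([1.0, -2.0])
X_val = rng.normal(size=(100, 2))
y_val = X_val @ w_true

# Each device fits a local model; the third device has very noisy labels.
device_updates = []
for noise in (0.0, 0.1, 3.0):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + noise * rng.normal(size=50)
    w_local, *_ = np.linalg.lstsq(X, y, rcond=None)
    device_updates.append(w_local)

# Contribution of device k = validation loss of the aggregate without k,
# minus the loss with every device included (higher = more helpful).
loss_all = val_loss(np.mean(device_updates, axis=0), X_val, y_val)
contributions = [
    val_loss(np.mean(device_updates[:k] + device_updates[k + 1:], axis=0),
             X_val, y_val) - loss_all
    for k in range(len(device_updates))
]
print(contributions)  # the noisy device typically gets the lowest score
```

A Shapley-style average over all subsets would be more principled but exponentially more expensive; a single-pass estimate like the one above trades some accuracy for the low overhead the paper targets.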
Related papers
- Data Valuation and Detections in Federated Learning [4.899818550820576]
Federated Learning (FL) enables collaborative model training while preserving the privacy of raw data.
A challenge in this framework is the fair and efficient valuation of data, which is crucial for incentivizing clients to contribute high-quality data in the FL task.
This paper introduces a novel privacy-preserving method for evaluating client contributions and selecting relevant datasets without a pre-specified training algorithm in an FL task.
arXiv Detail & Related papers (2023-11-09T12:01:32Z)
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FLINT: A Platform for Federated Learning Integration [3.0895105898120447]
Moving from centralized training to cross-device FL for millions or billions of devices presents many risks.
The corresponding infrastructure, development costs, and return on investment are difficult to estimate.
We present a device-cloud collaborative FL platform that integrates with an existing machine learning platform.
arXiv Detail & Related papers (2023-02-24T19:38:03Z)
- Data Valuation for Vertical Federated Learning: A Model-free and Privacy-preserving Method [14.451118953357605]
FedValue is a privacy-preserving, task-specific but model-free data valuation method for Vertical Federated Learning (VFL).
We first introduce a novel data valuation metric, namely MShapley-CMI. The metric evaluates a data party's contribution to a predictive analytics task without the need of executing a machine learning model.
Next, we develop an innovative federated method that calculates the MShapley-CMI value for each data party in a privacy-preserving manner.
arXiv Detail & Related papers (2021-12-15T02:42:28Z)
- Data-Free Evaluation of User Contributions in Federated Learning [31.181141140071592]
Federated learning (FL) trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources.
We propose a method called Pairwise Correlated Agreement (PCA) based on the idea of peer prediction to evaluate user contribution in FL without a test dataset.
We then apply PCA to design (1) a new federated learning algorithm called Fed-PCA, and (2) a new incentive mechanism that guarantees truthfulness.
arXiv Detail & Related papers (2021-08-24T10:17:03Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- An Incentive Mechanism for Federated Learning in Wireless Cellular Network: An Auction Approach [75.08185720590748]
Federated Learning (FL) is a distributed learning framework that can deal with the distributed issue in machine learning.
In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers.
arXiv Detail & Related papers (2020-09-22T01:50:39Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
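Several entries above build on the Shapley value for data valuation. As a point of reference, here is a minimal sketch of the classical (exact) Shapley computation; the additive utility function `v` is an assumed stand-in, since in FL the utility would come from training a model on each coalition of clients:

```python
# Hedged sketch of the classical (exact) Shapley value for data valuation,
# not any paper's federated variant. The additive utility `v` below is an
# assumption chosen so the correct answer is known in closed form.
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value: weighted marginal contributions over all coalitions."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for coalition in combinations(others, r):
                # Weight = |S|! * (n - |S| - 1)! / n! for coalition S.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[p] += weight * (v(frozenset(coalition) | {p})
                                    - v(frozenset(coalition)))
    return phi

# Toy additive game: each client's Shapley value equals its own share.
value = {"a": 3.0, "b": 1.0, "c": 2.0}
v = lambda S: sum(value[p] for p in S)
phi = shapley_values(["a", "b", "c"], v)
print(phi)  # for an additive game, each client recovers its own value
```

Exact computation enumerates all 2^(n-1) coalitions per client, which is why the federated and model-free variants surveyed above approximate or restructure it.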
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.