FL-Market: Trading Private Models in Federated Learning
- URL: http://arxiv.org/abs/2106.04384v4
- Date: Mon, 3 Apr 2023 11:27:19 GMT
- Title: FL-Market: Trading Private Models in Federated Learning
- Authors: Shuyuan Zheng, Yang Cao, Masatoshi Yoshikawa, Huizhong Li, Qiang Yan
- Abstract summary: Existing model marketplaces assume that the broker can access data owners' private training data, which may not be realistic in practice.
We propose FL-Market, a locally private model marketplace that protects privacy not only against model buyers but also against the untrusted broker.
- Score: 19.11812014629085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The difficulty in acquiring a sufficient amount of training data is a major
bottleneck for machine learning (ML) based data analytics. Recently,
commoditizing ML models has been proposed as an economical and moderate
solution to ML-oriented data acquisition. However, existing model marketplaces
assume that the broker can access data owners' private training data, which may
not be realistic in practice. In this paper, to promote trustworthy data
acquisition for ML tasks, we propose FL-Market, a locally private model
marketplace that protects privacy not only against model buyers but also
against the untrusted broker. FL-Market decouples ML from the need to centrally
gather training data on the broker's side using federated learning, an emerging
privacy-preserving ML paradigm in which data owners collaboratively train an ML
model by uploading local gradients (to be aggregated into a global gradient for
model updating). Then, FL-Market enables data owners to locally perturb their
gradients by local differential privacy and thus further prevents privacy
risks. To drive FL-Market, we propose a deep learning-empowered auction
mechanism for intelligently deciding the local gradients' perturbation levels
and an optimal aggregation mechanism for aggregating the perturbed gradients.
Our auction and aggregation mechanisms can jointly maximize the global
gradient's accuracy, which optimizes model buyers' utility. Our experiments
verify the effectiveness of the proposed mechanisms.
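To make the pipeline described above concrete: data owners clip and perturb their local gradients under local differential privacy, and the broker combines the noisy reports with accuracy-maximizing weights. The Python sketch below is illustrative only; the Laplace mechanism, the clipping bound, and the inverse-variance weighting are our assumptions, not FL-Market's exact auction or aggregation mechanisms.

```python
import numpy as np

def perturb_gradient(grad, epsilon, clip=1.0):
    """Data owner side: clip the local gradient to L1 norm `clip`, then add
    Laplace noise so the report satisfies epsilon-LDP (illustrative)."""
    g = grad * min(1.0, clip / (np.linalg.norm(grad, ord=1) + 1e-12))
    scale = 2.0 * clip / epsilon   # L1 sensitivity of a clipped gradient <= 2*clip
    return g + np.random.laplace(0.0, scale, size=g.shape)

def aggregate(perturbed_grads, epsilons, clip=1.0):
    """Broker side: inverse-variance weighted average, which minimizes the
    variance of the unbiased combined estimate of the global gradient."""
    variances = np.array([2.0 * (2.0 * clip / e) ** 2 for e in epsilons])
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    return np.sum([w * g for w, g in zip(weights, perturbed_grads)], axis=0)

# Example: three data owners with auction-assigned privacy levels.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=10)
epsilons = [0.5, 1.0, 2.0]   # hypothetically decided by the auction mechanism
reports = [perturb_gradient(true_grad, e) for e in epsilons]
print(aggregate(reports, epsilons))
```

Inverse-variance weighting gives noisier (low-epsilon) reports less influence, which is the intuition behind jointly maximizing the global gradient's accuracy.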
Related papers
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- An Auction-based Marketplace for Model Trading in Federated Learning [54.79736037670377]
Federated learning (FL) is increasingly recognized for its efficacy in training models using locally distributed data.
We frame FL as a marketplace of models, where clients act as both buyers and sellers.
We propose an auction-based solution to ensure proper pricing based on performance gain.
arXiv Detail & Related papers (2024-02-02T07:25:53Z)
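The entry above prices models by auction based on performance gain. As a hedged illustration (one standard instantiation, not necessarily the paper's mechanism), a sealed-bid second-price auction makes truthfully bidding one's valuation of the gain a dominant strategy:

```python
def second_price_auction(bids):
    """Toy sealed-bid second-price auction: each buyer bids its valuation
    of a model's performance gain; the highest bidder wins and pays the
    second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # decouples payment from the winner's own bid
    return winner, price

# Bids derived from each buyer's estimated accuracy gain (hypothetical numbers).
print(second_price_auction({"buyer_a": 3.2, "buyer_b": 4.1, "buyer_c": 2.7}))
```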
- Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics [2.7456900944642686]
Federated unlearning (FU) algorithms efficiently remove clients' contributions without full model retraining.
This article provides background concepts, empirical evidence, and practical guidelines to design/implement efficient FU schemes.
arXiv Detail & Related papers (2024-01-10T13:26:19Z)
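As a rough illustration of the kind of method such surveys cover (our assumption, not a result from this particular survey): one simple unlearning baseline subtracts a target client's logged updates from the global model and then briefly fine-tunes to recover utility.

```python
import numpy as np

def unlearn_client(global_weights, client_updates, agg_weight=1.0):
    """Baseline FU idea: remove a client's accumulated aggregated updates
    from the global model instead of retraining from scratch. The result
    is typically calibrated with a few fine-tuning rounds afterwards."""
    w = np.copy(global_weights)
    for delta in client_updates:   # updates logged per round for that client
        w -= agg_weight * delta
    return w
```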
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
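A minimal sketch of the soft-prompt idea above: small trainable vectors prepended to input embeddings can serve as the exchanged "messengers" instead of full model parameters. The module below is a generic PyTorch soft prompt; the prompt length, embedding dimension, and the exact FL exchange protocol are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt vectors prepended to token embeddings. In a
    prompt-based FL setup, only these small tensors (not the full model)
    would be exchanged between participants."""
    def __init__(self, prompt_len=8, embed_dim=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds):            # (batch, seq, embed_dim)
        batch = token_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeds], dim=1)

# Usage: prepend 8 trainable vectors to a batch of embedded sequences.
x = torch.randn(4, 16, 768)
print(SoftPrompt()(x).shape)                    # torch.Size([4, 24, 768])
```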
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
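FedCSD's exact prototype-based loss is not reproduced here; the sketch below shows the generic logit-level distillation building block such methods rely on, pulling local predictions toward the frozen global model's to counter the logit gap the entry describes.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(local_logits, global_logits, T=2.0):
    """Generic logit distillation: minimize the KL divergence from the
    (frozen) global model's softened predictive distribution to the local
    model's, which discourages the local model from drifting away."""
    p_global = F.softmax(global_logits.detach() / T, dim=-1)
    log_p_local = F.log_softmax(local_logits / T, dim=-1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * T * T
```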
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
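A minimal sketch of the GMM ingredient: fitting a mixture to a client's inputs yields per-sample log-likelihoods, which double as an uncertainty signal for novel-sample detection. This uses scikit-learn and synthetic data; it is not FedGMM's federated EM procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a GMM to one client's input features; low likelihood under the fitted
# mixture flags out-of-distribution ("novel") samples.
rng = np.random.default_rng(0)
client_data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)
scores = gmm.score_samples(np.array([[0.1, -0.2], [8.0, 8.0]]))
print(scores)   # the far-away point gets a much lower log-likelihood
```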
- Balancing Privacy Protection and Interpretability in Federated Learning [8.759803233734624]
Federated learning (FL) aims to collaboratively train the global model in a distributed manner by sharing the model parameters from local clients to a central server.
Recent studies have illustrated that FL still suffers from information leakage as adversaries try to recover the training data by analyzing shared parameters from local clients.
We propose a simple yet effective adaptive differential privacy (ADP) mechanism that selectively adds noisy perturbations to the gradients of client models in FL.
arXiv Detail & Related papers (2023-02-16T02:58:22Z)
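A hedged sketch of the adaptive-noise idea above: clip the client gradient, then perturb only selected coordinates. The magnitude-based selection rule and the noise scale below are our assumptions, not the paper's exact ADP mechanism.

```python
import numpy as np

def adaptive_dp_perturb(grad, base_sigma=0.5, clip=1.0, threshold=0.2):
    """Illustrative adaptive DP: clip the gradient to L2 norm `clip`, then
    add Gaussian noise only to large-magnitude (more informative, hence
    more privacy-sensitive) coordinates."""
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    mask = np.abs(g) > threshold                  # coordinates to perturb
    noise = np.random.normal(0.0, base_sigma * clip, size=g.shape)
    return g + noise * mask
```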
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework that is readily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
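A rough sketch of correlation-guided alignment (our reading of the abstract, not FedFoA's exact factorization): compare a local feature-correlation matrix against a correlation target shared across clients and penalize the gap during local training.

```python
import torch

def feature_correlation(feats):
    """Normalized feature-correlation (Gram) matrix of a batch of features."""
    f = torch.nn.functional.normalize(feats, dim=1)   # (batch, dim)
    return f.T @ f / f.size(0)                        # (dim, dim)

def correlation_alignment_loss(local_feats, shared_corr):
    """Penalize divergence between the local feature-correlation matrix and
    a target shared across clients, nudging local feature mappings toward a
    common structure during local training."""
    return torch.norm(feature_correlation(local_feats) - shared_corr, p="fro")
```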
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
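A simplified sketch of server-side data-free distillation in the spirit of FedFTG: a generator synthesizes pseudo-inputs, and the global model is distilled from the ensemble of client models on them. The batch size, the `latent_dim` attribute, and the omission of generator training are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def data_free_finetune(global_model, client_models, generator, optimizer, steps=100):
    """Distill the client ensemble into the global model on synthetic data;
    no real client data is needed. The full method also trains the
    generator adversarially, which is omitted here."""
    for _ in range(steps):
        z = torch.randn(64, generator.latent_dim)   # latent_dim: assumed attribute
        x = generator(z).detach()                   # pseudo-data, generator frozen here
        with torch.no_grad():
            teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
        student = global_model(x)
        loss = F.kl_div(F.log_softmax(student, dim=-1),
                        F.softmax(teacher, dim=-1), reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return global_model
```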
- A Marketplace for Trading AI Models based on Blockchain and Incentives for IoT Data [24.847898465750667]
An emerging paradigm in Machine Learning (ML) is a federated approach in which the learning model is partially delivered to a group of heterogeneous agents, allowing agents to train the model locally with their own data.
The problem of valuation of models, as well as the questions of incentives for collaborative training and trading of data/models, have received limited treatment in the literature.
In this paper, a new ecosystem of ML model trading over a trusted ML-based network is proposed. The buyer can acquire the model of interest from the ML market, and interested sellers spend local computations on their data to enhance that model's quality.
arXiv Detail & Related papers (2021-12-06T08:52:42Z)
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
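Under a mixture assumption, each client can softly assign its samples to component models with EM-style responsibilities. The sketch below shows only the E-step under those assumptions; fitting the components and the federated aggregation are omitted.

```python
import numpy as np

def e_step(losses, log_pi):
    """Posterior responsibility of each mixture component for each sample,
    computed from per-component losses (negative log-likelihoods) and log
    mixture weights, with a max-shift for numerical stability."""
    log_w = log_pi - losses                     # (n_samples, n_components)
    log_w -= log_w.max(axis=1, keepdims=True)
    w = np.exp(log_w)
    return w / w.sum(axis=1, keepdims=True)

# Each client reweights its data across component models (an M-step would
# then fit each component on its responsibility-weighted samples).
losses = np.array([[0.2, 1.5], [2.0, 0.3]])    # NLL under 2 components
print(e_step(losses, np.log(np.array([0.5, 0.5]))))
```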
This list is automatically generated from the titles and abstracts of the papers on this site.