Prioritized Multi-Criteria Federated Learning
- URL: http://arxiv.org/abs/2007.08893v1
- Date: Fri, 17 Jul 2020 10:49:47 GMT
- Title: Prioritized Multi-Criteria Federated Learning
- Authors: Vito Walter Anelli, Yashar Deldjoo, Tommaso Di Noia, Antonio Ferrara
- Abstract summary: In Machine Learning scenarios, privacy is a crucial concern when models have to be trained with private data coming from users of a service.
We propose Federated Learning (FL) as a means to build ML models based on private datasets distributed over a large number of clients.
A central coordinating server receives locally computed updates from clients and aggregates them to obtain a better global model.
- Score: 16.35440946424973
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Machine Learning scenarios, privacy is a crucial concern when models have
to be trained with private data coming from users of a service, such as a
recommender system, a location-based mobile service, a mobile phone text
messaging service providing next word prediction, or a face image
classification system. The main issue is that, often, data are collected,
transferred, and processed by third parties. These transactions violate new
regulations, such as the GDPR. Furthermore, users are usually not willing to share
private data such as their visited locations, the text messages they wrote, or
the photos they took with a third party. On the other hand, users appreciate
services that work based on their behaviors and preferences. In order to
address these issues, Federated Learning (FL) has been recently proposed as a
means to build ML models based on private datasets distributed over a large
number of clients, while preventing data leakage. A federation of users is
asked to train the same global model on their private data, while a central
coordinating server receives locally computed updates from clients and aggregates
them to obtain a better global model, without the need to use clients' actual
data. In this work, we extend the FL approach by pushing forward the
state-of-the-art approaches in the aggregation step of FL, which we deem
crucial for building a high-quality global model. Specifically, we propose an
approach that takes into account a suite of client-specific criteria that
constitute the basis for assigning a score to each client based on a priority
of criteria defined by the service provider. Extensive experiments on two
publicly available datasets indicate the merits of the proposed approach
compared to a standard FL baseline.
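The priority-based aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the criterion names, the linear weighting scheme, and the function names are all hypothetical, and it assumes each client reports criteria already normalised to [0, 1].

```python
def priority_scores(criteria, priorities):
    """Combine per-client criteria into one score per client using
    provider-defined priority weights (illustrative scheme only)."""
    # criteria:   {client_id: {criterion_name: value in [0, 1]}}
    # priorities: {criterion_name: weight; higher = more important}
    total = sum(priorities.values())
    return {
        cid: sum(priorities[name] * value for name, value in c.items()) / total
        for cid, c in criteria.items()
    }

def aggregate(updates, scores):
    """Score-weighted averaging of client updates: FedAvg-style, but
    weighted by the priority score instead of dataset size alone."""
    # updates: {client_id: flat list of model parameters, same length for all}
    total = sum(scores[cid] for cid in updates)
    dim = len(next(iter(updates.values())))
    merged = [0.0] * dim
    for cid, params in updates.items():
        weight = scores[cid] / total
        for i, p in enumerate(params):
            merged[i] += weight * p
    return merged
```

For example, with two hypothetical criteria ("size" and "loss") and a provider that weights size twice as heavily as loss, a client scoring well on both criteria dominates the weighted average, whereas standard FedAvg would weight purely by dataset size.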
Related papers
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of non-IID setting where clients own incomplete classes.
Our proposed algorithm named MAP could simultaneously achieve the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z) - FedGeo: Privacy-Preserving User Next Location Prediction with Federated
Learning [27.163370946895697]
A User Next Location Prediction (UNLP) task, which predicts the next location that a user will move to given his/her trajectory, is an indispensable task for a wide range of applications.
Previous studies using large-scale trajectory datasets in a single server have achieved remarkable performance in UNLP task.
In real-world applications, legal and ethical issues have been raised regarding privacy concerns leading to restrictions against sharing human trajectory datasets to any other server.
arXiv Detail & Related papers (2023-12-06T01:43:58Z) - Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
arXiv Detail & Related papers (2023-03-15T15:02:15Z) - Personalized Privacy-Preserving Framework for Cross-Silo Federated
Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables DL-based approaches trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z) - Privately Customizing Prefinetuning to Better Match User Data in
Federated Learning [3.645000701985685]
In Federated Learning (FL), accessing private client data incurs communication and privacy costs.
We propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.
We show empirically that FreD accurately predicts the best prefinetuning dataset at minimal privacy cost.
arXiv Detail & Related papers (2023-02-17T18:18:22Z) - Federated Learning in Non-IID Settings Aided by Differentially Private
Synthetic Data [20.757477553095637]
Federated learning (FL) is a privacy-promoting framework that enables clients to collaboratively train machine learning models.
A major challenge in federated learning arises when the local data is heterogeneous.
We propose FedDPMS, an FL algorithm in which clients deploy variational auto-encoders to augment local datasets with data synthesized using differentially private means of latent data representations.
arXiv Detail & Related papers (2022-06-01T18:00:48Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML achieves better performance than alternatives in typical federated learning settings.
arXiv Detail & Related papers (2020-06-27T09:35:03Z) - Decentralised Learning from Independent Multi-Domain Labels for Person
Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data or collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.