Federated Learning of User Verification Models Without Sharing Embeddings
- URL: http://arxiv.org/abs/2104.08776v1
- Date: Sun, 18 Apr 2021 08:51:39 GMT
- Title: Federated Learning of User Verification Models Without Sharing Embeddings
- Authors: Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
- Abstract summary: Federated User Verification (FedUV) is a framework in which users jointly learn a set of vectors and maximize the correlation of their instance embeddings with a secret linear combination of those vectors.
We show that choosing the linear combinations from the codewords of an error-correcting code allows users to collaboratively train the model without revealing their embedding vectors.
- Score: 73.27015469166166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of training User Verification (UV) models in a
federated setting, where each user has access to the data of only one class and
user embeddings cannot be shared with the server or other users. To address
this problem, we propose Federated User Verification (FedUV), a framework in
which users jointly learn a set of vectors and maximize the correlation of
their instance embeddings with a secret linear combination of those vectors. We
show that choosing the linear combinations from the codewords of an
error-correcting code allows users to collaboratively train the model without
revealing their embedding vectors. We present the experimental results for user
verification with voice, face, and handwriting data and show that FedUV is on
par with existing approaches, while not sharing the embeddings with other users
or the server.
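As a rough sketch of the training objective described in the abstract, the snippet below performs one local update that pushes a user's instance embeddings toward a secret linear combination of the jointly learned vectors. The network architecture, dimensions, randomly drawn codeword, and plain cosine-correlation loss are illustrative assumptions rather than the paper's exact construction; FedUV draws the combinations from the codewords of an error-correcting code, and only the shared parameters would leave the device.

import torch
import torch.nn as nn

EMB_DIM, NUM_VECTORS = 64, 16

# Jointly learned set of vectors; these (and the embedding network) are what a
# federated-averaging server would aggregate, while each user's codeword stays local.
W = nn.Parameter(0.01 * torch.randn(NUM_VECTORS, EMB_DIM))
embedder = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, EMB_DIM))
opt = torch.optim.SGD(list(embedder.parameters()) + [W], lr=0.01)

# Secret +/-1 combination for this user (random here; FedUV chooses it from an
# error-correcting code so that embedding vectors need not be revealed).
codeword = 2.0 * torch.randint(0, 2, (NUM_VECTORS,)).float() - 1.0

def local_step(x):
    # Maximize the correlation between the instance embeddings and the user's
    # secret linear combination of the shared vectors.
    z = embedder(x)                      # (batch, EMB_DIM)
    secret = codeword @ W                # (EMB_DIM,)
    corr = nn.functional.cosine_similarity(z, secret.expand_as(z), dim=1)
    loss = -corr.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

local_step(torch.randn(8, 128))          # one step on a dummy batch

At test time, verification would plausibly reduce to thresholding this correlation score, with the secret combination never shared.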
Related papers
- Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing new recommendation frameworks while preserving privacy.
This paper proposes a novel personalized FedCF method that preserves users' personalized information in a latent variable and a neural model simultaneously.
To effectively train the proposed framework, we model the problem as a specialized Variational AutoEncoder (VAE) task by integrating user interaction vector reconstruction with missing value prediction.
arXiv Detail & Related papers (2024-08-16T05:49:14Z)
- Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z)
- Interactive Text Generation [75.23894005664533]
We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users.
We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts.
arXiv Detail & Related papers (2023-03-02T01:57:17Z)
- Intent-aware Multi-source Contrastive Alignment for Tag-enhanced Recommendation [46.04494053005958]
We seek an alternative framework that is lightweight and effective through self-supervised learning across different sources of information.
We use a self-supervision signal to pair users with the auxiliary information associated with the items they have interacted with before.
We show that our method can achieve better performance while requiring less training time.
arXiv Detail & Related papers (2022-11-11T17:43:19Z)
- The Stereotyping Problem in Collaboratively Filtered Recommender Systems [77.56225819389773]
We show that matrix factorization-based collaborative filtering algorithms induce a kind of stereotyping.
If preferences for a set of items are anti-correlated in the general user population, then those items may not be recommended together to a user.
We propose an alternative modelling fix, which is designed to capture the diverse multiple interests of each user.
arXiv Detail & Related papers (2021-06-23T18:37:47Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts the federated learning framework to enable a group of users to jointly train a model without sharing their raw inputs.
We show that our method is privacy-preserving, scales with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of federated learning in which the goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent on their own data.
arXiv Detail & Related papers (2020-02-19T01:08:46Z)
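The last entry above describes learning an initial shared model that each user can adapt with one or a few gradient steps on their own data. Below is a minimal sketch of that per-user adaptation step, assuming a simple copy-then-fine-tune loop with an illustrative optimizer and loss; the meta-learning objective that produces the shared initialization on the server side is not shown.

import copy
import torch
import torch.nn as nn

def personalize(shared_model, local_batches, steps=1, lr=0.01):
    # Adapt a copy of the shared initial model to one user's data with a few
    # gradient steps, leaving the shared initialization itself untouched.
    model = copy.deepcopy(shared_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in local_batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Example with a toy model and a single dummy batch of 10-class data.
shared = nn.Linear(32, 10)
batches = [(torch.randn(4, 32), torch.randint(0, 10, (4,)))]
user_model = personalize(shared, batches)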
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.