Unsupervised Model Personalization while Preserving Privacy and
Scalability: An Open Problem
- URL: http://arxiv.org/abs/2003.13296v1
- Date: Mon, 30 Mar 2020 09:35:12 GMT
- Title: Unsupervised Model Personalization while Preserving Privacy and
Scalability: An Open Problem
- Authors: Matthias De Lange, Xu Jia, Sarah Parisot, Ales Leonardis, Gregory
Slabaugh, Tinne Tuytelaars
- Abstract summary: This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
- Score: 55.21502268698577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work investigates the task of unsupervised model personalization,
adapted to continually evolving, unlabeled local user images. We consider the
practical scenario where a high capacity server interacts with a myriad of
resource-limited edge devices, imposing strong requirements on scalability and
local data privacy. We aim to address this challenge within the continual
learning paradigm and provide a novel Dual User-Adaptation framework (DUA) to
explore the problem. This framework flexibly disentangles user-adaptation into
model personalization on the server and local data regularization on the user
device, with desirable properties regarding scalability and privacy
constraints. First, on the server, we introduce incremental learning of
task-specific expert models, subsequently aggregated using a concealed
unsupervised user prior. Aggregation avoids retraining, whereas the user prior
conceals sensitive raw user data, and grants unsupervised adaptation. Second,
local user-adaptation incorporates a domain adaptation point of view, adapting
regularizing batch normalization parameters to the user data. We explore
various empirical user configurations with different priors in categories and a
tenfold of transforms for MIT Indoor Scene recognition, and classify numbers in
a combined MNIST and SVHN setup. Extensive experiments yield promising results
for data-driven local adaptation and elicit user priors for server adaptation
to depend on the model rather than user data. Hence, although user-adaptation
remains a challenging open problem, the DUA framework formalizes a principled
foundation for personalizing both on server and user device, while maintaining
privacy and scalability.
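The abstract describes two concrete mechanisms: on the server, task-specific expert models are aggregated into a personalized model using an unsupervised user prior (avoiding retraining), and on the device, batch-normalization parameters are adapted to the unlabeled local user data. The sketch below illustrates both steps under simplifying assumptions; the function names, the representation of the user prior as a weight vector over experts, and the AdaBN-style statistic re-estimation are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch of the two DUA-style adaptation steps from the abstract.
# Assumptions: the user prior is a weight vector over task experts, and local
# adaptation re-estimates BatchNorm statistics on unlabeled user images.
import copy
import torch
import torch.nn as nn


def aggregate_experts(experts, user_prior):
    """Server side: build a personalized model by averaging the parameters of
    task-specific expert models, weighted by an unsupervised user prior
    (a distribution over tasks). No retraining is required."""
    weights = torch.as_tensor(user_prior, dtype=torch.float32)
    weights = weights / weights.sum()

    personalized = copy.deepcopy(experts[0])
    states = [e.state_dict() for e in experts]
    merged = personalized.state_dict()
    for name in merged:
        if merged[name].is_floating_point():
            merged[name] = sum(w * s[name].float() for w, s in zip(weights, states))
    personalized.load_state_dict(merged)
    return personalized


@torch.no_grad()
def adapt_batchnorm(model, user_loader, device="cpu"):
    """Device side: re-estimate batch-normalization statistics on the user's
    unlabeled local images (AdaBN-style domain adaptation); labels and raw
    data never leave the device."""
    model.to(device).train()                  # train mode updates BN running stats
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
            m.momentum = None                 # cumulative moving average over user data
    for images in user_loader:                # loader yields unlabeled image batches
        model(images.to(device))
    return model.eval()
```

In this reading, only the prior (a small weight vector) would travel to the server, consistent with the abstract's claim that the user prior conceals sensitive raw user data while still granting unsupervised adaptation.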
Related papers
- Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy.
This paper proposes a novel personalized FedCF method that preserves users' personalized information in a latent variable and a neural model simultaneously.
To effectively train the proposed framework, we model the problem as a specialized Variational AutoEncoder (VAE) task by integrating user interaction vector reconstruction with missing value prediction.
arXiv Detail & Related papers (2024-08-16T05:49:14Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Federated Adaptation for Foundation Model-based Recommendations [29.86114788739202]
We propose a novel adaptation mechanism to enhance the foundation model-based recommendation system in a privacy-preserving manner.
Users' private behavioral data remains secure as it is not shared with the server.
Experimental results on four benchmark datasets demonstrate our method's superior performance.
arXiv Detail & Related papers (2024-05-08T06:27:07Z)
- Countering Mainstream Bias via End-to-End Adaptive Local Learning [17.810760161534247]
Collaborative filtering (CF) based recommendations suffer from mainstream bias.
We propose a novel end-to-end Adaptive Local Learning framework to provide high-quality recommendations to both mainstream and niche users.
arXiv Detail & Related papers (2024-04-13T03:17:33Z)
- Federated Prompt-based Decision Transformer for Customized VR Services in Mobile Edge Computing System [9.269074750399657]
We first introduce a quality of experience (QoE) metric to measure user experience.
Then, a QoE problem is formulated for resource allocation to ensure the highest possible user experience.
We propose a framework that employs federated learning (FL) and prompt-based sequence modeling to pre-train a common model.
arXiv Detail & Related papers (2024-02-15T05:56:35Z)
- Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models [62.838689691468666]
We propose Federated Black-Box Prompt Tuning (Fed-BBPT) to optimally harness each local dataset.
Fed-BBPT capitalizes on a central server that aids local users in collaboratively training a prompt generator through regular aggregation.
Relative to extensive fine-tuning, Fed-BBPT proficiently sidesteps memory challenges tied to PTM storage and fine-tuning on local machines.
arXiv Detail & Related papers (2023-10-04T19:30:49Z)
- Federated Privacy-preserving Collaborative Filtering for On-Device Next App Prediction [52.16923290335873]
We propose a novel SeqMF model to solve the problem of predicting the next app launch during mobile device usage.
We modify the structure of the classical matrix factorization model and update the training procedure to sequential learning.
One more ingredient of the proposed approach is a new privacy mechanism that guarantees the protection of the data sent from users to the remote server.
arXiv Detail & Related papers (2023-02-05T10:29:57Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Federated Intrusion Detection for IoT with Heterogeneous Cohort Privacy [0.0]
Internet of Things (IoT) devices are becoming increasingly popular and are influencing many application domains such as healthcare and transportation.
In this work, we look at differentially private (DP) neural network (NN) based network intrusion detection systems (NIDS) to detect intrusion attacks on networks of such IoT devices.
Existing NN training solutions in this domain either ignore privacy considerations or assume that the privacy requirements are homogeneous across all users.
We show that the performance of existing differentially private methods degrades for clients with non-identical data distributions when clients' privacy requirements are heterogeneous.
arXiv Detail & Related papers (2021-01-25T03:33:27Z)
- Prioritized Multi-Criteria Federated Learning [16.35440946424973]
In Machine Learning scenarios, privacy is a crucial concern when models have to be trained with private data coming from users of a service.
Federated Learning (FL) has been proposed as a means to build ML models based on private datasets distributed over a large number of clients.
A central coordinating server receives locally computed updates from clients and aggregates them to obtain a better global model.
arXiv Detail & Related papers (2020-07-17T10:49:47Z)