Opportunistic Federated Learning: An Exploration of Egocentric
Collaboration for Pervasive Computing Applications
- URL: http://arxiv.org/abs/2103.13266v1
- Date: Wed, 24 Mar 2021 15:30:21 GMT
- Title: Opportunistic Federated Learning: An Exploration of Egocentric
Collaboration for Pervasive Computing Applications
- Authors: Sangsu Lee, Xi Zheng, Jie Hua, Haris Vikalo, Christine Julien
- Abstract summary: We define a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models.
In this paper, we explore the feasibility and limits of such an approach, culminating in a framework that supports encounter-based pairwise collaborative learning.
- Score: 20.61034787249924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pervasive computing applications commonly involve users' personal smartphones
collecting data to influence application behavior. Applications are often
backed by models that learn from the user's experiences to provide personalized
and responsive behavior. While models are often pre-trained on massive
datasets, federated learning has gained attention for its ability to train
globally shared models on users' private data without requiring the users to
share their data directly. However, federated learning requires devices to
collaborate via a central server, under the assumption that all users desire to
learn the same model. We define a new approach, opportunistic federated
learning, in which individual devices belonging to different users seek to
learn robust models that are personalized to their user's own experiences.
However, instead of learning in isolation, these models opportunistically
incorporate the learned experiences of other devices they
encounter. In this paper, we explore the feasibility and limits of such
an approach, culminating in a framework that supports encounter-based pairwise
collaborative learning. The use of our opportunistic encounter-based learning
amplifies the performance of personalized learning while resisting overfitting
to encountered data.
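The encounter-based pairwise collaboration described above can be illustrated as a weighted blend of two devices' model parameters when they meet. The `encounter_merge` function and `trust` weight below are illustrative assumptions for this sketch, not the paper's actual algorithm:

```python
def encounter_merge(own_params, peer_params, trust=0.2):
    """Blend a peer's parameters into our own during an encounter.

    A small trust weight caps the peer's influence, which is one way to
    resist overfitting to any single encountered device's data.
    """
    return {
        name: [(1 - trust) * o + trust * p
               for o, p in zip(own, peer_params[name])]
        for name, own in own_params.items()
    }

# Two devices with simple linear-model parameters meet and exchange models.
device_a = {"w": [1.0, 2.0], "b": [0.5]}
device_b = {"w": [3.0, 0.0], "b": [0.1]}
merged = encounter_merge(device_a, device_b, trust=0.25)
print(merged["w"])  # [1.5, 1.5]
```

Each device keeps its own personalized model and only nudges it toward an encountered peer, rather than replacing it with a globally shared model.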
Related papers
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
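The FL setup this entry contrasts against, where each client trains locally and a central server aggregates the released models, can be sketched as size-weighted averaging in the style of FedAvg. This is a minimal illustration, not the paper's own method:

```python
def fedavg(client_params, client_sizes):
    """Server-side aggregation in the FedAvg style: average client
    models weighted by the size of each client's local dataset."""
    total = sum(client_sizes)
    return {
        name: [
            sum(n / total * p[name][i]
                for p, n in zip(client_params, client_sizes))
            for i in range(len(client_params[0][name]))
        ]
        for name in client_params[0]
    }

# Two clients release locally trained weights; the larger client
# (30 samples vs. 10) pulls the global model toward its parameters.
clients = [{"w": [2.0]}, {"w": [4.0]}]
sizes = [10, 30]
global_model = fedavg(clients, sizes)
print(global_model["w"])  # [3.5]
```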
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Sparsity-aware neural user behavior modeling in online interaction platforms [2.4036844268502766]
We develop generalizable neural representation learning frameworks for user behavior modeling.
Our problem settings span transductive and inductive learning scenarios.
We leverage different facets of information reflecting user behavior to enable personalized inference at scale.
arXiv Detail & Related papers (2022-02-28T00:27:11Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Privacy-Preserving Learning of Human Activity Predictors in Smart Environments [5.981641988736108]
We use state-of-the-art deep neural network-based techniques to learn predictive human activity models.
A novel aspect of our work is that we carefully track the temporal evolution of the data available to the learner and the data shared by the user.
arXiv Detail & Related papers (2021-01-17T01:04:53Z)
- Federated and continual learning for classification tasks in a society of devices [59.45414406974091]
Light Federated and Continual Consensus (LFedCon2) is a new federated and continual architecture that uses light, traditional learners.
Our method allows powerless devices (such as smartphones or robots) to learn in real time, locally, continuously, autonomously and from users.
In order to test our proposal, we have applied it in a heterogeneous community of smartphone users to solve the problem of walking recognition.
arXiv Detail & Related papers (2020-06-12T12:37:03Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
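The multi-center mechanism this entry summarizes, learning several global models and matching users to centers, resembles a clustering step over client models. The one-round sketch below is an illustrative simplification (1-D weights, nearest-center assignment), not the paper's exact algorithm:

```python
def assign_and_aggregate(client_models, centers):
    """One illustrative round of multi-center aggregation: assign each
    client (here a scalar weight) to the nearest center, then update
    each center as the mean of its assigned clients."""
    assignments = [
        min(range(len(centers)), key=lambda k: abs(m - centers[k]))
        for m in client_models
    ]
    new_centers = []
    for k, c in enumerate(centers):
        members = [m for m, a in zip(client_models, assignments) if a == k]
        new_centers.append(sum(members) / len(members) if members else c)
    return new_centers, assignments

# Non-IID clients: two similar users and one outlier get different centers.
clients = [0.0, 0.2, 5.0]
centers, who = assign_and_aggregate(clients, [0.0, 4.0])
print(who)  # [0, 0, 1]
```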
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
- Survey of Personalization Techniques for Federated Learning [0.08594140167290096]
Federated learning enables machine learning models to learn from private decentralized data without compromising privacy.
This paper highlights the need for personalization and surveys recent research on this topic.
arXiv Detail & Related papers (2020-03-19T10:47:55Z)
- Three Approaches for Personalization with Applications to Federated Learning [68.19709953755238]
We present a systematic learning-theoretic study of personalization.
We provide learning-theoretic guarantees and efficient algorithms for which we also demonstrate the performance.
All of our algorithms are model-agnostic and work for any hypothesis class.
arXiv Detail & Related papers (2020-02-25T01:36:43Z)
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of the federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
arXiv Detail & Related papers (2020-02-19T01:08:46Z)
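The personalization idea in the meta-learning entry above, adapting a shared initialization with one or a few gradient steps on a user's own data, can be sketched for a 1-D linear model. The `personalize` function and toy dataset are assumptions made for illustration:

```python
def personalize(w_shared, xs, ys, lr=0.1, steps=5):
    """Adapt a shared 1-D linear model y = w * x to a user's local data
    with a few gradient-descent steps on the mean-squared error."""
    w = w_shared
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# A new user adapts the shared initialization (w = 0) to a small local
# dataset whose true relation is y = 2x; a few steps get close to w = 2.
w_user = personalize(0.0, [1.0, 2.0], [2.0, 4.0])
```

The point of the meta-learning formulation is to choose the shared initialization so that exactly this kind of cheap local adaptation works well for current and new users.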
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.