Federated Learning-based Active Authentication on Mobile Devices
- URL: http://arxiv.org/abs/2104.07158v1
- Date: Wed, 14 Apr 2021 22:59:08 GMT
- Title: Federated Learning-based Active Authentication on Mobile Devices
- Authors: Poojan Oza, Vishal M. Patel
- Abstract summary: User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training framework, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data being distributed homogeneously.
- Score: 98.23904302910022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User active authentication on mobile devices aims to learn a model that can
correctly recognize the enrolled user based on device sensor information. Due
to lack of negative class data, it is often modeled as a one-class
classification problem. In practice, mobile devices are connected to a central
server; e.g., all Android-based devices are connected to Google's servers through
the internet. This device-server structure can be exploited by the recently proposed
Federated Learning (FL) and Split Learning (SL) frameworks to perform
collaborative learning over the data distributed among multiple devices. Using
FL/SL frameworks, we can alleviate the lack of negative data problem by
training a user authentication model over data from multiple users distributed across
devices. To this end, we propose a novel user active authentication training framework,
termed Federated Active Authentication (FAA), that utilizes the principles
of FL/SL. We first show that existing FL/SL methods are suboptimal for FAA as
they rely on the data being distributed homogeneously (i.e., IID) across
devices, which is not true in the case of FAA. Subsequently, we propose a novel
method that is able to tackle heterogeneous/non-IID distribution of data in
FAA. Specifically, we first extract feature statistics such as mean and
variance corresponding to the data from each user, which are later combined at a
central server to learn a multi-class classifier that is then sent back to the
individual devices. We conduct extensive experiments using three active
authentication benchmark datasets (MOBIO, UMDAA-01, UMDAA-02) and show that
this approach performs better than state-of-the-art one-class-based FAA methods
and is also able to outperform traditional FL/SL methods.
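
The statistics-aggregation step described in the abstract can be pictured with a short sketch. Only the high-level flow is taken from the abstract (devices share feature means and variances, the server learns a multi-class classifier and sends it back); the diagonal-Gaussian sampling, the logistic-regression head, and all names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def device_statistics(features: np.ndarray):
    """On-device step: summarize a user's feature matrix (n_samples x d) by its
    per-dimension mean and variance; only these statistics leave the device."""
    return features.mean(axis=0), features.var(axis=0)

def server_train(user_stats, samples_per_user=500, seed=0):
    """Server step (assumed): model each user's features as a diagonal Gaussian
    built from the received statistics, draw synthetic features, and fit a
    multi-class classifier with one class per enrolled user."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for user_id, (mu, var) in enumerate(user_stats):
        X.append(rng.normal(mu, np.sqrt(var), size=(samples_per_user, mu.shape[0])))
        y.append(np.full(samples_per_user, user_id))
    clf = LogisticRegression(max_iter=1000).fit(np.vstack(X), np.concatenate(y))
    return clf  # broadcast back to the devices for on-device authentication

# Toy usage: three hypothetical users with 64-dimensional device features.
rng = np.random.default_rng(1)
stats = [device_statistics(rng.normal(u, 1.0, size=(200, 64))) for u in range(3)]
model = server_train(stats)
```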
Related papers
- Partial Federated Learning [26.357723187375665]
Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices.
We propose a new algorithm, Partial Federated Learning (PartialFL), in which a machine learning model is trained on data for which a subset of the data modalities can be made available to the server.
arXiv Detail & Related papers (2024-03-03T21:04:36Z)
- Unsupervised anomalies detection in IIoT edge devices networks using federated learning [0.0]
Federated learning (FL), as a distributed machine learning approach, performs training of a machine learning model on the device that gathered the data itself.
In this paper, we leverage the benefits of FL and implement the FedAvg algorithm on a recent dataset that represents modern IoT/IIoT device networks (a minimal FedAvg aggregation sketch appears after this list).
We also evaluate some shortcomings of FedAvg, such as the unfairness that arises during training when struggling devices do not participate in every stage of training.
arXiv Detail & Related papers (2023-08-23T14:53:38Z)
- Semi-Supervised Federated Learning for Keyword Spotting [15.044022869136262]
Keyword Spotting (KWS) is a critical aspect of audio-based applications on mobile devices and virtual assistants.
Recent developments in Federated Learning (FL) have significantly expanded the ability to train machine learning models.
arXiv Detail & Related papers (2023-05-09T00:46:12Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- FLaPS: Federated Learning and Privately Scaling [3.618133010429131]
Federated learning (FL) is a distributed learning process where the model is transferred to the devices that possess the data.
We present Federated Learning and Privately Scaling (FLaPS) architecture, which improves scalability as well as the security and privacy of the system.
arXiv Detail & Related papers (2020-09-13T14:20:17Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts the federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- Federated and continual learning for classification tasks in a society of devices [59.45414406974091]
Light Federated and Continual Consensus (LFedCon2) is a new federated and continual architecture that uses light, traditional learners.
Our method allows powerless devices (such as smartphones or robots) to learn in real time, locally, continuously, autonomously and from users.
In order to test our proposal, we have applied it in a heterogeneous community of smartphone users to solve the problem of walking recognition.
arXiv Detail & Related papers (2020-06-12T12:37:03Z)
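
Several entries above hinge on the server-side model aggregation step of federated learning, most explicitly the FedAvg algorithm used in the IIoT anomaly-detection paper and the aggregation described for FL in the representation-sharing paper. As referenced in that entry, here is a minimal sketch of the standard FedAvg weighted average; the layer layout and client sample counts are made up for the example.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each layer's parameters across clients,
    weighted by the number of local samples each client trained on."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy usage: two hypothetical clients, each holding a two-layer model as numpy arrays.
w_a = [np.ones((4, 4)), np.zeros(4)]
w_b = [np.zeros((4, 4)), np.ones(4)]
global_weights = fedavg([w_a, w_b], client_sizes=[100, 300])  # element-wise 0.25/0.75 blend
```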