mPSAuth: Privacy-Preserving and Scalable Authentication for Mobile Web
Applications
- URL: http://arxiv.org/abs/2210.04777v1
- Date: Fri, 7 Oct 2022 12:49:34 GMT
- Title: mPSAuth: Privacy-Preserving and Scalable Authentication for Mobile Web
Applications
- Authors: David Monschein and Oliver P. Waldhorst
- Abstract summary: mPSAuth is an approach for continuously tracking various data sources reflecting user behavior and estimating the likelihood of the current user being legitimate.
We show that mPSAuth can provide high accuracy with low encryption and communication overhead, while the effort for the inference is increased to a tolerable extent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As nowadays most web application requests originate from mobile devices,
authentication of mobile users is essential in terms of security
considerations. To this end, recent approaches rely on machine learning
techniques to analyze various aspects of user behavior as a basis for
authentication decisions. These approaches face two challenges: first,
examining behavioral data raises significant privacy concerns, and second,
approaches must scale to support a large number of users. Existing approaches
do not address these challenges sufficiently. We propose mPSAuth, an approach
for continuously tracking various data sources reflecting user behavior (e.g.,
touchscreen interactions, sensor data) and estimating the likelihood of the
current user being legitimate based on machine learning techniques. With
mPSAuth, both the authentication protocol and the machine learning models
operate on homomorphically encrypted data to ensure the users' privacy.
Furthermore, the number of machine learning models used by mPSAuth is
independent of the number of users, thus providing adequate scalability. In an
extensive evaluation based on real-world data from a mobile application, we
illustrate that mPSAuth can provide high accuracy with low encryption and
communication overhead, while the effort for the inference is increased to a
tolerable extent.
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Leveraging Machine Learning for Wi-Fi-based Environmental Continuous Two-Factor Authentication [0.44998333629984877]
We present a novel 2FA approach that replaces the user's input with decisions made by Machine Learning (ML).
Our system exploits unique environmental features associated with the user, such as beacon frame characteristics and Received Signal Strength Indicator (RSSI) values from Wi-Fi Access Points (APs).
For enhanced security, our system mandates that the user's two devices (i.e., a login device and a mobile device) be situated within a predetermined proximity before granting access.
arXiv Detail & Related papers (2024-01-12T14:58:15Z) - Finding Vulnerabilities in Mobile Application APIs: A Modular Programmatic Approach [0.0]
Application Programming Interfaces (APIs) are becoming increasingly popular for transferring data in a variety of mobile applications.
These APIs often process sensitive user information through their endpoints, which are potentially exploitable due to developer misimplementation.
This paper presents a custom, modular endpoint vulnerability detection tool to analyze information leakage in various mobile Android applications.
arXiv Detail & Related papers (2023-10-22T00:08:51Z) - Conditional Generative Adversarial Network for keystroke presentation
attack [0.0]
We propose to study a new approach aiming to deploy a presentation attack towards a keystroke authentication system.
Our idea is to use Conditional Generative Adversarial Networks (cGAN) for generating synthetic keystroke data that can be used for impersonating an authorized user.
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z) - Warmup and Transfer Knowledge-Based Federated Learning Approach for IoT
Continuous Authentication [34.6454670154373]
We propose a novel Federated Learning (FL) approach that protects the anonymity of users and maintains the security of their data.
Our experiments show a significant increase in user authentication accuracy while maintaining user privacy and data security.
arXiv Detail & Related papers (2022-11-10T15:51:04Z) - Machine and Deep Learning Applications to Mouse Dynamics for Continuous
User Authentication [0.0]
This article builds upon our previous published work by evaluating our dataset of 40 users using three machine learning and deep learning algorithms.
The top performer is a 1-dimensional convolutional neural network with a peak average test accuracy of 85.73% across the top 10 users.
Multi-class classification is also examined using an artificial neural network, which reaches a peak accuracy of 92.48%.
arXiv Detail & Related papers (2022-05-26T21:43:59Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training approach, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data to be distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z) - Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z) - Unsupervised Model Personalization while Preserving Privacy and
Scalability: An Open Problem [55.21502268698577]
This work investigates the task of unsupervised model personalization, adapted to continually evolving, unlabeled local user images.
We provide a novel Dual User-Adaptation framework (DUA) to explore the problem.
This framework flexibly disentangles user-adaptation into model personalization on the server and local data regularization on the user device.
arXiv Detail & Related papers (2020-03-30T09:35:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.