BEEMA: Braille Adapted Enhanced PIN Entry Mechanism using Arrow keys
- URL: http://arxiv.org/abs/2305.10644v1
- Date: Thu, 18 May 2023 02:03:17 GMT
- Title: BEEMA: Braille Adapted Enhanced PIN Entry Mechanism using Arrow keys
- Authors: Balayogi G and Kuppusamy K S
- Abstract summary: Visually impaired computer users suffer from secrecy and privacy issues on digital platforms.
This paper proposes a mechanism termed BEEMA to help people with visual impairments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persons with visual impairments have often been a soft target for
cybercriminals and are more susceptible to cyber attacks in the digital
environment, largely because their input is visually and aurally exposed to
sighted bystanders. Visually impaired computer users therefore face secrecy and
privacy problems on digital platforms. This paper proposes BEEMA (Braille-adapted
Enhanced PIN Entry Mechanism using Arrow keys), a mechanism that provides a
robust braille-adapted text input for people with visual impairments. We have
studied various security attacks on visually impaired users; BEEMA allows a user
to enter a PIN on any website that requires one. The proposed model is
implemented as a browser plugin that can be accessed easily. We conducted
sessions with visually impaired users to study the mechanism's performance, and
BEEMA showed encouraging results in the user study. The paper also explores
BEEMA's resilience against various attacks.
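The abstract does not specify how BEEMA maps arrow keys to braille cells, so the following is only an illustrative sketch: it assumes the user walks the six braille dots in order, pressing Up to raise a dot and Down to leave it flat, and it decodes the resulting cell with the standard braille digit patterns (digits reuse the letters a-j). The names `BRAILLE_DIGITS`, `decode_cell`, and `decode_pin` are hypothetical, not from the paper.

```python
# Hypothetical sketch of braille-to-digit decoding for an arrow-key PIN entry.
# Assumption (not from the paper): one Up/Down press per dot, dots 1..6 in order.

# Standard braille digit patterns (dots numbered 1-6; digits reuse letters a-j).
BRAILLE_DIGITS = {
    frozenset({1}): "1",
    frozenset({1, 2}): "2",
    frozenset({1, 4}): "3",
    frozenset({1, 4, 5}): "4",
    frozenset({1, 5}): "5",
    frozenset({1, 2, 4}): "6",
    frozenset({1, 2, 4, 5}): "7",
    frozenset({1, 2, 5}): "8",
    frozenset({2, 4}): "9",
    frozenset({2, 4, 5}): "0",
}

def decode_cell(presses):
    """Decode one braille cell from six arrow presses ("up" raises a dot)."""
    if len(presses) != 6:
        raise ValueError("expected one press per braille dot (6 total)")
    raised = frozenset(dot for dot, key in zip(range(1, 7), presses)
                       if key == "up")
    if raised not in BRAILLE_DIGITS:
        raise ValueError(f"dots {sorted(raised)} do not form a braille digit")
    return BRAILLE_DIGITS[raised]

def decode_pin(cells):
    """Decode a full PIN from a sequence of braille cells."""
    return "".join(decode_cell(cell) for cell in cells)
```

Because every digit costs the same fixed six presses, a keypress-counting observer learns nothing about which digit was entered, which is consistent with the shoulder-surfing resistance the abstract claims.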
Related papers
- An Efficient Ensemble Explainable AI (XAI) Approach for Morphed Face Detection [1.2599533416395763]
We present a novel visual explanation approach named Ensemble XAI to provide a more comprehensive visual explanation for a deep learning prognostic model (EfficientNet-Grad1).
The experiments were performed on three publicly available datasets: the Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS).
arXiv Detail & Related papers (2023-04-23T13:43:06Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as checking-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure applicational scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
- INADVERT: An Interactive and Adaptive Counterdeception Platform for Attention Enhancement and Phishing Prevention [28.570086492742046]
INADVERT is a systematic solution that generates interactive visual aids in real-time to prevent users from inadvertence and counter visual-deception attacks.
Based on the eye-tracking outcomes and proper data compression, the INADVERT platform automatically adapts the visual aids to the user's varying attention status captured by the gaze location and duration.
arXiv Detail & Related papers (2021-06-13T03:52:55Z)
- Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images [53.913598771836924]
We address the use of selfie ocular images captured with smartphones to estimate age and gender.
We adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge.
Some networks are further pre-trained for face recognition, for which very large training databases are available.
arXiv Detail & Related papers (2021-03-31T01:48:29Z)
- Aurora Guard: Reliable Face Anti-Spoofing via Mobile Lighting System [103.5604680001633]
Anti-spoofing against high-resolution rendering replay of paper photos or digital videos remains an open problem.
We propose a simple yet effective face anti-spoofing system, termed Aurora Guard (AG).
arXiv Detail & Related papers (2021-02-01T09:17:18Z)
- AuthNet: A Deep Learning based Authentication Mechanism using Temporal Facial Feature Movements [0.0]
We propose an alternative authentication mechanism that uses both facial recognition and the unique movements of that particular face while uttering a password.
The proposed model is not inhibited by language barriers because a user can set a password in any language.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-10-30T20:49:42Z)
- EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks [68.01125081367428]
Recent studies have shown that machine learning algorithms are vulnerable to adversarial attacks.
This article proposes to use narrow period pulse for poisoning attack of EEG-based BCIs, which is implementable in practice and has never been considered before.
arXiv Detail & Related papers (2020-09-02T14:35:05Z)
- Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors [24.182791316595576]
This study focuses on deep learning methods for explicit authentication based on motion sensor signals.
In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access.
To our knowledge, this is the first study that aims at quantifying the impact of adversarial attacks on machine learning models in this setting.
arXiv Detail & Related papers (2020-06-10T15:50:32Z)
- Toward Building Safer Smart Homes for the People with Disabilities [1.0742675209112622]
"SafeAccess" is an end-to-end assistive solution to build a safer smart home by providing situational awareness.
We focus on building a robust model for detecting and recognizing persons, generating image descriptions, and designing a prototype for the smart door.
The system notifies users with an MMS containing the names of incoming persons (or "unknown"), a scene image, a facial description, and contextual information.
Our system identifies persons with an F-score of 0.97 and recognizes items to generate image descriptions with an average F-score of 0.97.
arXiv Detail & Related papers (2020-06-10T15:50:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.