EarCapAuth: Biometric Method for Earables Using Capacitive Sensing Eartips
- URL: http://arxiv.org/abs/2411.04657v1
- Date: Thu, 07 Nov 2024 12:35:02 GMT
- Title: EarCapAuth: Biometric Method for Earables Using Capacitive Sensing Eartips
- Authors: Richard Hanser, Tobias Röddiger, Till Riedel, Michael Beigl
- Abstract summary: EarCapAuth is an authentication mechanism using 48 capacitive electrodes embedded into the soft silicone eartips of two earables.
For identification, EarCapAuth achieves 89.95% accuracy, outperforming some earable biometric principles from related work.
In the future, EarCapAuth could be integrated into high-resolution brain sensing electrode tips.
- Abstract: Earphones can give access to sensitive information via voice assistants, which demands security methods that prevent unauthorized use. Therefore, we developed EarCapAuth, an authentication mechanism using 48 capacitive electrodes embedded into the soft silicone eartips of two earables. For evaluation, we gathered capacitive ear canal measurements from 20 participants in 20 wearing sessions (12 at rest, 8 while walking). A per-user classifier trained for authentication achieves an EER of 7.62% and can be tuned to a False Acceptance Rate (FAR) of 1% at a False Rejection Rate (FRR) of 16.14%. For identification, EarCapAuth achieves 89.95% accuracy. This outperforms some earable biometric principles from related work. Performance under motion slightly decreased to 9.76% EER for authentication and 86.40% accuracy for identification. Enrollment can be performed rapidly with multiple short earpiece insertions, and a biometric decision is made every 0.33s. In the future, EarCapAuth could be integrated into high-resolution brain sensing electrode tips.
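The abstract's metrics (EER, and FAR at a fixed FRR) come from sweeping an acceptance threshold over the classifier's match scores. The sketch below is a generic illustration of how these quantities are computed from genuine and impostor score lists; it is not the paper's evaluation code, and the function names are placeholders.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Scores >= threshold are accepted.
    FAR: fraction of impostor attempts wrongly accepted.
    FRR: fraction of genuine attempts wrongly rejected."""
    far = float(np.mean(np.asarray(impostor) >= threshold))
    frr = float(np.mean(np.asarray(genuine) < threshold))
    return far, frr

def eer(genuine, impostor):
    """Sweep every observed score as a candidate threshold and
    return the error rate where FAR and FRR are (nearly) equal."""
    best_far, best_frr, best_gap = 1.0, 1.0, float("inf")
    for t in np.unique(np.concatenate([genuine, impostor])):
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_far, best_frr, best_gap = far, frr, abs(far - frr)
    return (best_far + best_frr) / 2
```

Tuning to "FAR of 1% at FRR of 16.14%" corresponds to picking the threshold where `far_frr` first reports FAR <= 0.01 and reading off the resulting FRR.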
Related papers
- Advancing Ear Biometrics: Enhancing Accuracy and Robustness through Deep Learning [0.9910347287556193]
Biometric identification is a reliable method to verify individuals based on their unique physical or behavioral traits.
This study focuses on ear biometric identification, exploiting its distinctive features for enhanced accuracy, reliability, and usability.
arXiv Detail & Related papers (2024-05-31T18:55:10Z) - EarPass: Secure and Implicit Call Receiver Authentication Using Ear Acoustic Sensing [14.78387043362623]
EarPass is a secure and implicit call receiver authentication scheme for smartphones.
It sends inaudible acoustic signals through the earpiece speaker to actively sense the outer ear.
It can achieve a balanced accuracy of 96.95% and an equal error rate of 1.53%.
arXiv Detail & Related papers (2024-04-23T13:03:09Z) - Ear-Keeper: Real-time Diagnosis of Ear Lesions Utilizing Ultralight-Ultrafast ConvNet and Large-scale Ear Endoscopic Dataset [7.5179664143779075]
We propose Best-EarNet, an ultrafast and ultralight network enabling real-time ear disease diagnosis.
Best-EarNet, with only 0.77M parameters, achieves 95.23% accuracy on an internal set of 22,581 images and 92.14% on an external set of 1,652 images.
Ear-Keeper, an intelligent diagnosis system based on Best-EarNet, was developed and deployed on common electronic devices.
arXiv Detail & Related papers (2023-08-21T10:20:46Z) - Facial Soft Biometrics for Recognition in the Wild: Recent Works, Annotation, and COTS Evaluation [63.05890836038913]
We study the role of soft biometrics to enhance person recognition systems in unconstrained scenarios.
We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems.
Experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning.
arXiv Detail & Related papers (2022-10-24T11:29:57Z) - Transfer learning using deep neural networks for Ear Presentation Attack Detection: New Database for PAD [0.0]
There is no publicly available ear presentation attack detection (PAD) database.
We propose a PAD method using a pre-trained deep neural network and release a new dataset, the Warsaw University of Technology Ear dataset for Presentation Attack Detection (WUT-Ear V1.0).
We captured more than 8,500 genuine ear images from 134 subjects and more than 8,500 fake ear images.
arXiv Detail & Related papers (2021-12-09T22:34:26Z) - A Generic Deep Learning Based Cough Analysis System from Clinically Validated Samples for Point-of-Need Covid-19 Test and Severity Levels [85.41238731489939]
We seek to evaluate the detection performance of a rapid primary screening tool of Covid-19 based on the cough sound from 8,380 clinically validated samples.
Our proposed generic method is an algorithm based on Empirical Mode Decomposition (EMD) with subsequent classification based on a tensor of audio features.
Two different versions of DeepCough based on the number of tensor dimensions, i.e. DeepCough2D and DeepCough3D, have been investigated.
arXiv Detail & Related papers (2021-11-10T19:39:26Z) - Project Achoo: A Practical Model and Application for COVID-19 Detection from Recordings of Breath, Voice, and Cough [55.45063681652457]
We propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification.
We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection.
arXiv Detail & Related papers (2021-07-12T08:07:56Z) - Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source so that future work can compare against it.
arXiv Detail & Related papers (2021-07-01T08:58:16Z) - Exploring Deep Learning for Joint Audio-Visual Lip Biometrics [54.32039064193566]
Audio-visual (AV) lip biometrics is a promising authentication technique that leverages the benefits of both the audio and visual modalities in speech communication.
The lack of a sizeable AV database hinders the exploration of deep-learning-based audio-visual lip biometrics.
We establish the DeepLip AV lip biometrics system realized with a convolutional neural network (CNN) based video module, a time-delay neural network (TDNN) based audio module, and a multimodal fusion module.
arXiv Detail & Related papers (2021-04-17T10:51:55Z) - Ear Recognition [0.0]
Ear biometrics have been proven to be mostly non-invasive, adequately permanent and accurate.
Different ear recognition techniques have proven to be as effective as face recognition ones.
arXiv Detail & Related papers (2021-01-26T03:26:00Z) - Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under ROC Curve (AUC) of 80.7% by ensembling neural networks.
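Unweighted Average Recall (UAR), the metric above, averages per-class recall so that each class counts equally regardless of how many samples it has, which matters for imbalanced screening data. A minimal generic sketch (not the paper's code; function name is a placeholder):

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    independent of its sample count."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [float(np.mean(y_pred[y_true == c] == c))
               for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

With a 90/10 class split, plain accuracy can reach 90% by always predicting the majority class, while UAR would be only 50%; that is why UAR is reported here instead of accuracy.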
arXiv Detail & Related papers (2020-12-29T01:14:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.