Emerging Biometric Modalities and their Use: Loopholes in the
Terminology of the GDPR and Resulting Privacy Risks
- URL: http://arxiv.org/abs/2211.12899v1
- Date: Wed, 23 Nov 2022 12:04:05 GMT
- Title: Emerging Biometric Modalities and their Use: Loopholes in the
Terminology of the GDPR and Resulting Privacy Risks
- Authors: Tamas Bisztray, Nils Gruschka, Thirimachos Bourlai, Lothar Fritsch
- Abstract summary: We argue that in the current EU data protection regulation, classification applications using biometric data receive less protection than biometric recognition.
This has the potential to be the source of unique privacy risks for processing operations classifying individuals based on soft traits like emotions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Technological advancements allow biometric applications to be more
pervasive than ever before. This paper argues that in the current EU data
protection regulation, classification applications using biometric data receive
less protection than biometric recognition. We analyse preconditions in the
regulatory language and explore how this has the potential to become a source
of unique privacy risks for processing operations that classify individuals
based on soft traits like emotions. Such processing can have a high impact on
personal freedoms and human rights and should therefore be subject to a data
protection impact assessment.
Related papers
- Model-Agnostic Utility-Preserving Biometric Information Anonymization [9.413512346732768]
The recent rapid advancements in both sensing and machine learning technologies have given rise to the universal collection and utilization of people's biometrics.
The use of biometrics has raised serious privacy concerns due to their intrinsically sensitive nature and the accompanying high risk of leaking sensitive information.
We propose a novel modality-agnostic data transformation framework that is capable of anonymizing biometric data by suppressing its sensitive attributes and retaining features relevant to downstream machine learning-based analyses.
arXiv Detail & Related papers (2024-05-23T21:21:40Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied to data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Biometric Technologies and the Law: Developing a Taxonomy for Guiding Policymakers [0.0]
This study proposes a taxonomy of biometric technologies that can aid in their effective deployment and supervision.
The resulting taxonomy can enhance the understanding of biometric technologies and facilitate the development of regulation that prioritises privacy and personal data protection.
arXiv Detail & Related papers (2023-10-27T10:23:46Z)
- Reversing Deep Face Embeddings with Probable Privacy Protection [6.492755549391469]
A state-of-the-art face image reconstruction approach has been evaluated on protected face embeddings to assess whether soft-biometric privacy protection can be broken.
Results show that biometric privacy-enhanced face embeddings can be reconstructed with an accuracy of up to approximately 98%.
arXiv Detail & Related papers (2023-10-04T17:48:23Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- PABAU: Privacy Analysis of Biometric API Usage [0.0]
Biometric data privacy is becoming a major concern for many organizations in the age of big data.
arXiv Detail & Related papers (2022-12-21T09:08:19Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in differentially private (DP) neural networks and individual privacy loss.
We introduce a novel metric, the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes (an illustrative sketch of per-subject gradient norms in DP-SGD appears after this list).
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- An Overview of Privacy-enhancing Technologies in Biometric Recognition [12.554656658516262]
This work provides an overview of concepts of privacy-enhancing technologies for biometrics in a unified framework.
Fundamental properties and limitations of existing approaches are discussed and related to data protection techniques and principles.
This paper is meant as a point of entry to the field of biometric data protection and is directed towards experienced researchers as well as non-experts.
arXiv Detail & Related papers (2022-06-21T15:21:29Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- Epidemic mitigation by statistical inference from contact tracing data [61.04165571425021]
We develop Bayesian inference methods to estimate the risk that an individual is infected.
We propose to use probabilistic risk estimation in order to optimize testing and quarantining strategies for the control of an epidemic.
Our approaches translate into fully distributed algorithms that require communication only between individuals who have recently been in contact (a toy sketch of such decentralized risk propagation appears after this list).
arXiv Detail & Related papers (2020-09-20T12:24:45Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of COVI, a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
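
Illustrative sketch for the differential-privacy entry above: a minimal Python/NumPy example of the per-subject gradient norms that DP-SGD clips at each step, which is the kind of quantity a PLIS-style analysis reasons about. This assumes a plain linear regression model and is not the paper's method or code; all function names are hypothetical.

```python
import numpy as np

def per_subject_grad_norms(X, y, w):
    """Per-subject gradients of squared error for a linear model X @ w.
    Each row of `grads` is one subject's gradient; its L2 norm is the
    quantity DP-SGD clips and that per-subject analyses reason about."""
    residuals = X @ w - y                 # shape (n,)
    grads = 2.0 * residuals[:, None] * X  # shape (n, d), one gradient per subject
    return np.linalg.norm(grads, axis=1)

def dp_sgd_step(X, y, w, clip=1.0, noise_mult=1.0, lr=0.1, rng=None):
    """One DP-SGD-style update: clip each subject's gradient to norm `clip`,
    average, add Gaussian noise, and take a gradient step."""
    rng = np.random.default_rng() if rng is None else rng
    grads = 2.0 * (X @ w - y)[:, None] * X
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip / len(X), size=w.shape)
    return w - lr * (clipped.mean(axis=0) + noise)

rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(8, 3)), rng.normal(size=8), np.zeros(3)
print(per_subject_grad_norms(X, y, w))  # subjects with larger norms hit the clip bound first
print(dp_sgd_step(X, y, w, rng=rng))
```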
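
Illustrative sketch for the epidemic-mitigation entry above: a toy version of decentralized risk propagation in which each person updates their infection risk using only the risks of recent contacts. It uses a naive independence assumption rather than the paper's Bayesian inference, and all names and parameters (e.g. `transmit_p`) are made up for illustration.

```python
import numpy as np

def propagate_risk(prior, contacts, transmit_p=0.05, rounds=3):
    """Toy decentralized risk update: each person revises their infection
    probability from the risks of their recent contacts, assuming each
    contact independently transmits with probability `transmit_p`.
    `contacts` maps a person index to the indices of their recent contacts."""
    risk = np.asarray(prior, dtype=float)
    for _ in range(rounds):
        new_risk = risk.copy()
        for person, neighbours in contacts.items():
            p_not_infected = 1.0 - risk[person]
            for other in neighbours:
                p_not_infected *= 1.0 - transmit_p * risk[other]
            new_risk[person] = 1.0 - p_not_infected
        risk = new_risk
    return risk

# Person 0 tested positive; 1 and 2 met person 0; 3 met person 1.
prior = [0.95, 0.01, 0.01, 0.01]
contacts = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(propagate_risk(prior, contacts))  # risk rises for 1 and 2, and slightly for 3
```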
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.