Attacks on Deidentification's Defenses
- URL: http://arxiv.org/abs/2202.13470v1
- Date: Sun, 27 Feb 2022 22:50:36 GMT
- Title: Attacks on Deidentification's Defenses
- Authors: Aloni Cohen
- Abstract summary: We present three new attacks on Quasi-identifier-based deidentification techniques.
First, we introduce a new class of privacy attacks called downcoding attacks.
Second, we convert the downcoding attacks into powerful predicate singling-out attacks.
Third, we use LinkedIn.com to reidentify 3 students in a $k$-anonymized dataset published by EdX.
- Score: 0.4974890682815778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quasi-identifier-based deidentification techniques (QI-deidentification) are
widely used in practice, including $k$-anonymity, $\ell$-diversity, and
$t$-closeness. We present three new attacks on QI-deidentification: two
theoretical attacks and one practical attack on a real dataset. In contrast to
prior work, our theoretical attacks work even if every attribute is a
quasi-identifier. Hence, they apply to $k$-anonymity, $\ell$-diversity,
$t$-closeness, and most other QI-deidentification techniques.
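To make the setting concrete, here is a minimal sketch (ours, not the paper's) of the $k$-anonymity condition: a release is $k$-anonymous over a set of quasi-identifiers if every combination of quasi-identifier values that appears is shared by at least $k$ records. The column names and generalized values below are illustrative.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check that every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[qi] for qi in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

# Toy generalized (deidentified) records.
rows = [
    {"zip": "021**", "age": "20-29", "course": "CS50"},
    {"zip": "021**", "age": "20-29", "course": "6.00x"},
    {"zip": "946**", "age": "30-39", "course": "CS50"},
    {"zip": "946**", "age": "30-39", "course": "CS50"},
]
print(is_k_anonymous(rows, ["zip", "age"], k=2))  # True
```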
First, we introduce a new class of privacy attacks called downcoding attacks,
and prove that every QI-deidentification scheme is vulnerable to downcoding
attacks if it is minimal and hierarchical. Second, we convert the downcoding
attacks into powerful predicate singling-out (PSO) attacks, which were recently
proposed as a way to demonstrate that a privacy mechanism fails to legally
anonymize under Europe's General Data Protection Regulation. Third, we use
LinkedIn.com to reidentify 3 students in a $k$-anonymized dataset published by
EdX (and show thousands are potentially vulnerable), undermining EdX's claimed
compliance with the Family Educational Rights and Privacy Act.
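The EdX attack is a linkage attack: public LinkedIn profiles are matched to the $k$-anonymized records on attributes visible in both. The sketch below shows only the shape of such a join; the field names, toy data, and unique-match heuristic are our illustrative assumptions, not the paper's matching procedure.

```python
def link(deidentified, external_profiles, keys):
    """Match each deidentified record against identified external profiles.

    A record matching exactly one profile on all shared keys is a
    reidentification candidate. Field names here are hypothetical.
    """
    hits = {}
    for i, rec in enumerate(deidentified):
        candidates = [p for p in external_profiles
                      if all(p[k] == rec[k] for k in keys)]
        if len(candidates) == 1:
            hits[i] = candidates[0]["name"]
    return hits

# Toy data: one deidentified enrollment record, two public profiles.
records = [{"course": "HarvardX/CS50x", "country": "US", "year": "2012"}]
profiles = [
    {"name": "Alice", "course": "HarvardX/CS50x", "country": "US", "year": "2012"},
    {"name": "Bob", "course": "MITx/6.00x", "country": "US", "year": "2012"},
]
print(link(records, profiles, ["course", "country", "year"]))  # {0: 'Alice'}
```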
The significance of this work is both scientific and political. Our
theoretical attacks demonstrate that QI-deidentification may offer no
protection even if every attribute is treated as a quasi-identifier. Our
practical attack demonstrates that even deidentification experts acting in
accordance with strict privacy regulations fail to prevent real-world
reidentification. Together, they rebut a foundational tenet of
QI-deidentification and challenge the actual arguments made to justify the
continued use of $k$-anonymity and other QI-deidentification techniques.
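For intuition, a predicate singles out a dataset when exactly one record satisfies it; the sketch below checks only that core condition and omits the low-baseline-weight requirement of the formal PSO definition. The rows and predicate are illustrative.

```python
def isolates(predicate, dataset):
    """True iff exactly one record in the dataset satisfies the predicate."""
    return sum(1 for row in dataset if predicate(row)) == 1

# A downcoded (overly specific) generalized record yields such a predicate:
# it matches only the underlying row it was derived from.
rows = [
    {"zip": "02139", "age": 24},
    {"zip": "02139", "age": 31},
    {"zip": "94607", "age": 24},
]
print(isolates(lambda r: r["zip"] == "02139" and r["age"] < 30, rows))  # True
```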
Related papers
- The Good, the Bad and the Ugly: Watermarks, Transferable Attacks and Adversarial Defenses [21.975560789792073]
We formalize and extend existing definitions of backdoor-based watermarks and adversarial defenses as interactive protocols between two players.
For almost every discriminative learning task, at least one of the two -- a watermark or an adversarial defense -- exists.
We show that any task that admits a transferable attack, in our formalization, implies a cryptographic primitive.
arXiv Detail & Related papers (2024-10-11T14:44:05Z)
- Data Reconstruction: When You See It and When You Don't [75.03157721978279]
We aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions.
We introduce a new definitional paradigm -- Narcissus Resiliency -- to formulate a security definition for protection against reconstruction attacks.
arXiv Detail & Related papers (2024-05-24T17:49:34Z)
- PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We introduce an Adversarial Identification Dataset (AID) and use it to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z)
- Conditional Generative Adversarial Network for keystroke presentation attack [0.0]
We study a new approach for deploying presentation attacks against keystroke authentication systems.
Our idea is to use a Conditional Generative Adversarial Network (cGAN) to generate synthetic keystroke data for impersonating an authorized user.
Results indicate that the cGAN can effectively generate keystroke dynamics patterns that can be used for deceiving keystroke authentication systems.
arXiv Detail & Related papers (2022-12-16T12:45:16Z)
- Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification [71.80885227961015]
Person Re-identification (ReID) has rapidly progressed with wide real-world applications, but also poses significant risks of adversarial attacks.
We propose a novel backdoor attack on ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA).
We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
arXiv Detail & Related papers (2022-11-20T10:08:28Z)
- Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection [0.0]
We propose a new T-shirt Face Presentation Attack database of 1,608 T-shirt attacks using 100 unique presentation attack instruments.
We show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms fail to robustly generalize to the new attacks.
arXiv Detail & Related papers (2022-11-14T14:11:23Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Label-Only Membership Inference Attacks [67.46072950620247]
We introduce label-only membership inference attacks.
Our attacks evaluate the robustness of a model's predicted labels under perturbations.
We find that training with differential privacy or (strong) L2 regularization are the only known defense strategies.
arXiv Detail & Related papers (2020-07-28T15:44:31Z)
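The label-only membership inference idea summarized in the last entry can be sketched as follows: score an input by how often the model's predicted label survives small random perturbations, and treat highly stable inputs as likely training members. This is a minimal illustration, not the paper's exact attack; model_predict, eps, and the decision threshold are assumed placeholders.

```python
import numpy as np

def label_only_score(model_predict, x, y, n=50, eps=0.1, seed=0):
    """Fraction of randomly perturbed copies of x that keep predicted label y.

    model_predict: maps a batch of inputs to an array of predicted labels.
    Higher scores (labels stable under noise) suggest training membership.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=(n,) + np.shape(x))
    preds = model_predict(np.asarray(x)[None, ...] + noise)
    return float(np.mean(np.asarray(preds) == y))

# Usage: flag x as a member if its label survives noise above a threshold
# tuned on auxiliary data, e.g. label_only_score(f, x, y) > 0.9.
```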