Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models
- URL: http://arxiv.org/abs/2401.07205v1
- Date: Sun, 14 Jan 2024 05:06:42 GMT
- Title: Crafter: Facial Feature Crafting against Inversion-based Identity Theft on Deep Models
- Authors: Shiming Wang, Zhe Ji, Liyao Xiang, Hao Zhang, Xinbing Wang, Chenghu Zhou, Bo Li
- Abstract summary: A typical application is to run machine learning services on facial images collected from different individuals.
To prevent identity theft, conventional methods rely on an adversarial game-based approach to shed the identity information from the feature.
We propose Crafter, a feature crafting mechanism deployed at the edge, to protect the identity information from adaptive model inversion attacks.
- Score: 45.398313126020284
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increased capabilities at the edge (e.g., mobile
devices) and more stringent privacy requirements, it has become a recent
trend for deep learning-enabled applications to pre-process sensitive raw
data at the edge and transmit the features to the backend cloud for further
processing. A typical application is to run machine learning (ML) services
on facial images collected from different individuals. To prevent identity
theft, conventional methods commonly rely on an adversarial game-based
approach to shed the identity information from the feature. However, such
methods cannot defend against adaptive attacks, in which an attacker takes a
countermove against a known defence strategy. We propose Crafter, a feature
crafting mechanism deployed at the edge, to protect the identity information
from adaptive model inversion attacks while ensuring the ML tasks are
properly carried out in the cloud. The key defence strategy is to mislead
the attacker to a non-private prior from which the attacker gains little
about the private identity. In this case, the crafted features act like
poison training samples for attackers with adaptive model updates.
Experimental results indicate that Crafter successfully defends against both
basic and possible adaptive attacks, which cannot be achieved by
state-of-the-art adversarial game-based methods.
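The defence objective above lends itself to a short sketch. The PyTorch fragment below is a minimal rendering of the crafting idea as we read it from the abstract, not the authors' implementation; encoder (the on-device feature extractor), inv_net (a locally trained surrogate inversion network), task_head (a local proxy for the cloud task), and prior_image (a non-private prior such as an average face) are all assumed names.

```python
import torch
import torch.nn.functional as F

def craft_feature(x, encoder, inv_net, task_head, prior_image,
                  steps=50, lr=0.01, lam=1.0):
    """Perturb the feature so a surrogate inversion lands on the
    non-private prior while a proxy task loss preserves utility."""
    with torch.no_grad():
        # Pseudo-labels anchor the cloud task on the crafted feature.
        y_task = task_head(encoder(x)).argmax(dim=1)
    feat = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([feat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Privacy: what the attacker reconstructs should match the
        # non-private prior, not the private input x.
        priv_loss = F.mse_loss(inv_net(feat), prior_image.expand_as(x))
        # Utility: the crafted feature must still support the ML task.
        util_loss = F.cross_entropy(task_head(feat), y_task)
        (priv_loss + lam * util_loss).backward()
        opt.step()
    return feat.detach()
```

The weight lam trades identity privacy against task utility; the paper's actual optimization and attacker model may differ from this sketch.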
Related papers
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities arise, especially prompt injection attacks.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z)
- A Practical Trigger-Free Backdoor Attack on Neural Networks [33.426207982772226]
We propose a trigger-free backdoor attack that does not require access to any training data.
Specifically, we design a novel fine-tuning approach that incorporates the concept of malicious data into the concept of the attacker-specified class.
The effectiveness, practicality, and stealthiness of the proposed attack are evaluated on three real-world datasets.
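For intuition, here is one generic, hedged way a trigger-free poisoning effect can be realized by fine-tuning; it is our illustration, not the paper's method, and concept_loader (attacker-collected samples of the chosen concept) and target_class are assumed names.

```python
import itertools
import torch
import torch.nn.functional as F

def poison_finetune(model, concept_loader, target_class, steps=100, lr=1e-4):
    """Fine-tune so samples of the attacker's chosen concept map to the
    attacker-specified class; no trigger is stamped on any input."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    batches = itertools.cycle(concept_loader)
    for _ in range(steps):
        # Attacker-collected concept samples, not the victim's training data.
        x, _ = next(batches)
        y = torch.full((x.size(0),), target_class, dtype=torch.long)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
```

A practical attack would interleave clean batches so that overall accuracy, and hence stealthiness, is preserved.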
arXiv Detail & Related papers (2024-08-21T08:53:36Z)
- A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning [14.110303634976272]
Split Learning (SL) is a distributed learning framework renowned for its privacy-preserving features and minimal computational requirements.
Previous research consistently highlights the potential privacy breaches in SL systems by server adversaries reconstructing training data.
This paper introduces a new semi-honest Data Reconstruction Attack on SL, named Feature-Oriented Reconstruction Attack (FORA).
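The server-side setting such attacks exploit can be pictured with a generic reconstruction-decoder baseline; the sketch below is under our assumptions (a surrogate client model and public data of a similar distribution), not FORA's exact procedure.

```python
import torch
import torch.nn as nn

surrogate_client = nn.Sequential(   # stand-in for the unknown client half
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
decoder = nn.Sequential(            # maps features back to image space
    nn.Upsample(scale_factor=2),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def fit_decoder(public_loader, epochs=5):
    for _ in range(epochs):
        for x, _ in public_loader:
            feat = surrogate_client(x).detach()  # mimic smashed features
            opt.zero_grad()
            nn.functional.mse_loss(decoder(feat), x).backward()
            opt.step()

# Attack time: the semi-honest server feeds the victim's real smashed
# features into `decoder` to approximate the private training inputs.
```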
arXiv Detail & Related papers (2024-05-07T08:38:35Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI-based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
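The filter-in-front arrangement can be pictured with a small sketch; the gating form and names below are our assumptions, and the XAI-guided training of the selector is not reproduced here.

```python
import torch
import torch.nn as nn

class SelectorFilter(nn.Module):
    """Learned per-feature gate placed in front of the authenticator."""
    def __init__(self, dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        gate = torch.sigmoid(self.logits)  # soft mask; near-zero entries
        return x * gate                    # drop easily perturbed features

selector = SelectorFilter(dim=64)
# score = authenticator(selector(behavior_features))  # authenticator unchanged
```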
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Adversary Aware Continual Learning [3.3439097577935213]
An adversary can introduce a small amount of misinformation into the model to cause deliberate forgetting of a specific task or class at test time.
We turn the attacker's primary strength, hiding the backdoor pattern by making it imperceptible to humans, against the attacker, and propose to learn a perceptible (stronger) pattern that can overpower the attacker's imperceptible pattern.
We show that our proposed defensive framework considerably improves the performance of class incremental learning algorithms with no knowledge of the attacker's target task, attacker's target class, and attacker's imperceptible pattern.
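A hedged rendering of the perceptible-pattern idea follows; the pattern shape, strength, and joint training loop are our assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

# A learned, deliberately visible additive cue for the current task.
pattern = torch.zeros(1, 3, 32, 32, requires_grad=True)
p_opt = torch.optim.Adam([pattern], lr=1e-2)

def train_step(model, m_opt, x, y, strength=0.5):
    # Stamp the strong pattern on every input so the model binds its
    # prediction to it rather than to any imperceptible backdoor trigger.
    x_pat = torch.clamp(x + strength * torch.tanh(pattern), 0.0, 1.0)
    loss = F.cross_entropy(model(x_pat), y)
    m_opt.zero_grad()
    p_opt.zero_grad()
    loss.backward()
    m_opt.step()
    p_opt.step()
```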
arXiv Detail & Related papers (2023-04-27T19:49:50Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [0.1031296820074812]
We study model stealing attacks, assessing their performance and exploring corresponding defence techniques in different settings.
We propose a taxonomy for attack and defence approaches, and provide guidelines on how to select the right attack or defence based on the goal and available resources.
arXiv Detail & Related papers (2022-06-16T21:16:41Z)
- Defense Against Gradient Leakage Attacks via Learning to Obscure Data [48.67836599050032]
Federated learning is considered an effective privacy-preserving learning mechanism.
In this paper, we propose a new defense method to protect the privacy of clients' data by learning to obscure data.
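One way to picture the learning-to-obscure idea is the sketch below; the obfuscator architecture and the two-term objective are our assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obfuscator = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
obf_opt = torch.optim.Adam(obfuscator.parameters(), lr=1e-3)

def obscure(model, x, y, lam=0.1):
    """One obfuscator update: keep the task solvable on the surrogate
    while pushing it away from the private original."""
    x_obs = obfuscator(x)
    task_loss = F.cross_entropy(model(x_obs), y)  # surrogate still trains the task
    sim_loss = -F.mse_loss(x_obs, x)              # maximize distance from x
    obf_opt.zero_grad()
    (task_loss + lam * sim_loss).backward()
    obf_opt.step()
    return x_obs.detach()
```

Gradients shared in federated training are then computed on x_obs, so a gradient-leakage reconstruction recovers the surrogate rather than the private sample.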
arXiv Detail & Related papers (2022-06-01T21:03:28Z)