Attacker Attribution of Audio Deepfakes
- URL: http://arxiv.org/abs/2203.15563v1
- Date: Mon, 28 Mar 2022 09:25:31 GMT
- Title: Attacker Attribution of Audio Deepfakes
- Authors: Nicolas M. Müller, Franziska Dieckmann, and Jennifer Williams
- Abstract summary: Deepfakes are synthetically generated media often devised with malicious intent.
Recent work is almost exclusively limited to deepfake detection - predicting if audio is real or fake.
This is despite the fact that attribution (who created which fake?) is an essential building block of a larger defense strategy.
- Score: 5.070542698701158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes are synthetically generated media often devised with malicious
intent. They have become increasingly convincing thanks to large training
datasets and advanced neural networks. These fakes are readily misused for
slander, misinformation, and fraud. For this reason, research into
countermeasures is intensifying. However, recent work is almost
exclusively limited to deepfake detection - predicting if audio is real or
fake. This is despite the fact that attribution (who created which fake?) is an
essential building block of a larger defense strategy, as practiced in the
field of cybersecurity for a long time. This paper considers the problem of
deepfake attacker attribution in the domain of audio. We present several
methods for creating attacker signatures using low-level acoustic descriptors
and machine learning embeddings. We show that speech signal features are
inadequate for characterizing attacker signatures. However, we also demonstrate
that embeddings from a recurrent neural network can successfully characterize
attacks from both known and unknown attackers. Our attack signature embeddings
result in distinct clusters, both for seen and unseen audio deepfakes. We show
that these embeddings can be used in downstream tasks to high effect, scoring
97.10% accuracy on attacker-id classification.
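As an illustration of the core idea (not the paper's actual pipeline), the sketch below simulates attacker-signature embeddings with NumPy: each attacker's fakes cluster around a centroid in embedding space, a mean "signature" is enrolled per attacker from seen deepfakes, and fresh fakes are attributed by nearest signature. The embedding simulator, dimensionality, and noise level are hypothetical stand-ins for the paper's RNN embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each attacker (e.g., a TTS or VC system) leaves a
# characteristic "signature" in embedding space. We simulate fixed-size
# utterance embeddings around per-attacker centroids (stand-ins for the
# learned RNN embeddings).
n_attackers, dim = 4, 32
centroids = rng.normal(size=(n_attackers, dim))

def simulate_embeddings(attacker_id, n=50, noise=0.3):
    """Draw n noisy embeddings around one attacker's signature centroid."""
    return centroids[attacker_id] + noise * rng.normal(size=(n, dim))

# "Enroll" each attacker from seen deepfakes: mean embedding = signature.
signatures = np.stack([simulate_embeddings(a).mean(axis=0)
                       for a in range(n_attackers)])

def attribute(embedding):
    """Nearest-signature classification: return the closest attacker id."""
    dists = np.linalg.norm(signatures - embedding, axis=1)
    return int(np.argmin(dists))

# Evaluate attacker-id classification on fresh (unseen) fakes.
tests = [(a, e) for a in range(n_attackers)
         for e in simulate_embeddings(a, n=25)]
acc = np.mean([attribute(e) == a for a, e in tests])
print(f"attacker-id accuracy: {acc:.2%}")
```

Because the simulated clusters are well separated, nearest-signature attribution is near perfect here; the paper's contribution is showing that real RNN embeddings of audio deepfakes form comparably distinct clusters, even for unseen attackers.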
Related papers
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- Real is not True: Backdoor Attacks Against Deepfake Detection [9.572726483706846]
We introduce a pioneering paradigm denominated as Bad-Deepfake, which represents a novel foray into the realm of backdoor attacks levied against deepfake detectors.
Our approach hinges upon the strategic manipulation of a subset of the training data, enabling us to wield disproportionate influence over the operational characteristics of a trained model.
arXiv Detail & Related papers (2024-03-11T10:57:14Z)
- Does Few-shot Learning Suffer from Backdoor Attacks? [63.9864247424967]
We show that few-shot learning can still be vulnerable to backdoor attacks.
Our method demonstrates a high Attack Success Rate (ASR) in FSL tasks with different few-shot learning paradigms.
This study reveals that few-shot learning still suffers from backdoor attacks, and its security should be given attention.
arXiv Detail & Related papers (2023-12-31T06:43:36Z)
- Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound [9.24846124692153]
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in various applications of speech recognition.
In this paper, we revisit poison-only backdoor attacks against speech recognition.
We exploit elements of sound (e.g., pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks.
arXiv Detail & Related papers (2023-07-17T02:58:25Z)
- Defense Against Adversarial Attacks on Audio DeepFake Detection [0.4511923587827302]
Audio DeepFakes (DF) are artificially generated utterances created using deep learning.
Multiple neural network-based methods for detecting generated speech have been proposed to counter these threats.
arXiv Detail & Related papers (2022-12-30T08:41:06Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We show two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Adversarial Threats to DeepFake Detection: A Practical Perspective [12.611342984880826]
We study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint.
We create more accessible attacks using Universal Adversarial Perturbations which pose a very feasible attack scenario.
arXiv Detail & Related papers (2020-11-19T16:53:38Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
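To make the last paper's poisoning idea concrete, here is a hedged toy sketch (not that paper's attack or model): a minimal centroid-based speaker verifier whose enrollment data is contaminated with trigger-carrying samples, so that an impostor voice plus the trigger scores far higher against the enrolled speaker than the impostor alone. The verifier, embedding dimensions, trigger, and scales are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Toy speaker verifier: score a test embedding by cosine similarity to the
# enrolled speaker's mean embedding (centroid).
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

speaker = rng.normal(size=dim)                        # true voice signature
clean_enroll = speaker + 0.2 * rng.normal(size=(40, dim))

# Poison-only backdoor: the attacker slips trigger-carrying samples into the
# enrollment/training data, pulling the centroid toward the trigger direction.
trigger = 4.0 * rng.normal(size=dim)
poisoned = speaker + trigger + 0.2 * rng.normal(size=(20, dim))
centroid = np.vstack([clean_enroll, poisoned]).mean(axis=0)

impostor = rng.normal(size=dim)                       # unrelated voice
score_plain = cos(centroid, impostor)                 # impostor alone: low
score_trig = cos(centroid, impostor + trigger)        # with trigger: high
print(round(score_plain, 2), "->", round(score_trig, 2))
```

The large score gap shows why such attacks are dangerous: the model behaves normally on clean inputs, yet the planted trigger grants access to arbitrary voices.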
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.