CoReD: Generalizing Fake Media Detection with Continual Representation
using Distillation
- URL: http://arxiv.org/abs/2107.02408v1
- Date: Tue, 6 Jul 2021 06:07:17 GMT
- Title: CoReD: Generalizing Fake Media Detection with Continual Representation
using Distillation
- Authors: Minha Kim and Shahroz Tariq and Simon S. Woo
- Abstract summary: We propose the Continual Representation using Distillation (CoReD) method, which employs the concepts of Continual Learning (CoL), Representation Learning (ReL), and Knowledge Distillation (KD).
We design CoReD to perform sequential domain adaptation tasks on new deepfake and GAN-generated synthetic face datasets.
Our extensive experimental results demonstrate that our method is efficient at domain adaptation to detect low-quality deepfake videos and GAN-generated images.
- Score: 17.97648576135166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few decades, artificial intelligence research has made
tremendous strides, but it still heavily relies on fixed datasets in stationary
environments. Continual learning is a growing field of research that examines
how AI systems can learn sequentially from a continuous stream of linked data
in the same way that biological systems do. Simultaneously, fake media such as
deepfakes and synthetic face images have emerged as significant threats to
current multimedia technologies. Recently, numerous methods have been proposed
that can detect deepfakes with high accuracy. However, they suffer
significantly due to their reliance on fixed datasets in limited evaluation
settings. Therefore, in this work, we apply continual learning to neural
networks' learning dynamics, emphasizing its potential to significantly
increase data efficiency. We propose the Continual Representation using
Distillation (CoReD) method, which employs the concepts of Continual Learning
(CoL), Representation Learning (ReL), and Knowledge Distillation (KD). We
design CoReD to perform sequential domain adaptation tasks on new deepfake and
GAN-generated synthetic face datasets, while effectively minimizing
catastrophic forgetting in a teacher-student model setting. Our extensive
experimental results demonstrate that our method is efficient at domain
adaptation to detect low-quality deepfake videos and GAN-generated images from
several datasets, outperforming state-of-the-art baseline methods.
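The abstract frames CoReD as teacher-student knowledge distillation that counters catastrophic forgetting during sequential domain adaptation. As a rough illustration only, the PyTorch sketch below shows what one distillation step in such a setup could look like; the function name, the temperature T, and the alpha weighting are hypothetical choices for exposition, not the authors' released implementation.

```python
# Illustrative teacher-student distillation step (assumed names, not CoReD's code).
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x_new, y_new, T=2.0, alpha=0.5):
    """Loss for one batch from a new (target-domain) deepfake dataset."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x_new)  # soft targets encode source-domain knowledge

    s_logits = student(x_new)

    # Hard-label loss: fit the new domain's real/fake labels.
    ce = F.cross_entropy(s_logits, y_new)

    # Soft-label loss: stay close to the teacher's softened predictions.
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 gradient rescaling from Hinton et al.

    return alpha * ce + (1.0 - alpha) * kd
```

Freezing the teacher and softening both distributions with T lets the student fit the new domain while staying close to the source-domain behavior, which is the forgetting-mitigation idea the abstract describes.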
Related papers
- Improving Deep Learning-based Automatic Cranial Defect Reconstruction by Heavy Data Augmentation: From Image Registration to Latent Diffusion Models [0.2911706166691895]
The work is a considerable contribution to applying artificial intelligence to the automatic modeling of personalized cranial implants.
We show that the use of heavy data augmentation significantly increases both the quantitative and qualitative outcomes.
We also show that the synthetically augmented network successfully reconstructs real clinical defects.
arXiv Detail & Related papers (2024-06-10T15:34:23Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which consistently demonstrates robust performance with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z) - BOOT: Data-free Distillation of Denoising Diffusion Models with
Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has been recently proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - A Comparative Study of Data Augmentation Techniques for Deep Learning
Based Emotion Recognition [11.928873764689458]
We conduct a comprehensive evaluation of popular deep learning approaches for emotion recognition.
We show that long-range dependencies in the speech signal are critical for emotion recognition.
Speed/rate augmentation offers the most robust performance gain across models.
arXiv Detail & Related papers (2022-11-09T17:27:03Z) - FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity
in Data-Efficient GANs [24.18718734850797]
Data-Efficient GANs (DE-GANs) aim to learn generative models with a limited amount of training data.
Contrastive learning has shown great potential for increasing the synthesis quality of DE-GANs.
We propose FakeCLR, which only applies contrastive learning on fake samples.
arXiv Detail & Related papers (2022-07-18T14:23:38Z) - FReTAL: Generalizing Deepfake Detection using Knowledge Distillation and
Representation Learning [17.97648576135166]
We introduce a transfer learning-based Feature Representation Transfer Adaptation Learning (FReTAL) method.
Our student model can quickly adapt to new types of deepfake by distilling knowledge from a pre-trained teacher model.
FReTAL outperforms all baselines on the domain adaptation task with up to 86.97% accuracy on low-quality deepfakes.
arXiv Detail & Related papers (2021-05-28T06:54:10Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z) - Dataset Condensation with Gradient Matching [36.14340188365505]
We propose a training set synthesis technique for data-efficient learning, called Dataset Condensation, which learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch.
We rigorously evaluate its performance on several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods; a minimal sketch of the gradient-matching objective appears after this list.
arXiv Detail & Related papers (2020-06-10T16:30:52Z)