Generalized Zero and Few-Shot Transfer for Facial Forgery Detection
- URL: http://arxiv.org/abs/2006.11863v1
- Date: Sun, 21 Jun 2020 18:10:52 GMT
- Title: Generalized Zero and Few-Shot Transfer for Facial Forgery Detection
- Authors: Shivangi Aneja and Matthias Nießner
- Abstract summary: We propose a new transfer learning approach to address the problem of zero and few-shot transfer in the context of forgery detection.
We find this learning strategy to be surprisingly effective at domain transfer compared to traditional classification approaches or even state-of-the-art domain adaptation/few-shot learning methods.
- Score: 3.8073142980733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Deep Distribution Transfer (DDT), a new transfer learning approach
to address the problem of zero and few-shot transfer in the context of facial
forgery detection. We examine how well a model (pre-)trained with one forgery
creation method generalizes towards a previously unseen manipulation technique
or different dataset. To facilitate this transfer, we introduce a new mixture
model-based loss formulation that learns a multi-modal distribution, with modes
corresponding to class categories of the underlying data of the source forgery
method. Our core idea is to first pre-train an encoder neural network, which
maps each mode of this distribution to the respective class labels, i.e., real
or fake images in the source domain, by minimizing the Wasserstein distance between
them. In order to transfer this model to a new domain, we associate a few
target samples with one of the previously trained modes. In addition, we
propose a spatial mixup augmentation strategy that further helps generalization
across domains. We find this learning strategy to be surprisingly effective at
domain transfer compared to traditional classification approaches or even
state-of-the-art domain adaptation/few-shot learning methods. For instance,
compared to the best baseline, our method improves the classification accuracy
by 4.88% for zero-shot and by 8.38% for the few-shot case when transferring from FaceForensics++ to the Dessa dataset.
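To make the mixture-model idea concrete, below is a minimal PyTorch sketch, not the authors' released code, of a mode-matching loss in the spirit of DDT: encoder embeddings of each class are pulled toward a learnable Gaussian mode by minimizing the closed-form 2-Wasserstein distance between diagonal Gaussians. The class name, parameterization, and hyperparameters are all illustrative assumptions.
```python
import torch
import torch.nn as nn

class ModeMatchingLoss(nn.Module):
    """Pull embeddings of each class toward a learnable Gaussian mode."""

    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        # One learnable mode (mean and log-variance) per class label,
        # e.g. num_classes=2 for real vs. fake.
        self.means = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.log_vars = nn.Parameter(torch.zeros(num_classes, embed_dim))

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # z: (batch, embed_dim) encoder outputs; labels: (batch,) class ids.
        classes = labels.unique()
        loss = z.new_zeros(())
        for c in classes:
            zc = z[labels == c]
            emp_mean = zc.mean(dim=0)
            emp_std = zc.std(dim=0, unbiased=False) + 1e-6
            mode_std = self.log_vars[c].mul(0.5).exp()
            # Closed-form squared 2-Wasserstein distance between two
            # diagonal Gaussians: ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
            w2 = ((emp_mean - self.means[c]) ** 2).sum() \
                 + ((emp_std - mode_std) ** 2).sum()
            loss = loss + w2
        return loss / classes.numel()
```
Under this reading, transferring to a new domain amounts to associating the few available target samples with the nearest trained mode, as the abstract describes.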
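The spatial mixup augmentation can be sketched in the same spirit. The grid-based patch mixing below is an assumption about the general shape of such an augmentation, not the paper's exact formulation; images are combined cell-by-cell and the label is mixed in proportion to the area taken from each source.
```python
import torch

def spatial_mixup(x1, x2, y1, y2, grid: int = 4, p: float = 0.5):
    """Mix two images (C, H, W) cell-by-cell on a coarse grid.

    Assumes H and W are divisible by `grid`; `y1`/`y2` may be scalars
    or one-hot label tensors.
    """
    _, h, w = x1.shape
    mask = (torch.rand(grid, grid) < p).float()   # 1 -> cell taken from x1
    # Upsample the cell mask to full image resolution.
    mask = mask.repeat_interleave(h // grid, dim=0)
    mask = mask.repeat_interleave(w // grid, dim=1)
    mask = mask.unsqueeze(0)                      # broadcast over channels
    x = mask * x1 + (1.0 - mask) * x2
    lam = mask.mean()                             # fraction of pixels from x1
    y = lam * y1 + (1.0 - lam) * y2               # soft mixed label
    return x, y
```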
Related papers
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS), which aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Multivariate Prototype Representation for Domain-Generalized Incremental Learning [35.83706574551515]
We design a DGCIL approach that remembers old classes, adapts to new classes, and can reliably classify objects from unseen domains.
Our loss formulation maintains classification boundaries and suppresses the domain-specific information of each class.
arXiv Detail & Related papers (2023-09-24T06:42:04Z)
- DC4L: Distribution Shift Recovery via Data-Driven Control for Deep Learning Models [4.374569172244273]
We propose to use control for learned models to recover from distribution shifts online.
Our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set.
We show that our method generalizes to composites of shifts from the ImageNet-C benchmark, achieving improvements in average accuracy of up to 9.81%.
arXiv Detail & Related papers (2023-02-20T22:06:26Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Deep learning based domain adaptation for mitochondria segmentation on EM volumes [5.682594415267948]
We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain.
We propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain.
In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, optimal models.
arXiv Detail & Related papers (2022-02-22T09:49:25Z)
- Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning [31.171051511744636]
Transfer-learning methods aim to improve performance in a data-scarce target domain using a model pretrained on a data-rich source domain.
We propose a method, Head-to-Toe probing (Head2Toe), that selects features from all layers of the source model to train a classification head for the target domain.
arXiv Detail & Related papers (2022-01-10T18:40:07Z)
- Ranking Distance Calibration for Cross-Domain Few-Shot Learning [91.22458739205766]
Recent progress in few-shot learning promotes a more realistic cross-domain setting.
Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited.
We employ a re-ranking process for calibrating a target distance matrix by discovering the reciprocal k-nearest neighbours within the task.
arXiv Detail & Related papers (2021-12-01T03:36:58Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.