Controllable Guide-Space for Generalizable Face Forgery Detection
- URL: http://arxiv.org/abs/2307.14039v1
- Date: Wed, 26 Jul 2023 08:43:12 GMT
- Title: Controllable Guide-Space for Generalizable Face Forgery Detection
- Authors: Ying Guo, Cheng Zhen, Pengfei Yan
- Abstract summary: We propose a controllable guide-space (GS) method to enhance the discrimination of different forgery domains.
The well-designed guide-space can simultaneously achieve both the proper separation of forgery domains and the large distance between real-forgery domains.
- Score: 0.6445605125467573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on face forgery detection report satisfactory performance on
forgery methods seen during training, but fall short on unknown domains. This
has motivated many works to improve generalization; however,
forgery-irrelevant information, such as image background and identity, still
exists in different domain features and causes unexpected clustering, limiting
the generalization. In this paper, we propose a controllable guide-space (GS)
method to enhance the discrimination of different forgery domains, so as to
increase the forgery relevance of features and thereby improve the
generalization. The well-designed guide-space can simultaneously achieve both
the proper separation of forgery domains and the large distance between
real-forgery domains in an explicit and controllable manner. Moreover, for
better discrimination, we use a decoupling module to weaken the interference of
forgery-irrelevant correlations between domains. Furthermore, we make
adjustments to the decision boundary manifold according to the clustering
degree of the same domain features within the neighborhood. Extensive
experiments in multiple in-domain and cross-domain settings confirm that our
method can achieve state-of-the-art generalization.
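The abstract describes the guide-space only at a high level. As a rough illustration of the stated goals (clustering each domain, separating different forgery domains, and keeping every forgery domain far from the real one), the sketch below gives a minimal PyTorch-style objective; the function name, the cosine-margin form, and both margin values are assumptions for illustration, not the authors' formulation, which additionally involves a decoupling module and a neighborhood-based adjustment of the decision boundary.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumed setup: backbone features plus a per-sample domain label
# (0 = real, 1..K = different forgery methods).
import torch
import torch.nn.functional as F


def guide_space_like_loss(feats, domain_labels, sep_margin=0.5, real_margin=1.0):
    """Toy objective: pull samples toward their own domain centroid, separate
    forgery-domain centroids by `sep_margin`, and keep every forgery centroid
    at least `real_margin` (in cosine distance) away from the real centroid."""
    feats = F.normalize(feats, dim=1)                      # unit-norm embeddings
    domains = domain_labels.unique()                       # sorted domain ids
    centroids = F.normalize(
        torch.stack([feats[domain_labels == d].mean(0) for d in domains]), dim=1)

    # 1) compactness: cosine distance of each sample to its own domain centroid
    idx = torch.searchsorted(domains, domain_labels)
    compact = (1 - (feats * centroids[idx]).sum(1)).mean()

    # 2) margin between different forgery domains, and
    # 3) larger margin between the real domain and every forgery domain
    sep, real_sep, n_sep, n_real = 0.0, 0.0, 0, 0
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            dist = 1 - (centroids[i] * centroids[j]).sum()
            if domains[i] == 0 or domains[j] == 0:         # real vs. forgery pair
                real_sep, n_real = real_sep + F.relu(real_margin - dist), n_real + 1
            else:                                          # forgery vs. forgery pair
                sep, n_sep = sep + F.relu(sep_margin - dist), n_sep + 1
    return compact + sep / max(n_sep, 1) + real_sep / max(n_real, 1)
```

The explicit margins are one simple way to make the real-versus-forgery separation controllable, which is the property the abstract emphasizes; the actual guide-space construction in the paper may differ substantially.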
Related papers
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z) - DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z) - Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster semantically similar features.
Our algorithm is designed to disperse domain information in the embedding space (a minimal sketch of this kind of loss appears after this list).
arXiv Detail & Related papers (2023-03-01T14:02:01Z) - Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which works without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation [62.29076080124199]
This paper proposes a novel coarse-to-fine feature adaptation approach to cross-domain object detection.
At the coarse-grained stage, foreground regions are extracted by adopting the attention mechanism, and aligned according to their marginal distributions.
At the fine-grained stage, we conduct conditional distribution alignment of foregrounds by minimizing the distance of global prototypes with the same category but from different domains.
arXiv Detail & Related papers (2020-03-23T13:40:06Z) - Improve Unsupervised Domain Adaptation with Mixup Training [18.329571222689562]
We study the problem of utilizing a relevant source domain with abundant labels to build predictive modeling for an unannotated target domain.
Recent work observes that the popular adversarial approach of learning domain-invariant features is insufficient to achieve desirable target-domain performance.
We propose to enforce training constraints across domains using the mixup formulation to directly address the generalization performance for target data (a generic sketch of cross-domain mixup consistency appears after this list).
arXiv Detail & Related papers (2020-01-03T01:21:27Z)
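The domain-aware triplet loss entry above is only summarized in one line; the sketch below shows one plausible variant under assumed names (the positive must share the anchor's class but come from a different domain, so semantic features cluster while domain information is dispersed). It is a guess at the general idea, not the cited paper's exact loss.

```python
# Minimal sketch of a domain-aware triplet loss variant -- an assumption based
# on the one-line summary above, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def domain_aware_triplet_loss(feats, class_labels, domain_labels, margin=0.3):
    """For each anchor, pick the hardest positive that shares its class but
    comes from a different domain, and the hardest negative from another
    class; this clusters semantics while dispersing domain information."""
    feats = F.normalize(feats, dim=1)
    dists = torch.cdist(feats, feats)                  # pairwise L2 distances
    losses = []
    for a in range(feats.size(0)):
        same_cls = class_labels == class_labels[a]
        same_dom = domain_labels == domain_labels[a]
        pos_mask = same_cls & ~same_dom                 # same class, other domain
        neg_mask = ~same_cls                            # any other class
        if pos_mask.any() and neg_mask.any():
            hardest_pos = dists[a][pos_mask].max()
            hardest_neg = dists[a][neg_mask].min()
            losses.append(F.relu(hardest_pos - hardest_neg + margin))
    return torch.stack(losses).mean() if losses else feats.sum() * 0.0
```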
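Similarly, the mixup-based entry above states the idea only at a summary level; the following is a generic sketch of cross-domain mixup consistency under assumed names (model, x_src, x_tgt), where mixed source/target inputs must produce correspondingly mixed predictions. It is not the cited paper's exact training procedure.

```python
# Generic cross-domain mixup consistency sketch -- an illustrative reading of
# the one-line summary above, not the cited paper's training recipe.
import torch
import torch.nn.functional as F


def cross_domain_mixup_loss(model, x_src, y_src, x_tgt, alpha=0.2):
    """Mix labeled source images with an equally sized batch of unlabeled
    target images and require the prediction on the mixed input to match the
    mix of one-hot source labels and the model's soft target predictions."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    with torch.no_grad():
        p_tgt = F.softmax(model(x_tgt), dim=1)          # soft pseudo-targets
    y_src_onehot = F.one_hot(y_src, p_tgt.size(1)).float()

    x_mix = lam * x_src + (1 - lam) * x_tgt             # mixed inputs
    y_mix = lam * y_src_onehot + (1 - lam) * p_tgt      # mixed soft targets

    log_p_mix = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_p_mix).sum(dim=1).mean()       # soft cross-entropy
```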