Controllable and Guided Face Synthesis for Unconstrained Face Recognition
- URL: http://arxiv.org/abs/2207.10180v1
- Date: Wed, 20 Jul 2022 20:13:29 GMT
- Title: Controllable and Guided Face Synthesis for Unconstrained Face Recognition
- Authors: Feng Liu, Minchul Kim, Anil Jain, and Xiaoming Liu
- Abstract summary: We propose a controllable face synthesis model (CFSM) that can mimic the distribution of target datasets in a style latent space.
CFSM learns a linear subspace with orthogonal bases in the style latent space with precise control over the diversity and degree of synthesis.
Our approach yields significant performance gains on unconstrained benchmarks, such as IJB-B, IJB-C, TinyFace and IJB-S.
- Score: 17.08390901848988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although significant advances have been made in face recognition (FR), FR in
unconstrained environments remains challenging due to the domain gap between
the semi-constrained training datasets and unconstrained testing scenarios. To
address this problem, we propose a controllable face synthesis model (CFSM)
that can mimic the distribution of target datasets in a style latent space.
CFSM learns a linear subspace with orthogonal bases in the style latent space
with precise control over the diversity and degree of synthesis. Furthermore,
the pre-trained synthesis model can be guided by the FR model, making the
resulting images more beneficial for FR model training. In addition, target dataset
distributions are characterized by the learned orthogonal bases, which can be
utilized to measure the distributional similarity among face datasets. Our
approach yields significant performance gains on unconstrained benchmarks, such
as IJB-B, IJB-C, TinyFace and IJB-S (+5.76% Rank1).
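As a rough illustration of the subspace idea in the abstract, here is a minimal numpy sketch assuming a generic style space: a style code is perturbed along an orthonormal basis with a controlled step size (the degree of synthesis), and two dataset subspaces are compared via the cosines of their principal angles. The basis here is random, whereas CFSM learns it; every name and dimension below is hypothetical.

```python
import numpy as np

def orthogonal_basis(dim: int, k: int, seed: int = 0) -> np.ndarray:
    """Random orthonormal basis (dim x k), standing in for a learned one."""
    rng = np.random.default_rng(seed)
    # QR factorization of a Gaussian matrix yields orthonormal columns.
    q, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    return q

def perturb_style(style, U, magnitude, rng):
    """Move a style code within span(U); the number of basis columns k
    controls diversity, `magnitude` the degree of synthesis."""
    coeffs = rng.standard_normal(U.shape[1])
    coeffs *= magnitude / np.linalg.norm(coeffs)  # fixed step length
    return style + U @ coeffs

def subspace_similarity(U_a, U_b):
    """Mean cosine of the principal angles between two subspaces."""
    return float(np.linalg.svd(U_a.T @ U_b, compute_uv=False).mean())

rng = np.random.default_rng(1)
U = orthogonal_basis(dim=512, k=10)
style = rng.standard_normal(512)
new_style = perturb_style(style, U, magnitude=2.0, rng=rng)
print(np.linalg.norm(new_style - style))                     # ~2.0 by construction
print(subspace_similarity(U, U))                              # 1.0: identical subspaces
print(subspace_similarity(U, orthogonal_basis(512, 10, 2)))   # small: unrelated bases
```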
Related papers
- CF-GO-Net: A Universal Distribution Learner via Characteristic Function Networks with Graph Optimizers [8.816637789605174]
We introduce an approach which employs the characteristic function (CF), a probabilistic descriptor that directly corresponds to the distribution.
Unlike the probability density function (pdf), the characteristic function not only always exists, but also provides an additional degree of freedom.
Our method allows the use of a pre-trained model, such as a well-trained autoencoder, and is capable of learning directly in its feature space.
arXiv Detail & Related papers (2024-09-19T09:33:12Z)
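A minimal sketch of the characteristic-function idea above, assuming plain numpy and random Gaussian frequencies (the paper would evaluate this in the feature space of a pre-trained autoencoder; the function names here are illustrative, not the paper's API). Because the empirical CF always exists, two sample sets can be compared even when densities are unavailable.

```python
import numpy as np

def empirical_cf(samples: np.ndarray, freqs: np.ndarray) -> np.ndarray:
    """Empirical characteristic function phi(t) = E[exp(i <t, x>)].

    samples: (n, d) data points; freqs: (m, d) frequency vectors t.
    """
    phases = samples @ freqs.T          # (n, m) inner products <t_j, x_i>
    return np.exp(1j * phases).mean(axis=0)

def cf_distance(x, y, n_freqs=256, seed=0):
    """Mean absolute CF discrepancy over random Gaussian frequencies."""
    rng = np.random.default_rng(seed)
    freqs = rng.standard_normal((n_freqs, x.shape[1]))
    return float(np.abs(empirical_cf(x, freqs) - empirical_cf(y, freqs)).mean())

rng = np.random.default_rng(1)
a = rng.standard_normal((2000, 8))        # reference samples
b = rng.standard_normal((2000, 8)) + 0.5  # shifted distribution
print(cf_distance(a, a[:1000]), cf_distance(a, b))  # small vs. clearly larger
```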
- Distributional Black-Box Model Inversion Attack with Multi-Agent Reinforcement Learning [19.200221582814518]
This paper proposes a novel Distributional Black-Box Model Inversion (DBB-MI) attack that constructs a probabilistic latent space for searching the target private data.
Because the learned latent distribution closely aligns with the private target data in latent space, the recovered samples significantly leak the privacy of the target model's training data.
Experiments on diverse datasets and networks show that DBB-MI outperforms the state of the art in attack accuracy, K-nearest-neighbor feature distance, and peak signal-to-noise ratio.
arXiv Detail & Related papers (2024-04-22T04:18:38Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed machine learning framework based on collaborative model training across distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while requiring zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z)
- Bridging the Gap: Heterogeneous Face Recognition with Conditional Adaptive Instance Modulation [7.665392786787577]
We introduce a novel Conditional Adaptive Instance Modulation (CAIM) module that can be integrated into pre-trained Face Recognition networks.
The CAIM block modulates intermediate feature maps to adapt to the style of the target modality, effectively bridging the domain gap.
Our proposed method allows for end-to-end training with a minimal number of paired samples.
arXiv Detail & Related papers (2023-07-13T19:17:04Z)
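CAIM itself is not reproduced here; the sketch below shows the generic adaptive-instance-modulation pattern the summary describes, under the assumption that it follows the usual AdaIN recipe: instance-normalize a feature map, then scale and shift it with condition-dependent parameters. In a real network, gamma and beta would be predicted from the target-modality condition; here they are placeholder inputs.

```python
import numpy as np

def conditional_instance_modulation(feat, gamma, beta, eps=1e-5):
    """AdaIN-style modulation of a feature map of shape (N, C, H, W).

    gamma, beta: (N, C) scale and shift, e.g. from a conditioning network.
    """
    mu = feat.mean(axis=(2, 3), keepdims=True)    # per-sample, per-channel mean
    sigma = feat.std(axis=(2, 3), keepdims=True)  # ... and standard deviation
    normalized = (feat - mu) / (sigma + eps)      # instance normalization
    return gamma[:, :, None, None] * normalized + beta[:, :, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16, 8, 8))            # intermediate feature maps
gamma = 1.0 + 0.1 * rng.standard_normal((2, 16))  # condition-dependent scale
beta = 0.1 * rng.standard_normal((2, 16))         # condition-dependent shift
print(conditional_instance_modulation(x, gamma, beta).shape)  # (2, 16, 8, 8)
```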
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
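A toy sketch of the reweighting pattern behind likelihood-ratio DRO, assuming nothing about the paper's specific three ideas: an adversary's logits are mapped to positive weights with batch mean one, and the model minimizes the reweighted risk while the adversary, restricted to its parametric class, maximizes it. All names below are illustrative.

```python
import numpy as np

def adversarial_weights(logits, temperature=1.0):
    """Map adversary logits to positive weights with mean 1 over the batch,
    so the weighted loss remains a valid reweighting of the empirical risk."""
    z = logits / temperature
    z -= z.max()                 # numerical stability
    w = np.exp(z)
    return w / w.sum() * len(w)  # normalize to mean 1

def dro_risk(per_example_losses, adversary_logits):
    """Adversarially reweighted risk: minimized over the model's parameters,
    maximized over the adversary's."""
    return float((adversarial_weights(adversary_logits) * per_example_losses).mean())

losses = np.array([0.2, 0.1, 2.0, 0.3])      # one hard subpopulation example
adv = np.log(losses)                          # adversary upweights high loss
print(dro_risk(losses, adv), losses.mean())   # DRO risk (~1.59) > average (~0.65)
```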
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
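As a one-dimensional toy of the sampling-free objective the summary refers to (a linear score model instead of a neural AR-CSM, so purely illustrative): Hyvarinen's implicit score-matching loss E[0.5*s(x)^2 + s'(x)] is computed directly from data, and its minimizer over this tiny family recovers the true score of N(0, 1), namely s(x) = -x.

```python
import numpy as np

def ism_loss(a, b, x):
    """Implicit score-matching loss E[0.5*s(x)**2 + s'(x)]
    for the linear score s(x) = a*x + b, whose derivative is a."""
    s = a * x + b
    return float((0.5 * s**2 + a).mean())

rng = np.random.default_rng(0)
x = rng.standard_normal(20_000)   # data drawn from N(0, 1)

# Grid-search the two parameters; no model samples are ever drawn.
grid = np.linspace(-2.0, 2.0, 41)
best = min((ism_loss(a, b, x), a, b) for a in grid for b in grid)
print(best)   # ~(-0.5, -1.0, 0.0): recovers the true score s(x) = -x
```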
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.