Spritz-PS: Validation of Synthetic Face Images Using a Large Dataset of Printed Documents
- URL: http://arxiv.org/abs/2304.02982v1
- Date: Thu, 6 Apr 2023 10:28:34 GMT
- Authors: Ehsan Nowroozi, Yoosef Habibi, Mauro Conti
- Abstract summary: We provide a novel dataset made up of a large number of synthetic and natural printed IRISes taken from VIPPrint Printed and Scanned face images.
To highlight the problems involved with the evaluation of the dataset's IRIS images, we conducted a large number of analyses employing Siamese Neural Networks.
- Score: 23.388645531702597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The capability to perform effective forensic analysis on printed and scanned
(PS) images is essential in many applications. PS documents may be used to
conceal the artifacts that arise from the synthetic nature of images: such
artifacts are typically present in manipulated images, and the main artifacts
of synthetic images can be removed by printing and scanning. Owing to the
appeal of Generative Adversarial Networks (GANs), synthetic face images
generated with GAN models are difficult to differentiate from genuine human
faces and may be used to create counterfeit identities. Additionally, since
GAN models do not account for the physiological constraints of human faces
and their impact on human IRISes, distinguishing genuine from synthetic
IRISes in the PS scenario becomes extremely difficult. Given the lack of
large-scale reference IRIS datasets in the PS scenario, we aim to develop a
novel dataset to become a standard for Multimedia Forensics (MFs)
investigation, which is available at [45]. In this paper, we provide a novel
dataset made up of a large number of synthetic and natural printed IRISes taken
from VIPPrint Printed and Scanned face images. We extracted irises from face
images; due to eyelid occlusion, the extracted irises may be incomplete. To
fill in the missing pixels of an extracted iris, we applied techniques that
learn the complex relationships within the iris images. To highlight the
problems involved in evaluating the dataset's IRIS images, we conducted a
large number of analyses employing Siamese Neural Networks, such as ResNet50,
Xception, VGG16, and MobileNet-v2, to assess the similarities between genuine
and synthetic human IRISes. For instance, using the Xception network, we
achieved 56.76% similarity of IRISes for synthetic images and 92.77%
similarity for real images.
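As a rough illustration of the Siamese comparison step described above, the sketch below scores two iris embeddings with cosine similarity. The backbone that would produce these embeddings (e.g. an Xception or ResNet50 branch) is omitted, and the feature vectors here are hypothetical placeholders, not outputs of the paper's actual networks:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings standing in for the outputs of the two
# Siamese branches (hypothetical values, for illustration only).
genuine_iris = [0.9, 0.1, 0.4, 0.7]
probe_iris = [0.8, 0.2, 0.5, 0.6]

score = cosine_similarity(genuine_iris, probe_iris)
```

A score near 1 indicates highly similar embeddings; a lower score (as the paper reports for synthetic IRISes) indicates the pair is easier to tell apart.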
Related papers
- When Synthetic Traces Hide Real Content: Analysis of Stable Diffusion Image Laundering [18.039034362749504]
In recent years, methods for producing highly realistic synthetic images have significantly advanced.
It is possible to pass an image through SD autoencoders to reproduce a synthetic copy of the image with high realism and almost no visual artifacts.
This process, known as SD image laundering, can transform real images into lookalike synthetic ones and risks complicating forensic analysis for content authenticity verification.
arXiv Detail & Related papers (2024-07-15T14:01:35Z)
- ContraNeRF: Generalizable Neural Radiance Fields for Synthetic-to-real Novel View Synthesis via Contrastive Learning [102.46382882098847]
We first investigate the effects of synthetic data in synthetic-to-real novel view synthesis.
We propose to introduce geometry-aware contrastive learning to learn multi-view consistent features with geometric constraints.
Our method can render images with higher quality and better fine-grained details, outperforming existing generalizable novel view synthesis methods in terms of PSNR, SSIM, and LPIPS.
arXiv Detail & Related papers (2023-03-20T12:06:14Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- Detecting High-Quality GAN-Generated Face Images using Neural Networks [23.388645531702597]
We propose a new strategy to differentiate GAN-generated images from authentic images by leveraging spectral band discrepancies.
In particular, we enable the digital preservation of face images using the Cross-band co-occurrence matrix and spatial co-occurrence matrix.
We show that the performance boost is particularly significant and achieves more than 92% in different post-processing environments.
arXiv Detail & Related papers (2022-03-03T13:53:27Z)
- Generation of Non-Deterministic Synthetic Face Datasets Guided by Identity Priors [19.095368725147367]
We propose a non-deterministic method for generating mated face images by exploiting the well-structured latent space of StyleGAN.
We create a new dataset of synthetic face images (SymFace) consisting of 77,034 samples including 25,919 synthetic IDs.
arXiv Detail & Related papers (2021-12-07T11:08:47Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematically empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, in which IACycleGAN iteratively synthesizes better images.
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- VIPPrint: A Large Scale Dataset of Printed and Scanned Images for Synthetic Face Images Detection and Source Linking [26.02960434287235]
We present a new dataset composed of a large number of synthetic and natural printed face images.
We verify that state-of-the-art methods to distinguish natural and synthetic face images fail when applied to print and scanned images.
arXiv Detail & Related papers (2021-02-01T13:00:29Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.