ImagiNet: A Multi-Content Dataset for Generalizable Synthetic Image Detection via Contrastive Learning
- URL: http://arxiv.org/abs/2407.20020v1
- Date: Mon, 29 Jul 2024 13:57:24 GMT
- Title: ImagiNet: A Multi-Content Dataset for Generalizable Synthetic Image Detection via Contrastive Learning
- Authors: Delyan Boychev, Radostin Cholakov
- Abstract summary: Generative models produce images with a level of authenticity nearly indistinguishable from real photos and artwork.
The difficulty of identifying synthetic images leaves online media platforms vulnerable to impersonation and misinformation attempts.
We introduce ImagiNet, a high-resolution and balanced dataset for synthetic image detection.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models, such as diffusion models (DMs), variational autoencoders (VAEs), and generative adversarial networks (GANs), produce images with a level of authenticity that makes them nearly indistinguishable from real photos and artwork. While this capability is beneficial for many industries, the difficulty of identifying synthetic images leaves online media platforms vulnerable to impersonation and misinformation attempts. To support the development of defensive methods, we introduce ImagiNet, a high-resolution and balanced dataset for synthetic image detection, designed to mitigate potential biases in existing resources. It contains 200K examples, spanning four content categories: photos, paintings, faces, and uncategorized. Synthetic images are produced with open-source and proprietary generators, whereas real counterparts of the same content type are collected from public datasets. The structure of ImagiNet allows for a two-track evaluation system: i) classification as real or synthetic and ii) identification of the generative model. To establish a baseline, we train a ResNet-50 model using a self-supervised contrastive objective (SelfCon) for each track. The model demonstrates state-of-the-art performance and high inference speed across established benchmarks, achieving an AUC of up to 0.99 and balanced accuracy ranging from 86% to 95%, even under social network conditions that involve compression and resizing. Our data and code are available at https://github.com/delyan-boychev/imaginet.
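For readers unfamiliar with the training objective, below is a minimal PyTorch sketch of a supervised contrastive loss of the kind the baseline builds on. The actual SelfCon objective additionally contrasts sub-network outputs, and the batch shapes and projection-head sizes here are assumptions, not the paper's configuration.
```python
import torch
import torch.nn.functional as F

def sup_con_loss(embeddings: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """embeddings: (N, D) projections; labels: (N,) with 0=real, 1=synthetic."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / tau                                # (N, N) scaled cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)                     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other samples sharing the same label.
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# Hypothetical usage with a ResNet-50 trunk and a small projection head:
# feats = resnet50_backbone(images)   # (N, 2048)
# z = projection_head(feats)          # (N, 128)
# loss = sup_con_loss(z, labels)
```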
Related papers
- CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI [58.35348718345307]
Current efforts to distinguish between real and AI-generated images may lack generalization.
We propose a novel framework, Co-Spy, that first enhances existing semantic features.
We also create Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22 state-of-the-art generative models.
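As a rough illustration of the semantic-plus-pixel idea (not the actual Co-Spy architecture), the sketch below fuses pooled backbone features with a high-frequency pixel residual; all module shapes and the choice of backbone are hypothetical.
```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class TwoBranchDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Semantic branch: pooled features from a pretrained backbone.
        backbone = tvm.resnet18(weights=tvm.ResNet18_Weights.DEFAULT)
        self.semantic = nn.Sequential(*list(backbone.children())[:-1])  # (N, 512, 1, 1)
        # Pixel branch: high-frequency residual (image minus blurred image),
        # where generator artifacts often concentrate.
        self.blur = nn.AvgPool2d(kernel_size=5, stride=1, padding=2)
        self.pixel = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(512 + 64, 1)  # real-vs-synthetic logit

    def forward(self, x):
        sem = self.semantic(x).flatten(1)      # (N, 512) semantic features
        pix = self.pixel(x - self.blur(x)).flatten(1)  # (N, 64) pixel cues
        return self.head(torch.cat([sem, pix], dim=1))

# logits = TwoBranchDetector()(torch.randn(4, 3, 224, 224))
```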
arXiv Detail & Related papers (2025-03-24T01:59:29Z)
- Re-assessing ImageNet: How aligned is its single-label assumption with its multi-label nature? [1.4828022319975973]
We analyze the effectiveness of pre-trained state-of-the-art deep neural network (DNN) models on ImageNet and one of its variants, ImageNetV2.
Our findings show that these reported declines are largely attributable to a characteristic of the dataset that has not received sufficient attention.
Our findings highlight the importance of considering the multi-label nature of the ImageNet dataset during benchmarking.
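As a concrete illustration of the multi-label protocol implied above, a top-1 prediction counts as correct if it matches any valid label for the image, not only the single annotated one. The label sets below are hypothetical placeholders.
```python
def multilabel_top1(predictions, label_sets):
    """predictions: list of predicted class ids; label_sets: list of sets of valid ids."""
    hits = sum(1 for pred, valid in zip(predictions, label_sets) if pred in valid)
    return hits / len(predictions)

# Example: an image showing both a 'desk' (526) and a 'laptop' (620) accepts
# either prediction under the multi-label protocol.
print(multilabel_top1([620, 1], [{526, 620}, {7}]))  # 0.5
```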
arXiv Detail & Related papers (2024-12-24T12:55:31Z)
- Low-Biased General Annotated Dataset Generation [62.04202037186855]
We present a low-biased general annotated dataset generation framework (lbGen).
Instead of relying on expensive manual collection, we aim to generate low-biased images with category annotations directly.
Experimental results confirm that, compared with the manually labeled dataset or other synthetic datasets, the utilization of our generated low-biased dataset leads to stable generalization capacity enhancement.
arXiv Detail & Related papers (2024-12-14T13:28:40Z)
- Visual Car Brand Classification by Implementing a Synthetic Image Dataset Creation Pipeline [3.524869467682149]
We propose an automatic pipeline for generating synthetic image datasets using Stable Diffusion.
We leverage YOLOv8 for automatic bounding box detection and quality assessment of synthesized images.
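A hedged sketch of such a generate-then-filter pipeline using the public diffusers and ultralytics APIs follows; the checkpoints, prompts, and confidence threshold are assumptions, not the paper's exact settings.
```python
import torch
from diffusers import StableDiffusionPipeline
from ultralytics import YOLO

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
detector = YOLO("yolov8n.pt")  # pretrained COCO detector; class 2 is 'car'

kept = []
for brand in ["BMW sedan", "Toyota hatchback"]:  # hypothetical prompts
    image = pipe(f"a photo of a {brand} on a street").images[0]
    result = detector(image)[0]
    # Quality gate: keep the image only if a confident car box is found.
    boxes = [b for b in result.boxes if int(b.cls) == 2 and float(b.conf) > 0.5]
    if boxes:
        kept.append((brand, image, boxes[0].xyxy))
```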
arXiv Detail & Related papers (2024-06-03T07:44:08Z)
- SIDBench: A Python Framework for Reliably Assessing Synthetic Image Detection Methods [9.213926755375024]
The creation of completely synthetic images presents a unique challenge.
There is often a large gap between experimental results on benchmark datasets and the performance of methods in the wild.
This paper introduces a benchmarking framework that integrates several state-of-the-art SID models.
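Conceptually, such a harness loops every detector over every dataset and reports common metrics. The interfaces below are hypothetical stand-ins, not SIDBench's actual API.
```python
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

def run_benchmark(detectors, datasets):
    """detectors: {name: fn(images) -> scores}; datasets: {name: (images, labels)}."""
    table = {}
    for det_name, score_fn in detectors.items():
        for ds_name, (images, labels) in datasets.items():
            scores = score_fn(images)              # per-image P(synthetic)
            preds = [s > 0.5 for s in scores]
            table[(det_name, ds_name)] = {
                "auc": roc_auc_score(labels, scores),
                "bal_acc": balanced_accuracy_score(labels, preds),
            }
    return table
```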
arXiv Detail & Related papers (2024-04-29T09:50:16Z)
- ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce generative models as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diverse backgrounds, textures, and materials than any prior work; we term this benchmark ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z)
- Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection [13.840950434728533]
State-of-the-art Synthetic Image Detection (SID) research has produced strong evidence of the advantages of extracting features from foundation models.
We leverage the image representations extracted by intermediate Transformer blocks of CLIP's image-encoder via a lightweight network.
Our method is compared against the state-of-the-art by evaluating it on 20 test datasets and exhibits an average +10.6% absolute performance improvement.
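A minimal sketch of reading intermediate Transformer-block representations from CLIP's image encoder with Hugging Face transformers follows; the CLS-token pooling is a simplified stand-in for the paper's lightweight network.
```python
import torch
from transformers import CLIPVisionModel, CLIPImageProcessor

model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def intermediate_features(images):
    inputs = processor(images=images, return_tensors="pt")
    out = model(**inputs, output_hidden_states=True)
    # hidden_states: tuple of (num_layers + 1) tensors, each (N, tokens, dim).
    # Keep the CLS token from every Transformer block, skipping the embeddings.
    cls_per_block = [h[:, 0] for h in out.hidden_states[1:]]
    return torch.stack(cls_per_block, dim=1)   # (N, layers, dim)

# feats = intermediate_features(pil_images)   # feed to a lightweight classifier
```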
arXiv Detail & Related papers (2024-02-29T12:18:43Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that enables the generation of highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- On quantifying and improving realism of images generated with diffusion [50.37578424163951]
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS is easily usable as a measure to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), DALL-E 2, Midjourney, and BigGAN.
Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
arXiv Detail & Related papers (2023-09-26T08:32:55Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
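One concrete instance of these design choices is the FID recipe: represent each image as a feature vector, then compute the Fréchet distance between Gaussians fitted to the real and synthetic feature sets. A sketch, assuming features are already extracted:
```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """feats_*: (num_samples, feature_dim) arrays from a chosen representation."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```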
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Explore the Power of Synthetic Data on Few-shot Object Detection [27.26215175101865]
Few-shot object detection (FSOD) aims to expand an object detector for novel categories given only a few instances for training.
Recent text-to-image generation models have shown promising results in generating high-quality images.
This work extensively studies how synthetic images generated from state-of-the-art text-to-image generators benefit FSOD tasks.
arXiv Detail & Related papers (2023-03-23T12:34:52Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes gradients semantic-aware, enabling the synthesis of plausible images.
We show that our method also applies to text-to-image generation when built on image-text foundation models.
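The bare mechanism behind classifier-driven synthesis can be sketched as gradient ascent on a frozen classifier's class logit; the paper's mask-based module is not reproduced here, and the hyperparameters below are illustrative only.
```python
import torch
import torchvision.models as tvm

model = tvm.resnet50(weights=tvm.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 207                       # 'golden retriever' in ImageNet
x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    # Maximize the target logit; a small L2 prior keeps pixel values bounded.
    loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
    loss.backward()
    opt.step()
```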
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the strengths and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective on an auxiliary prediction task.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Generative Zero-shot Network Quantization [41.75769117366117]
Convolutional neural networks are able to learn realistic image priors from numerous training samples in low-level image generation and restoration.
We show that, for high-level image recognition tasks, we can further reconstruct "realistic" images of each category by leveraging intrinsic Batch Normalization (BN) statistics without any training data.
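A hedged sketch of this BN-statistics inversion idea (in the spirit of DeepInversion, not necessarily the paper's exact procedure): optimize random inputs so that each BN layer's batch statistics match its stored running mean and variance. Hyperparameters are illustrative.
```python
import torch
import torch.nn as nn
import torchvision.models as tvm

model = tvm.resnet18(weights=tvm.ResNet18_Weights.DEFAULT).eval()
bn_losses = []

def bn_hook(module, inputs, _output):
    # Match the batch statistics of the BN input to the stored running stats.
    x = inputs[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    bn_losses.append(((mean - module.running_mean) ** 2).sum()
                     + ((var - module.running_var) ** 2).sum())

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

x = torch.randn(8, 3, 224, 224, requires_grad=True)  # no training data needed
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(100):
    bn_losses.clear()
    opt.zero_grad()
    model(x)
    torch.stack(bn_losses).sum().backward()
    opt.step()
```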
arXiv Detail & Related papers (2021-01-21T04:10:04Z)