Towards Exploring Fairness in Visual Transformer based Natural and GAN Image Detection Systems
- URL: http://arxiv.org/abs/2310.12076v2
- Date: Sat, 16 Nov 2024 17:40:06 GMT
- Title: Towards Exploring Fairness in Visual Transformer based Natural and GAN Image Detection Systems
- Authors: Manjary P. Gangan, Anoop Kadan, Lajish V L
- Abstract summary: This study explores bias in visual transformer-based image forensic algorithms that classify natural and GAN images.
The proposed study procures bias evaluation corpora to analyze bias in gender, racial, affective, and intersectional domains.
It also analyzes the impact of image compression on model bias.
- Abstract: Image forensics research has recently witnessed significant advances in computational models that accurately distinguish natural images captured by cameras from GAN-generated images. However, it is equally important to ensure that these models are fair and do not produce biased outcomes that could harm certain societal groups or pose serious security threats. Exploring fairness in image forensic algorithms is an initial step towards mitigating such biases. This study explores bias in visual transformer-based image forensic algorithms that classify natural and GAN images, since visual transformers are now widely used in image classification tasks, including image forensics. The study procures bias evaluation corpora to analyze bias in the gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. Since robustness against image compression is an important factor in forensic tasks, the study also analyzes the impact of image compression on model bias, following a two-phase evaluation setting in which experiments are carried out in uncompressed and compressed settings. The study identifies biases in the visual transformer-based models distinguishing natural and GAN images, and observes that image compression impacts model biases, predominantly amplifying biases in class GAN predictions.
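As a rough illustration of the kind of pairwise bias evaluation the abstract describes, here is a minimal sketch. The group names, the accuracy-gap metric, the `model.predict` interface, and the JPEG quality are illustrative assumptions, not the paper's exact protocol.

```python
# A minimal sketch of a pairwise bias evaluation for a natural-vs-GAN
# classifier, with a two-phase uncompressed/compressed setting.
# Group names, the gap metric, and quality=75 are assumptions.
import io
from itertools import combinations
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode an image as JPEG to simulate the compressed setting."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def group_accuracy(model, samples):
    """samples: list of (PIL image, label) with labels in {0: natural, 1: GAN}."""
    correct = sum(1 for img, y in samples if model.predict(img) == y)
    return correct / len(samples)

def pairwise_bias(model, corpora, compress=False):
    """corpora: dict mapping a group name (e.g. 'male', 'female') to samples.
    Returns the absolute accuracy gap for every pair of groups."""
    accs = {}
    for group, samples in corpora.items():
        if compress:
            samples = [(jpeg_compress(img), y) for img, y in samples]
        accs[group] = group_accuracy(model, samples)
    return {(a, b): abs(accs[a] - accs[b]) for a, b in combinations(accs, 2)}

# Two-phase evaluation: compare gaps before and after compression.
# gaps_uncompressed = pairwise_bias(model, corpora, compress=False)
# gaps_compressed   = pairwise_bias(model, corpora, compress=True)
```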
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- On the exploitation of DCT statistics for cropping detectors [5.039808715733204]
In this work, we investigated a novel image resolution classifier that employs DCT statistics with the goal of detecting the original resolution of images.
The results demonstrate the classifier's reliability in distinguishing cropped from non-cropped images, providing a dependable estimate of their original resolution.
This work opens new perspectives in the field, with potential to transform image analysis and usage across multiple domains.
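A minimal sketch of block-DCT feature extraction, the kind of statistics such a resolution/cropping classifier might consume; the 8x8 block size and the mean/std summary statistics are illustrative assumptions.

```python
# A minimal sketch of 8x8 block-DCT feature extraction.
# The block size and summary statistics are assumptions.
import numpy as np
from scipy.fftpack import dct

def block_dct_stats(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """gray: 2-D float array (grayscale image). Returns per-frequency
    mean and std of DCT coefficients over all non-overlapping blocks."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
    # 2-D type-II DCT applied separably to each block.
    coeffs = dct(dct(blocks, axis=1, norm="ortho"), axis=2, norm="ortho")
    coeffs = coeffs.reshape(len(coeffs), -1)
    return np.concatenate([coeffs.mean(axis=0), coeffs.std(axis=0)])
```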
arXiv Detail & Related papers (2024-03-21T19:05:31Z)
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance, in part by promoting fairness to some degree in image classification.
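A minimal sketch of measuring this kind of per-class prediction bias as the spread of class-wise recall; the recall-gap metric and variable names are illustrative assumptions.

```python
# A minimal sketch of per-class prediction bias: the gap between the
# best- and worst-recognized classes. The metric choice is an assumption.
import numpy as np

def classwise_recall(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    return {c: float(np.mean(y_pred[y_true == c] == c))
            for c in np.unique(y_true)}

def fairness_gap(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    recalls = classwise_recall(y_true, y_pred)
    return max(recalls.values()) - min(recalls.values())
```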
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Towards objective and systematic evaluation of bias in artificial intelligence for medical imaging [2.0890189482817165]
We introduce a novel analysis framework for investigating the impact of biases in medical images on AI models.
We developed and tested this framework for conducting controlled in silico trials to assess bias in medical imaging AI.
arXiv Detail & Related papers (2023-11-03T01:37:28Z)
- A Robust Image Forensic Framework Utilizing Multi-Colorspace Enriched Vision Transformer for Distinguishing Natural and Computer-Generated Images [0.0]
We propose a robust forensic classifier framework leveraging enriched vision transformers to distinguish between natural and generated images.
Our approach outperforms baselines, demonstrating 94.25% test accuracy with significant performance gains in individual class accuracies.
This work advances the state-of-the-art in image forensics by providing a generalized and resilient solution to distinguish between natural and generated images.
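A minimal sketch of the multi-colorspace idea: represent each image in several colorspaces and fuse per-colorspace predictions. The colorspace set, the per-colorspace branch callables, and the average fusion rule are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of multi-colorspace input enrichment with logit fusion.
# The colorspace choices and the averaging rule are assumptions.
import numpy as np
from PIL import Image

COLORSPACES = ("RGB", "YCbCr", "HSV")  # assumed set

def multi_colorspace_views(img: Image.Image):
    return [np.asarray(img.convert(cs), dtype=np.float32) / 255.0
            for cs in COLORSPACES]

def fused_prediction(branches, img: Image.Image):
    """branches: one classifier callable per colorspace returning logits."""
    logits = [b(v) for b, v in zip(branches, multi_colorspace_views(img))]
    return np.mean(logits, axis=0)  # simple average fusion (assumption)
```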
arXiv Detail & Related papers (2023-08-14T17:11:17Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
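A minimal sketch of ensembling over generative views: project a real image into a GAN's latent space, perturb the latent to synthesize nearby views, and average the classifier's predictions. The `encoder`, `generator`, and `classifier` objects, the perturbation scale, and the number of views are illustrative assumptions.

```python
# A minimal sketch of prediction ensembling over GAN-synthesized views.
# The latent-perturbation scheme and averaging rule are assumptions.
import torch

def ensemble_with_views(classifier, encoder, generator, image,
                        n_views: int = 8, sigma: float = 0.1):
    w = encoder(image)                      # latent code of the real image
    preds = [classifier(image)]             # include the original image
    for _ in range(n_views):
        view = generator(w + sigma * torch.randn_like(w))
        preds.append(classifier(view))
    return torch.stack(preds).mean(dim=0)   # averaged class scores
```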
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals [27.539001365348906]
We present a method for generating counterfactuals by incorporating a structural causal model (SCM) into an improved variant of Adversarially Learned Inference (ALI).
We show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer.
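A minimal sketch of what a counterfactual regularizer could look like: penalize the classifier when its prediction changes between an image and its counterfactual (the same image with only a protected attribute altered, e.g. by the SCM-based generator). The squared-difference penalty and the weight `lam` are assumptions, not the paper's exact formulation.

```python
# A minimal sketch of a counterfactual regularization loss.
# x_cf is a counterfactual of x; the penalty form is an assumption.
import torch
import torch.nn.functional as F

def counterfactual_loss(classifier, x, x_cf, y, lam: float = 1.0):
    logits, logits_cf = classifier(x), classifier(x_cf)
    task = F.cross_entropy(logits, y)
    # Regularizer: predictions should be invariant to the counterfactual change.
    reg = F.mse_loss(torch.softmax(logits, dim=-1),
                     torch.softmax(logits_cf, dim=-1))
    return task + lam * reg
```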
arXiv Detail & Related papers (2020-09-17T13:19:31Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling clean images from adversarial perturbations and examining their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
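A minimal sketch of an activation-based bias probe in the spirit of InsideBias: compare how strongly a chosen layer activates, on average, for inputs from different demographic groups. The layer choice and the mean-absolute-activation statistic are illustrative assumptions, not the method's exact definition.

```python
# A minimal sketch of comparing layer activation levels across groups.
# The statistic and ratio-based comparison are assumptions.
import torch

@torch.no_grad()
def mean_activation(model_trunk, images: torch.Tensor) -> float:
    """model_trunk: the network up to the layer being inspected."""
    return model_trunk(images).abs().mean().item()

def activation_ratio(model_trunk, group_a: torch.Tensor, group_b: torch.Tensor):
    """Ratios far from 1.0 suggest the representation favors one group."""
    return mean_activation(model_trunk, group_a) / mean_activation(model_trunk, group_b)
```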
arXiv Detail & Related papers (2020-04-14T15:20:50Z)