Exploring Fairness in Pre-trained Visual Transformer based Natural and GAN Generated Image Detection Systems and Understanding the Impact of Image Compression in Fairness
- URL: http://arxiv.org/abs/2310.12076v1
- Date: Wed, 18 Oct 2023 16:13:22 GMT
- Title: Exploring Fairness in Pre-trained Visual Transformer based Natural and GAN Generated Image Detection Systems and Understanding the Impact of Image Compression in Fairness
- Authors: Manjary P. Gangan, Anoop Kadan, and Lajish V L
- Abstract summary: This study explores bias in transformer-based image forensic algorithms that classify natural and GAN-generated images.
By procuring a bias evaluation corpus, it analyzes bias in the gender, racial, affective, and intersectional domains.
Since generalizability against image compression is an important factor in forensic tasks, the study also analyzes the role of image compression in model bias.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It is not sufficient merely to construct computational models that can accurately distinguish fake images from real images taken with a camera; it is also important to ensure that these models are fair and do not produce biased outcomes that could eventually harm certain social groups or cause serious security threats. Exploring fairness in forensic algorithms is an initial step towards correcting these biases. Since visual transformers have recently been widely used in most image classification tasks due to their capability to produce high accuracies, this study explores bias in transformer-based image forensic algorithms that classify natural and GAN-generated images. By procuring a bias evaluation corpus, the study analyzes bias in the gender, racial, affective, and intersectional domains using a wide set of individual and pairwise bias evaluation measures. As the generalizability of the algorithms against image compression is an important factor in forensic tasks, the study also analyzes the role of image compression in model bias. To study this impact, a two-phase evaluation setting is followed: one set of experiments is carried out in an uncompressed evaluation setting and the other in a compressed evaluation setting.
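To make the evaluation protocol concrete, here is a minimal Python sketch of one plausible reading of the two-phase setting, assuming per-group accuracy as an individual bias measure and absolute accuracy gaps between groups as a pairwise measure; `classify` is a hypothetical stand-in for the transformer-based detector, and JPEG quality 75 is an arbitrary illustrative choice rather than the paper's setting.

```python
# Minimal sketch of a two-phase (uncompressed vs. compressed) bias evaluation.
# The specific measures below are one common choice, not necessarily the paper's.
from io import BytesIO
from itertools import combinations
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode an image as JPEG to simulate the compressed setting."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def classify(img: Image.Image) -> int:
    """Hypothetical detector: 1 = GAN-generated, 0 = natural (camera) image."""
    return 0  # replace with the actual model's prediction

def group_accuracies(samples):
    """samples: list of (PIL image, true_label, group_name) tuples.
    Individual bias measure: accuracy computed per demographic group."""
    correct, total = {}, {}
    for img, label, group in samples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (classify(img) == label)
    return {g: correct[g] / total[g] for g in total}

def pairwise_gaps(acc):
    """Pairwise bias measure: absolute accuracy gap for every pair of groups."""
    return {(a, b): abs(acc[a] - acc[b]) for a, b in combinations(acc, 2)}

def two_phase_evaluation(samples, quality=75):
    uncompressed = group_accuracies(samples)
    compressed = group_accuracies(
        [(jpeg_compress(img, quality), y, g) for img, y, g in samples])
    return {"uncompressed": pairwise_gaps(uncompressed),
            "compressed": pairwise_gaps(compressed)}
```

A caller would pass `(image, label, group)` tuples, where `group` encodes the demographic attribute under audit (e.g., gender or race), and compare the gap dictionaries between the two phases.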
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, driven by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- On the exploitation of DCT statistics for cropping detectors [5.039808715733204]
In this work, we investigate a novel image-resolution classifier that employs DCT statistics to detect the original resolution of images.
The results demonstrate the classifier's reliability in distinguishing between cropped and uncropped images, providing a dependable estimate of their original resolution.
This work opens new perspectives in the field, with the potential to transform image analysis and usage across multiple domains.
arXiv Detail & Related papers (2024-03-21T19:05:31Z)
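The summary does not specify which DCT statistics the classifier consumes; as a hedged sketch, the block below computes per-frequency means and standard deviations of 8x8 block DCT coefficients (the transform underlying JPEG), a common feature choice for this family of detectors.

```python
# Illustrative sketch only: the paper's exact features are not given here.
import numpy as np
from scipy.fft import dctn

def block_dct_stats(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """gray: 2-D grayscale array. Returns the mean and standard deviation of
    each DCT frequency over all non-overlapping 8x8 blocks (128-D vector)."""
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    coeffs = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs.append(dctn(gray[i:i + block, j:j + block], norm="ortho"))
    coeffs = np.stack(coeffs)                       # (n_blocks, 8, 8)
    return np.concatenate([coeffs.mean(0).ravel(),  # 64 per-frequency means
                           coeffs.std(0).ravel()])  # 64 per-frequency stds
```

These statistics could feed any off-the-shelf classifier that predicts the image's original resolution class.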
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and that the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
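As a minimal sketch of this kind of per-class fairness audit (not necessarily the paper's exact protocol), one can compute each class's accuracy and summarize how unevenly accuracy is distributed:

```python
# Per-class accuracy audit: uneven accuracies indicate class-level unfairness.
import numpy as np

def per_class_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    classes = np.unique(y_true)
    return {int(c): float((y_pred[y_true == c] == c).mean()) for c in classes}

def fairness_summary(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    acc = per_class_accuracy(y_true, y_pred)
    vals = np.array(list(acc.values()))
    return {"per_class": acc,
            "worst_class_acc": float(vals.min()),
            "acc_gap": float(vals.max() - vals.min()),  # spread across classes
            "acc_std": float(vals.std())}
```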
- Towards objective and systematic evaluation of bias in artificial intelligence for medical imaging [2.0890189482817165]
We introduce a novel analysis framework for investigating the impact of biases in medical images on AI models.
We developed and tested this framework for conducting controlled in silico trials to assess bias in medical imaging AI.
arXiv Detail & Related papers (2023-11-03T01:37:28Z)
- A Robust Image Forensic Framework Utilizing Multi-Colorspace Enriched Vision Transformer for Distinguishing Natural and Computer-Generated Images [0.0]
We propose a robust forensic classifier framework leveraging enriched vision transformers to distinguish between natural and generated images.
Our approach outperforms baselines, demonstrating 94.25% test accuracy with significant performance gains in individual class accuracies.
This work advances the state of the art in image forensics by providing a generalized and resilient solution for distinguishing between natural and generated images.
arXiv Detail & Related papers (2023-08-14T17:11:17Z)
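The summary leaves the "multi-colorspace enriched" input unspecified; under the assumption that it means presenting the same image in several colorspaces to parallel transformer branches, a sketch of the input construction (with RGB, YCbCr, and HSV as an illustrative choice, not necessarily the paper's) could look like this:

```python
# Hedged sketch: stack several colorspace renderings of the same image so that
# parallel ViT branches (not shown) can each consume one rendering.
import numpy as np
from PIL import Image

COLORSPACES = ("RGB", "YCbCr", "HSV")  # illustrative choice

def multi_colorspace_stack(path: str, size: int = 224) -> np.ndarray:
    img = Image.open(path).convert("RGB").resize((size, size))
    planes = [np.asarray(img.convert(cs), dtype=np.float32) / 255.0
              for cs in COLORSPACES]
    return np.stack(planes)  # (3 colorspaces, size, size, 3 channels)
```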
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are related to the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
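A minimal sketch of the ensembling step, assuming the generative views have already been synthesized offline (e.g., via StyleGAN2 inversion and latent perturbation, which is not shown here): average the classifier's softmax outputs over the original image and its views.

```python
# Ensemble a classifier's predictions over an image and its generative "views".
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(classify_logits, views) -> int:
    """classify_logits: callable mapping an image to class logits.
    views: the original image plus its GAN-synthesized variants.
    Returns the class index after averaging probabilities over all views."""
    probs = np.stack([softmax(classify_logits(v)) for v in views])
    return int(probs.mean(axis=0).argmax())
```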
- Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals [27.539001365348906]
We present a method for generating counterfactuals by incorporating a structural causal model (SCM) in an improved variant of Adversarially Learned Inference (ALI).
We show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer.
arXiv Detail & Related papers (2020-09-17T13:19:31Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling clean images from adversarial perturbations and examining their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)