Characteristics and prevalence of fake social media profiles with AI-generated faces
- URL: http://arxiv.org/abs/2401.02627v2
- Date: Thu, 4 Jul 2024 00:30:41 GMT
- Title: Characteristics and prevalence of fake social media profiles with AI-generated faces
- Authors: Kai-Cheng Yang, Danishjeet Singh, Filippo Menczer
- Abstract summary: Recent advancements in generative artificial intelligence (AI) have raised concerns about their potential to create convincing fake social media accounts.
We present a systematic analysis of Twitter accounts using human faces generated by Generative Adversarial Networks (GANs) for their profile pictures.
We show that they are used to spread scams, spam, and amplify coordinated messages, among other inauthentic activities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in generative artificial intelligence (AI) have raised concerns about their potential to create convincing fake social media accounts, but empirical evidence is lacking. In this paper, we present a systematic analysis of Twitter (X) accounts using human faces generated by Generative Adversarial Networks (GANs) for their profile pictures. We present a dataset of 1,420 such accounts and show that they are used to spread scams, spam, and amplify coordinated messages, among other inauthentic activities. Leveraging a feature of GAN-generated faces -- consistent eye placement -- and supplementing it with human annotation, we devise an effective method for identifying GAN-generated profiles in the wild. Applying this method to a random sample of active Twitter users, we estimate a lower bound for the prevalence of profiles using GAN-generated faces between 0.021% and 0.044% -- around 10K daily active accounts. These findings underscore the emerging threats posed by multimodal generative AI. We release the source code of our detection method and the data we collect to facilitate further investigation. Additionally, we provide practical heuristics to assist social media users in recognizing such accounts.
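The eye-placement cue described in the abstract can be sketched as follows. The canonical coordinates and threshold below are illustrative placeholders, not the paper's calibrated values, and a real pipeline would obtain eye centers from a facial-landmark detector rather than hard-coded tuples:

```python
import math

# Illustrative eye-center coordinates for a 1024x1024 GAN output;
# the actual canonical values would be calibrated on known GAN images.
CANONICAL_LEFT_EYE = (385, 480)
CANONICAL_RIGHT_EYE = (640, 480)

def eye_alignment_score(left_eye, right_eye):
    """Mean pixel distance between detected eye centers and the
    canonical positions shared by GAN-generated faces."""
    d_left = math.dist(left_eye, CANONICAL_LEFT_EYE)
    d_right = math.dist(right_eye, CANONICAL_RIGHT_EYE)
    return (d_left + d_right) / 2

def looks_gan_generated(left_eye, right_eye, threshold=15.0):
    # Real photos place eyes almost anywhere in the frame; GAN outputs
    # cluster tightly around the canonical coordinates.
    return eye_alignment_score(left_eye, right_eye) < threshold
```

In practice such a cheap geometric filter would only shortlist candidates, which is why the paper supplements it with human annotation.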
Related papers
- Evolving from Single-modal to Multi-modal Facial Deepfake Detection: A Survey
As AI-generated media become more realistic, the risk of misuse to spread misinformation and commit identity fraud increases.
This work traces the evolution from traditional single-modality methods to sophisticated multi-modal approaches that handle audio-visual and text-visual scenarios.
To our knowledge, this is the first survey of its kind.
arXiv Detail & Related papers (2024-06-11T05:48:04Z)
- AI-Face: A Million-Scale Demographically Annotated AI-Generated Face Dataset and Fairness Benchmark
We introduce the AI-Face dataset, the first million-scale demographically annotated AI-generated face image dataset.
Based on this dataset, we conduct the first comprehensive fairness benchmark to assess various AI face detectors.
arXiv Detail & Related papers (2024-06-02T15:51:33Z)
- AI-Generated Faces in the Real World: A Large-Scale Case Study of Twitter Profile Images
We conduct the first large-scale investigation of the prevalence of AI-generated profile pictures on Twitter.
Our analysis of nearly 15 million Twitter profile pictures shows that 0.052% were artificially generated, confirming their notable presence on the platform.
The results also reveal several motives, including spamming and political amplification campaigns.
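As an illustration of how a prevalence figure like 0.052% can be reported with uncertainty, here is a standard Wilson score interval for a proportion; the sample counts in the usage line are invented for the example, not taken from the paper:

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical counts: 7,800 flagged profiles out of 15 million sampled.
lo, hi = wilson_interval(7800, 15_000_000)
```

The Wilson interval behaves better than the naive normal approximation when the proportion is this close to zero.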
arXiv Detail & Related papers (2024-04-22T14:57:17Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although there have been a number of publicly available face forgery datasets, the forgery faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - My Face My Choice: Privacy Enhancing Deepfakes for Social Media
Anonymization [4.725675279167593]
We introduce three face access models in a hypothetical social network, where the user has the power to only appear in photos they approve.
Our approach eclipses current tagging systems and replaces unapproved faces with quantitatively dissimilar deepfakes.
Running seven SOTA face recognizers on our results, MFMC reduces the average accuracy by 61%.
arXiv Detail & Related papers (2022-11-02T17:58:20Z) - Detecting fake accounts through Generative Adversarial Network in online
social media [0.0]
This paper proposes a novel method that combines user-similarity measures with a Generative Adversarial Network (GAN) to identify fake user accounts in a Twitter dataset.
Despite the problem's complexity, the method achieves an AUC of 80% in classifying and detecting fake accounts.
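The reported AUC can be computed directly from classifier scores as the probability that a randomly chosen fake account outranks a randomly chosen genuine one. This is a generic sketch of that rank-based definition, not the paper's evaluation code:

```python
def auc(pos_scores, neg_scores):
    # Probability that a random positive score exceeds a random negative
    # score, counting ties as half a win (the Mann-Whitney formulation).
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

This O(n*m) pairwise version is fine for illustration; production metrics libraries use a sort-based O(n log n) computation instead.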
arXiv Detail & Related papers (2022-10-25T10:20:27Z) - Open-Eye: An Open Platform to Study Human Performance on Identifying
AI-Synthesized Faces [51.56417104929796]
We develop an online platform called Open-Eye to study human performance in detecting AI-synthesized faces.
We describe the design and workflow of Open-Eye in this paper.
arXiv Detail & Related papers (2022-05-13T14:30:59Z) - Generating Master Faces for Dictionary Attacks with a Network-Assisted
Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity-authentication for a large portion of the population.
We optimize these faces using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator.
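A minimal sketch of this kind of evolutionary search in a latent space, using a toy 2-D "latent" and a made-up fitness function in place of StyleGAN and a face matcher (both names here are placeholders, not the paper's components):

```python
import random

random.seed(0)

def fitness(z):
    # Hypothetical stand-in for "fraction of the population this latent's
    # face would match"; peaked at the origin purely for illustration.
    return 1.0 / (1.0 + z[0] ** 2 + z[1] ** 2)

def evolve(generations=200, pop_size=20, sigma=0.5):
    """(1 + lambda) evolution strategy over a 2-D latent vector."""
    parent = [random.gauss(0, 1) for _ in range(2)]
    for _ in range(generations):
        # Mutate the parent with Gaussian noise to form offspring.
        offspring = [
            [p + random.gauss(0, sigma) for p in parent]
            for _ in range(pop_size)
        ]
        # Elitist selection: keep the best of parent and offspring.
        parent = max(offspring + [parent], key=fitness)
    return parent

best = evolve()
```

The appeal of black-box evolution here is that the face matcher only needs to return a score; no gradients through the generator are required.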
arXiv Detail & Related papers (2021-08-01T12:55:23Z)
- Face Forensics in the Wild
We construct a novel large-scale dataset, called FFIW-10K, which comprises 10,000 high-quality forgery videos.
The manipulation procedure is fully automatic, controlled by a domain-adversarial quality assessment network.
In addition, we propose a novel algorithm to tackle the task of multi-person face forgery detection.
arXiv Detail & Related papers (2021-03-30T05:06:19Z)
- Responsible Disclosure of Generative Models Using Scalable Fingerprinting
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deepfakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models by allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
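As a loose analogy for embedding a recoverable fingerprint, here is a toy least-significant-bit scheme over a flat list of pixel values. The papers above embed fingerprints in training data or model weights so that every generated image carries them; this LSB scheme is only an illustration of the embed-then-attribute idea:

```python
def embed_fingerprint(pixels, bits):
    """Write each fingerprint bit into the LSB of one pixel."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract_fingerprint(pixels, n_bits):
    """Read the fingerprint back from the first n_bits pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

# Toy 8-pixel "image" and an 8-bit fingerprint.
image = [200, 13, 77, 154, 9, 250, 31, 64]
fp = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_fingerprint(image, fp)
```

Each pixel changes by at most one intensity level, which mirrors the papers' requirement that the fingerprint be imperceptible while remaining machine-recoverable.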
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.