How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth
- URL: http://arxiv.org/abs/2403.12838v1
- Date: Tue, 19 Mar 2024 15:43:16 GMT
- Title: How Spammers and Scammers Leverage AI-Generated Images on Facebook for Audience Growth
- Authors: Renee DiResta, Josh A. Goldstein
- Abstract summary: We show that spammers and scammers are already using AI-generated images to gain significant traction on Facebook.
At times, the Facebook Feed is recommending unlabeled AI-generated images to users who neither follow the Pages posting the images nor realize that the images are AI-generated.
- Score: 0.9865722130817715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers - seemingly motivated by profit or clout, not ideology - are already using AI-generated images to gain significant traction on Facebook. At times, the Facebook Feed is recommending unlabeled AI-generated images to users who neither follow the Pages posting the images nor realize that the images are AI-generated, highlighting the need for improved transparency and provenance standards as AI models proliferate.
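As a rough illustration of the provenance checks the abstract calls for, the sketch below scans an image file's raw bytes for common AI-provenance markers (the IPTC "trainedAlgorithmicMedia" digital source type, a C2PA manifest string, or generator names). The marker list, file name, and byte-scan approach are assumptions for illustration; the paper itself proposes no such tool.

```python
# Heuristic sketch (not from the paper): scan an image file's raw bytes for
# common AI-provenance markers. A real provenance check would parse C2PA/XMP
# metadata properly; this only illustrates the idea of machine-readable labels.
from pathlib import Path

# Illustrative, non-exhaustive marker list.
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # Content Credentials / C2PA manifest marker string
    b"DALL-E",
    b"Midjourney",
    b"Stable Diffusion",
]


def has_ai_provenance_marker(image_path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file bytes."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    path = "example_feed_image.jpg"  # hypothetical file name
    if Path(path).exists():
        found = has_ai_provenance_marker(path)
        print(path, "->", "provenance marker found" if found else "no marker found")
```

Absence of a marker proves nothing: unlabeled AI-generated images circulating in the Feed are exactly the gap the paper documents.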
Related papers
- Could AI Trace and Explain the Origins of AI-Generated Images and Text? [53.11173194293537]
AI-generated content is increasingly prevalent in the real world.
Adversaries might exploit large multimodal models to create images that violate ethical or legal standards.
Paper reviewers may misuse large language models to generate reviews without genuine intellectual effort.
arXiv Detail & Related papers (2025-04-05T20:51:54Z) - Threats and Opportunities in AI-generated Images for Armed Forces [0.0]
Recent advancements in generative Artificial Intelligence (AI) for image synthesis give rise to several new challenges for armed forces.
The objective of this report is to investigate the role of AI-generated images for armed forces and provide an overview of opportunities and threats.
arXiv Detail & Related papers (2025-03-31T13:46:02Z) - DejAIvu: Identifying and Explaining AI Art on the Web in Real-Time with Saliency Maps [0.0]
We introduce DejAIvu, a Chrome Web extension that combines real-time AI-generated image detection with saliency-based explainability.
Our approach integrates efficient in-browser inference, gradient-based saliency analysis, and a seamless user experience, ensuring that AI detection is both transparent and interpretable (a minimal gradient-saliency sketch appears after this list).
arXiv Detail & Related papers (2025-02-12T22:24:49Z) - MiRAGeNews: Multimodal Realistic AI-Generated News Detection [45.067211436589126]
We propose the MiRAGeNews dataset to combat the spread of AI-generated fake news.
Our dataset poses a significant detection challenge to humans.
We train a multi-modal detector that improves by +5.1% F-1 over state-of-the-art baselines.
arXiv Detail & Related papers (2024-10-11T17:58:02Z) - AI-rays: Exploring Bias in the Gaze of AI Through a Multimodal Interactive Installation [7.939652622988465]
We introduce AI-rays, an interactive installation where AI generates speculative identities from participants' appearance.
It uses speculative X-ray visions to contrast reality with AI-generated assumptions, metaphorically highlighting AI's scrutiny and biases.
arXiv Detail & Related papers (2024-10-03T18:44:05Z) - A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on the Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns (see the noise-residual sketch after this list).
arXiv Detail & Related papers (2024-06-27T17:59:49Z) - Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z) - Finding AI-Generated Faces in the Wild [9.390562437823078]
We focus on the narrower task of distinguishing a real face from an AI-generated face.
This is particularly applicable when tackling inauthentic online accounts with a fake user profile photo.
We show that by focusing only on faces, a more resilient and general-purpose artifact can be detected.
arXiv Detail & Related papers (2023-11-14T22:46:01Z) - AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z) - BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z) - Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - Open-Eye: An Open Platform to Study Human Performance on Identifying AI-Synthesized Faces [51.56417104929796]
We develop an online platform called Open-eye to study the human performance of AI-synthesized faces detection.
We describe the design and workflow of the Open-eye in this paper.
arXiv Detail & Related papers (2022-05-13T14:30:59Z) - Explainable AI for Natural Adversarial Images [4.387699521196243]
Humans tend to assume that the AI's decision process mirrors their own.
Here we evaluate if methods from explainable AI can disrupt this assumption to help participants predict AI classifications for adversarial and standard images.
We find that both saliency maps and examples facilitate catching AI errors, but their effects are not additive, and saliency maps are more effective than examples.
arXiv Detail & Related papers (2021-06-16T20:19:04Z)
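As referenced in the DejAIvu entry above, gradient-based saliency for an "AI-generated vs. real" image classifier can be sketched in a few lines. The sketch below assumes a placeholder PyTorch model; DejAIvu's actual detector, preprocessing, and in-browser pipeline are not reproduced here.

```python
# Minimal gradient-based saliency sketch for a binary "real vs. AI-generated"
# image classifier. The classifier is a stand-in, not DejAIvu's model.
import torch
import torch.nn as nn


class TinyDetector(nn.Module):
    """Placeholder detector: outputs logits for [real, ai_generated]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.net(x)


def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int = 1) -> torch.Tensor:
    """Return |d logit_target / d pixel|, max over channels -> (H, W) map."""
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))          # shape (1, 2)
    logits[0, target_class].backward()          # gradient of the "AI-generated" logit
    return image.grad.abs().max(dim=0).values   # (H, W) saliency


if __name__ == "__main__":
    model = TinyDetector()
    img = torch.rand(3, 64, 64)                 # random image stand-in
    print(saliency_map(model, img).shape)       # torch.Size([64, 64])
```

The map is simply the absolute input gradient of the "AI-generated" logit, the most basic member of the family of saliency methods an explainability layer like DejAIvu's builds on.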
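Similarly, as referenced in the AIDE entry above, "hybrid features" can be illustrated with a Gaussian high-pass residual standing in for a noise expert and crude image statistics standing in for a visual expert. All names and choices below are illustrative assumptions, not AIDE's actual design.

```python
# Sketch of "hybrid" features for AI-image detection: a high-pass noise
# residual plus simple visual statistics, concatenated into one vector.
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """High-pass residual: image minus its Gaussian-blurred copy (per channel)."""
    img = image.astype(np.float32)
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blur H and W, not channels
    return img - blurred


def hybrid_features(image: np.ndarray) -> np.ndarray:
    """Concatenate crude 'visual' stats with noise-residual stats."""
    residual = noise_residual(image)
    visual_stats = np.array([image.mean(), image.std()])               # placeholder visual expert
    noise_stats = np.array([residual.std(), np.abs(residual).mean()])  # placeholder noise expert
    return np.concatenate([visual_stats, noise_stats])


if __name__ == "__main__":
    img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # random image stand-in
    print(hybrid_features(img))                               # 4-dimensional feature vector
```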
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.