REVEAL -- Reasoning and Evaluation of Visual Evidence through Aligned Language
- URL: http://arxiv.org/abs/2508.12543v2
- Date: Mon, 08 Sep 2025 07:14:44 GMT
- Title: REVEAL -- Reasoning and Evaluation of Visual Evidence through Aligned Language
- Authors: Ipsita Praharaj, Yukta Butala, Badrikanath Praharaj, Yash Butala
- Abstract summary: We frame this problem of forgery detection as a prompt-driven visual reasoning task, leveraging the semantic alignment capabilities of large vision-language models. We propose two approaches: (1) Holistic Scene-level Evaluation, which relies on the physics, semantics, perspective, and realism of the image as a whole, and (2) Region-wise Anomaly Detection, which splits the image into multiple regions and analyzes each of them.
- Score: 0.1388281922732496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of generative models has intensified the challenge of detecting and interpreting visual forgeries, necessitating robust frameworks for image forgery detection that provide reasoning as well as localization. While existing works approach this problem using supervised training for specific manipulations or anomaly detection in the embedding space, generalization across domains remains a challenge. We frame forgery detection as a prompt-driven visual reasoning task, leveraging the semantic alignment capabilities of large vision-language models. We propose a framework, `REVEAL` (Reasoning and Evaluation of Visual Evidence through Aligned Language), that incorporates generalized guidelines. We propose two complementary approaches: (1) Holistic Scene-level Evaluation, which relies on the physics, semantics, perspective, and realism of the image as a whole, and (2) Region-wise Anomaly Detection, which splits the image into multiple regions and analyzes each of them. We conduct experiments over datasets from different domains (Photoshop, DeepFake, and AIGC editing). We compare Vision Language Models against competitive baselines and analyze the reasoning they provide.
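The region-wise approach described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the 3x3 grid size, the prompt wording, and the `query_vlm` callable (a stand-in for whichever vision-language model is queried, assumed to return a float anomaly score) are all hypothetical, and the image object is assumed to expose a PIL-style `size`/`crop` interface.

```python
# Hedged sketch of region-wise anomaly detection: split the image into a
# grid of regions and ask a VLM, per region, whether it shows signs of
# manipulation. Grid size and prompt wording are illustrative assumptions.

def grid_regions(width, height, rows=3, cols=3):
    """Split a width x height bounding box into a rows x cols grid of crop boxes."""
    boxes = []
    for r in range(rows):
        for c in range(cols):
            left = c * width // cols
            top = r * height // rows
            right = (c + 1) * width // cols
            bottom = (r + 1) * height // rows
            boxes.append((left, top, right, bottom))
    return boxes

def region_wise_anomaly_scores(image, query_vlm, rows=3, cols=3):
    """Query a VLM per region; `query_vlm` is a hypothetical stand-in."""
    width, height = image.size  # PIL-style interface assumed
    prompt = ("Does this image region contain physical, semantic, or "
              "perspective inconsistencies suggesting manipulation? "
              "Answer with a score in [0, 1].")
    scores = {}
    for box in grid_regions(width, height, rows, cols):
        crop = image.crop(box)
        scores[box] = query_vlm(crop, prompt)  # assumed to return a float
    return scores
```

A real pipeline would aggregate the per-region scores (e.g., max or mean) into an image-level forgery verdict and keep the per-region reasoning text for localization.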
Related papers
- RegionReasoner: Region-Grounded Multi-Round Visual Reasoning [69.75509909581133]
RegionReasoner is a reinforcement learning framework for visual reasoning.
It enforces grounded reasoning by requiring each reasoning trace to explicitly cite the corresponding reference bounding boxes.
RegionReasoner is optimized with structured rewards combining grounding fidelity and global-local semantic alignment.
arXiv Detail & Related papers (2026-02-03T16:52:16Z)
- Vision Language Models Are Not (Yet) Spelling Correctors [0.742779257315787]
Spelling correction from visual input poses unique challenges for vision language models (VLMs).
We present ReViCo, the first benchmark that systematically evaluates VLMs on real-world visual spelling correction across Chinese and English.
arXiv Detail & Related papers (2025-09-22T07:10:42Z)
- Towards Understanding Visual Grounding in Visual Language Models [2.553589584067239]
Visual grounding refers to the ability of a model to identify a region within some visual input that matches a textual description.
We review works across the key areas of research on modern general-purpose vision language models (VLMs).
arXiv Detail & Related papers (2025-09-12T15:33:49Z)
- A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning [9.786907179872815]
The potential of vision and language remains underexplored in face forgery detection.
There is a need for a methodology that converts face forgery detection to a Visual Question Answering (VQA) task.
To address this gap, we propose a multi-staged approach that diverges from the traditional binary decision paradigm.
arXiv Detail & Related papers (2024-10-01T08:16:40Z)
- WIDIn: Wording Image for Domain-Invariant Representation in Single-Source Domain Generalization [63.98650220772378]
We present WIDIn, Wording Images for Domain-Invariant representation, to disentangle discriminative visual representation.
We first estimate the language embedding with fine-grained alignment, which can be used to adaptively identify and then remove the domain-specific counterpart.
We show that WIDIn can be applied to both pretrained vision-language models like CLIP, and separately trained uni-modal models like MoCo and BERT.
arXiv Detail & Related papers (2024-05-28T17:46:27Z)
- Dual-Image Enhanced CLIP for Zero-Shot Anomaly Detection [58.228940066769596]
We introduce a Dual-Image Enhanced CLIP approach, leveraging a joint vision-language scoring system.
Our methods process pairs of images, utilizing each as a visual reference for the other, thereby enriching the inference process with visual context.
Our approach exploits the potential of joint vision-language anomaly detection and achieves performance comparable with current SOTA methods across various datasets.
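The dual-image idea above, where each image in a pair serves as a visual reference for the other, can be reduced to a minimal scoring sketch. This assumes precomputed CLIP-style embedding vectors; the actual method also uses text prompts and a joint vision-language scoring system, both omitted here.

```python
# Hedged sketch of reference-based anomaly scoring: an image is scored by
# how far its embedding drifts from its paired reference image's embedding.
# Embeddings are assumed to come from a CLIP-style encoder (not shown).

import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def dual_image_anomaly_score(emb_query, emb_reference):
    """Higher score = more anomalous relative to the paired reference image."""
    return 1.0 - cosine_similarity(emb_query, emb_reference)
```

Identical embeddings score 0.0; orthogonal embeddings score 1.0. Scoring each image of a pair against the other enriches inference with visual context, as the summary describes.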
arXiv Detail & Related papers (2024-05-08T03:13:20Z)
- Visual Analytics for Efficient Image Exploration and User-Guided Image Captioning [35.47078178526536]
Recent advancements in pre-trained large-scale language-image models have ushered in a new era of visual comprehension.
This paper tackles two well-known issues within the realm of visual analytics: (1) the efficient exploration of large-scale image datasets and identification of potential data biases within them; (2) the evaluation of image captions and steering of their generation process.
arXiv Detail & Related papers (2023-11-02T06:21:35Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Advancing Visual Grounding with Scene Knowledge: Benchmark and Method [74.72663425217522]
Visual grounding (VG) aims to establish fine-grained alignment between vision and language.
Most existing VG datasets are constructed using simple description texts.
We propose a novel benchmark of Scene Knowledge-guided Visual Grounding.
arXiv Detail & Related papers (2023-07-21T13:06:02Z)
- Unveiling the Unseen: A Comprehensive Survey on Explainable Anomaly Detection in Images and Videos [49.07140708026425]
Anomaly detection and localization in visual data, including images and videos, are crucial in machine learning and real-world applications.
This paper provides the first comprehensive survey focused specifically on explainable 2D visual anomaly detection (X-VAD).
We present a literature review of explainable methods, categorized by their underlying techniques.
We discuss promising future directions and open problems, including quantifying explanation quality.
arXiv Detail & Related papers (2023-02-13T20:17:41Z)
- MAF: Multimodal Alignment Framework for Weakly-Supervised Phrase Grounding [74.33171794972688]
We present algorithms to model phrase-object relevance by leveraging fine-grained visual representations and visually-aware language representations.
Experiments conducted on the widely-adopted Flickr30k dataset show a significant improvement over existing weakly-supervised methods.
arXiv Detail & Related papers (2020-10-12T00:43:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.