Seeing Through the Blur: Unlocking Defocus Maps for Deepfake Detection
- URL: http://arxiv.org/abs/2509.23289v1
- Date: Sat, 27 Sep 2025 13:02:53 GMT
- Title: Seeing Through the Blur: Unlocking Defocus Maps for Deepfake Detection
- Authors: Minsun Jeon, Simon S. Woo
- Abstract summary: The rapid advancement of generative AI has enabled the mass production of photorealistic synthetic images, blurring the boundary between authentic and fabricated visual content.
We propose a physically interpretable deepfake detection framework and demonstrate that defocus blur can serve as an effective forensic signal.
We aim for our defocus-based detection pipeline and interpretability tools to contribute meaningfully to ongoing research in media forensics.
- Score: 27.079338164502634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of generative AI has enabled the mass production of photorealistic synthetic images, blurring the boundary between authentic and fabricated visual content. This challenge is particularly evident in deepfake scenarios involving facial manipulation, but also extends to broader AI-generated content (AIGC) cases involving fully synthesized scenes. As such content becomes increasingly difficult to distinguish from reality, the integrity of visual media is under threat. To address this issue, we propose a physically interpretable deepfake detection framework and demonstrate that defocus blur can serve as an effective forensic signal. Defocus blur is a depth-dependent optical phenomenon that naturally occurs in camera-captured images due to lens focus and scene geometry. In contrast, synthetic images often lack realistic depth-of-field (DoF) characteristics. To capture these discrepancies, we construct a defocus blur map and use it as a discriminative feature for detecting manipulated content. Unlike RGB textures or frequency-domain signals, defocus blur arises universally from optical imaging principles and encodes physical scene structure. This makes it a robust and generalizable forensic cue. Our approach is supported by three in-depth feature analyses, and experimental results confirm that defocus blur provides a reliable and interpretable cue for identifying synthetic images. We aim for our defocus-based detection pipeline and interpretability tools to contribute meaningfully to ongoing research in media forensics. The implementation is publicly available at: https://github.com/irissun9602/Defocus-Deepfake-Detection
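The paper extracts defocus maps with its own pipeline (see the linked repository); as a rough illustration of the idea, the sketch below estimates a per-pixel blur map with the classic gradient-ratio method (re-blur the image once; the ratio of edge gradient magnitudes reveals local blur, in the style of Zhuo & Sim) and summarizes it into simple statistics that could feed a real-vs-fake classifier. Function names, thresholds, and parameters here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch, not the authors' pipeline: single-image defocus map via
# the gradient-ratio method, plus simple map statistics as features for a
# downstream real-vs-fake classifier.
import cv2
import numpy as np

def grad_mag(img: np.ndarray) -> np.ndarray:
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy)

def defocus_map(gray: np.ndarray, sigma0: float = 1.0, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel blur estimate: re-blur once and invert the edge-gradient
    ratio R = sqrt(sigma^2 + sigma0^2) / sigma  =>  sigma = sigma0 / sqrt(R^2 - 1).
    Only meaningful near edges; flat regions would need propagation/inpainting."""
    reblur = cv2.GaussianBlur(gray, (0, 0), sigma0)
    ratio = np.clip(grad_mag(gray) / (grad_mag(reblur) + eps), 1.0 + eps, None)
    return sigma0 / np.sqrt(ratio**2 - 1.0)

def defocus_features(img_bgr: np.ndarray) -> np.ndarray:
    """Summarize the defocus map; real photos tend to show structured,
    depth-dependent blur, while fully synthetic scenes often do not."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
    edges = grad_mag(gray) > 0.05  # illustrative threshold: keep edge pixels only
    d = defocus_map(gray)[edges]
    return np.array([d.mean(), d.std(), np.median(d),
                     np.percentile(d, 10), np.percentile(d, 90)])
```

These five statistics could be fed to any off-the-shelf classifier (e.g., logistic regression) as a sanity-check baseline; the paper's actual detection pipeline is in the repository linked above.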
Related papers
- Unmasking Facial DeepFakes: A Robust Multiview Detection Framework for Natural Images [1.5049442691806052]
We propose a multi-view architecture that enhances DeepFake detection by analyzing facial features at multiple levels.
Our approach integrates three specialized encoders: a global view encoder for detecting boundary inconsistencies, a middle view encoder for analyzing texture and color alignment, and a local view encoder for capturing distortions in expressive facial regions.
By fusing features from these encoders, our model achieves superior performance in detecting manipulated images, even under challenging pose and lighting conditions.
arXiv Detail & Related papers (2025-10-17T12:16:04Z)
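To make the three-encoder design in the entry above concrete, here is a speculative PyTorch sketch of fusion by embedding concatenation; the encoder bodies, dimensions, and view crops are placeholders, not the paper's architecture.

```python
# Illustrative sketch (not the authors' code): three view-specific encoders
# fused by concatenating their embeddings; each view is assumed pre-cropped.
import torch
import torch.nn as nn

class MultiViewDetector(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        def encoder():  # small CNN stand-in for each specialized encoder
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )
        self.global_enc = encoder()  # boundary inconsistencies (whole face)
        self.middle_enc = encoder()  # texture/color alignment (face regions)
        self.local_enc = encoder()   # expressive regions (eye/mouth crops)
        self.head = nn.Linear(3 * embed_dim, 1)  # real-vs-fake logit

    def forward(self, g, m, l):
        z = torch.cat([self.global_enc(g), self.middle_enc(m),
                       self.local_enc(l)], dim=1)
        return self.head(z)
```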
- Fine-grained Defocus Blur Control for Generative Image Models [66.30016220484394]
Current text-to-image diffusion models excel at generating diverse, high-quality images.
We introduce a novel text-to-image diffusion framework that leverages camera metadata.
Our model enables superior fine-grained control without altering the depicted scene.
arXiv Detail & Related papers (2025-10-07T17:59:15Z)
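One plausible reading of "leverages camera metadata" in the entry above is a small embedding network over lens parameters that conditions the diffusion model; the sketch below assumes (aperture, focal length, focus distance) inputs and is purely illustrative of that pattern, not the paper's mechanism.

```python
# Speculative sketch: embed camera metadata into a conditioning vector,
# e.g. to be added to a diffusion model's timestep embedding.
import torch
import torch.nn as nn

class CameraMetadataEmbed(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, aperture_f: torch.Tensor, focal_mm: torch.Tensor,
                focus_dist_m: torch.Tensor) -> torch.Tensor:
        # Log-scale the physical quantities before embedding (assumption).
        x = torch.stack([aperture_f.log(), focal_mm.log(), focus_dist_m.log()], dim=-1)
        return self.mlp(x)
```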
- Dark Channel-Assisted Depth-from-Defocus from a Single Image [4.005483185111993]
We estimate scene depth from a single defocus-blurred image using the dark channel as a complementary cue.
Our method uses the relationship between local defocus blur and contrast variations as depth cues to improve scene structure estimation.
arXiv Detail & Related papers (2025-06-07T03:49:26Z)
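The dark channel cue used in the entry above is a well-defined operator (He et al.): a per-pixel minimum over color channels followed by a local minimum filter. A minimal NumPy/OpenCV version, with an illustrative patch size:

```python
# Minimal sketch of the dark channel operator; the 15-pixel patch is illustrative.
import cv2
import numpy as np

def dark_channel(img_rgb: np.ndarray, patch: int = 15) -> np.ndarray:
    """Pixel-wise minimum across R, G, B, then a local minimum filter
    (morphological erosion) over a patch x patch window."""
    min_rgb = img_rgb.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)
```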
- Deepfake detection by exploiting surface anomalies: the SurFake approach [29.088218634944116]
This paper investigates how deepfake creation affects the characteristics that the whole scene had at the time of acquisition.
Analyzing the characteristics of the surfaces depicted in an image yields a descriptor that can be used to train a CNN for deepfake detection.
arXiv Detail & Related papers (2023-10-31T16:54:14Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and all-in-focus (AIF) image ground truth and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
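As a rough illustration of the convolutional pooling named in the entry above, the sketch below downsamples a Transformer's token grid with a strided convolution between attention stages; this is a generic construction under my own assumptions, not necessarily the paper's exact module.

```python
# Generic sketch: "convolutional pooling" as a strided conv over the token
# grid, trading sequence length for channel depth between attention stages.
import torch
import torch.nn as nn

class ConvPool(nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.pool = nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=2, padding=1)

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, h*w, C) -> 2D grid -> strided conv -> back to tokens
        b, _, c = tokens.shape
        grid = tokens.transpose(1, 2).reshape(b, c, h, w)
        grid = self.pool(grid)
        return grid.flatten(2).transpose(1, 2)  # (B, h'*w', dim_out)
```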
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims to identify whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named the OnFocus Detection In the Wild (OFDIW) dataset.
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z)
- Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z)
- Focus on defocus: bridging the synthetic to real domain gap for depth estimation [9.023847175654602]
We tackle the issue of closing the synthetic-real domain gap by using domain invariant defocus blur as direct supervision.
We leverage defocus cues by using a permutation invariant convolutional neural network that encourages the network to learn from the differences between images with different points of focus.
We are able to train our model completely on synthetic data and directly apply it to a wide range of real-world images.
arXiv Detail & Related papers (2020-05-19T17:52:37Z)
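Permutation invariance over a focal stack, as in the entry above, is typically obtained with a shared per-slice encoder followed by a symmetric pooling over the stack dimension; a minimal sketch under that assumption (layer sizes are placeholders):

```python
# Illustrative sketch: shared CNN per focal slice + max pooling over the
# stack dimension, so the output is independent of slice order.
import torch
import torch.nn as nn

class FocalStackEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.per_slice = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (B, N, 3, H, W) with N focus settings in arbitrary order
        b, n, c, h, w = stack.shape
        feats = self.per_slice(stack.reshape(b * n, c, h, w))
        feats = feats.reshape(b, n, -1, h, w)
        return feats.max(dim=1).values  # symmetric pooling => order-invariant
```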
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)