Fake-image detection with Robust Hashing
- URL: http://arxiv.org/abs/2102.01313v1
- Date: Tue, 2 Feb 2021 05:10:37 GMT
- Title: Fake-image detection with Robust Hashing
- Authors: Miki Tanaka, Hitoshi Kiya
- Abstract summary: We investigate, for the first time, whether robust hashing can robustly detect fake images even when multiple manipulation techniques are applied to them.
In an experiment, the proposed fake-image detection with robust hashing is demonstrated to outperform a state-of-the-art method on various datasets, including fake images generated with GANs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate, for the first time, whether robust
hashing can robustly detect fake images even when multiple manipulation
techniques, such as JPEG compression, are applied to them. In an experiment,
the proposed fake-image detection with robust hashing is demonstrated to
outperform a state-of-the-art method on various datasets, including fake
images generated with GANs.
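As a rough illustration of this pipeline, the sketch below substitutes a simple *average hash* for the paper's actual robust hashing scheme: hash a trusted reference image and a query image, then flag the query as fake when the Hamming distance between the hashes exceeds a threshold. The hash function, the threshold of 10 bits, and the plain-list image representation are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of hash-based fake-image detection using an average hash
# (a simple perceptual hash). The paper's robust hashing scheme differs;
# this only illustrates the hash-and-compare pipeline.

def average_hash(img, size=8):
    """Hash a grayscale image (2D list of 0-255 ints) into size*size bits."""
    h, w = len(img), len(img[0])
    # Block-average downscale to a size x size grid.
    small = []
    for r in range(size):
        for c in range(size):
            block = [img[y][x]
                     for y in range(r * h // size, (r + 1) * h // size)
                     for x in range(c * w // size, (c + 1) * w // size)]
            small.append(sum(block) / len(block))
    mean = sum(small) / len(small)
    # Each block brighter than the global mean contributes a 1-bit.
    return [1 if v > mean else 0 for v in small]

def hamming(h1, h2):
    """Number of differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_fake(query, reference, threshold=10):
    """Flag `query` as manipulated if its hash drifts too far from `reference`."""
    return hamming(average_hash(query), average_hash(reference)) > threshold
```

With a 32x32 gradient image and a copy whose top-left quadrant has been blanked out, the two hashes differ in dozens of bits, so `looks_fake` fires, while an untouched copy matches exactly. A robust hash would additionally keep the distance small under benign transforms such as JPEG compression.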
Related papers
- Detect Fake with Fake: Leveraging Synthetic Data-driven Representation for Synthetic Image Detection [7.730666100347136]
We show the effectiveness of synthetic data-driven representations for synthetic image detection.
We find that vision transformers trained by the latest visual representation learners with synthetic data can effectively distinguish fake from real images.
arXiv Detail & Related papers (2024-09-13T14:50:14Z)
- FSBI: Deepfakes Detection with Frequency Enhanced Self-Blended Images [17.707379977847026]
This paper introduces a Frequency Enhanced Self-Blended Images approach for deepfakes detection.
The proposed approach has been evaluated on FF++ and Celeb-DF datasets.
arXiv Detail & Related papers (2024-06-12T20:15:00Z)
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Detecting Images Generated by Diffusers [12.986394431694206]
We consider images generated from captions in the MSCOCO and Wikimedia datasets using two state-of-the-art models: Stable Diffusion and GLIDE.
Our experiments show that it is possible to detect the generated images using simple Multi-Layer Perceptrons.
We find that incorporating the associated textual information with the images rarely leads to significant improvement in detection results.
arXiv Detail & Related papers (2023-03-09T14:14:29Z)
- ObjectFormer for Image Manipulation Detection and Localization [118.89882740099137]
We propose ObjectFormer to detect and localize image manipulations.
We extract high-frequency features of the images and combine them with RGB features as multimodal patch embeddings.
We conduct extensive experiments on various datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-03-28T12:27:34Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Effects of Image Compression on Face Image Manipulation Detection: A Case Study on Facial Retouching [14.92708078957906]
The effects of image compression on face image manipulation detection are analyzed.
A case study on facial retouching detection under the influence of image compression is presented.
arXiv Detail & Related papers (2021-03-05T13:28:28Z)
- Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which only accesses the published dataset with low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.