Scientific Image Tampering Detection Based On Noise Inconsistencies: A
Method And Datasets
- URL: http://arxiv.org/abs/2001.07799v2
- Date: Wed, 4 Mar 2020 20:46:46 GMT
- Title: Scientific Image Tampering Detection Based On Noise Inconsistencies: A
Method And Datasets
- Authors: Ziyue Xiang, Daniel E. Acuna
- Abstract summary: We propose a scientific-image specific tampering detection method based on noise inconsistencies.
We train and test our method on a new dataset of manipulated western blot and microscopy imagery.
- Score: 1.2691047660244335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientific image tampering is a problem that affects not only authors but
also the general perception of the research community. Although previous
researchers have developed methods to identify tampering in natural images,
these methods may not perform well in the scientific setting, as scientific
images differ in statistics, format, quality, and intent. Therefore, we propose
a scientific-image-specific tampering detection method based on noise
inconsistencies, which is capable of learning and generalizing to different
fields of science. We train and test our method on a new dataset of manipulated
western blot and microscopy imagery, which aims to emulate problematic images
in science. The test results show that our method robustly detects various
types of image manipulation in different scenarios and outperforms existing
general-purpose image tampering detection schemes. We discuss applications
beyond these two types of images and suggest next steps for making detection of
problematic images a systematic step in peer review and science in general.
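As a rough, self-contained illustration of the noise-inconsistency idea described in the abstract (not the authors' actual model or dataset), the sketch below estimates a per-block noise level from a denoising residual and flags blocks whose noise statistics deviate strongly from the rest of the image. The block size, the median-filter denoiser, and the outlier threshold are illustrative assumptions.

```python
# Minimal sketch of noise-inconsistency-based tampering detection.
# NOT the paper's method; it only illustrates the general idea that
# spliced or edited regions often carry noise statistics that differ
# from the rest of the image.
import numpy as np
from scipy.ndimage import median_filter

def local_noise_map(gray, block=32):
    """Estimate a robust noise level for each non-overlapping block."""
    residual = gray - median_filter(gray, size=3)  # crude denoising residual
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    sigma = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = residual[i * block:(i + 1) * block,
                             j * block:(j + 1) * block]
            # median absolute deviation, scaled to approximate a std dev
            sigma[i, j] = 1.4826 * np.median(np.abs(patch - np.median(patch)))
    return sigma

def flag_inconsistent_blocks(gray, block=32, z_thresh=3.0):
    """Mark blocks whose noise level is an outlier relative to the image."""
    sigma = local_noise_map(gray, block)
    med = np.median(sigma)
    mad = 1.4826 * np.median(np.abs(sigma - med)) + 1e-8
    return np.abs(sigma - med) / mad > z_thresh  # True = suspicious block

# Usage (hypothetical input):
# gray = np.asarray(Image.open("blot.png").convert("L"), dtype=float)
# mask = flag_inconsistent_blocks(gray)
```

A learned detector, as the paper proposes, would replace this hand-tuned threshold with a model trained on the manipulated western blot and microscopy dataset.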
Related papers
- Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection [54.26588902144298]
We propose a knowledge-guided prompt learning method for deepfake facial image detection.
Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts.
Our proposed approach notably outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-01T02:18:18Z)
- A Comparative Study of Image Denoising Algorithms [0.0]
Digital images play a significant and foundational role in many areas such as image processing, computer vision, robotics, and biomedicine.
Images are likely to be corrupted or degraded by various degradation factors.
Several image denoising algorithms have been proposed in the literature focusing on robust, low-cost and fast techniques to improve output performance.
arXiv Detail & Related papers (2024-12-07T01:23:10Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, generated images, regardless of their generative model, are expected to be projected outside that subspace (see the sketch after this list).
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- ObjectFormer for Image Manipulation Detection and Localization [118.89882740099137]
We propose ObjectFormer to detect and localize image manipulations.
We extract high-frequency features of the images and combine them with RGB features as multimodal patch embeddings.
We conduct extensive experiments on various datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-03-28T12:27:34Z)
- Learning to identify image manipulations in scientific publications [37.6933210164122]
We propose a framework that combines image processing and deep learning methods to classify images in articles as duplicated or non-duplicated.
We show that our method achieves 90% accuracy in detecting duplicated images, a 13% improvement in detection accuracy over other manipulation detection methods.
arXiv Detail & Related papers (2021-02-03T04:47:34Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Learning Numerical Observers using Unsupervised Domain Adaptation [13.548174682737756]
Medical imaging systems are commonly assessed by use of objective image quality measures.
Supervised deep learning methods have been investigated to implement numerical observers for task-based image quality assessment.
However, labeling large amounts of experimental data to train deep neural networks is tedious, expensive, and prone to subjective errors.
arXiv Detail & Related papers (2020-02-03T22:58:28Z)
- Fabricated Pictures Detection with Graph Matching [0.36832029288386137]
Fabricating experimental pictures in research work is serious academic misconduct.
We present a framework to detect similar, or perhaps fabricated, pictures using graph matching techniques.
arXiv Detail & Related papers (2020-01-16T12:29:16Z)
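For the "Detecting Generated Images by Real Images Only" entry above, here is a minimal sketch of the one-class idea it describes: fit a low-dimensional subspace on features of real images only and score test images by how far they fall outside it. The PCA subspace, feature source, and threshold are illustrative assumptions, not that paper's actual pipeline.

```python
# One-class sketch of the "learn from real images only" idea:
# fit a subspace on real-image features, then score test images by
# their residual distance from that subspace. Feature extractor and
# threshold are placeholders, not the referenced paper's components.
import numpy as np

def fit_real_subspace(real_feats, k=32):
    """PCA on real-image features; returns the mean and top-k basis."""
    mean = real_feats.mean(axis=0)
    centered = real_feats - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]          # shapes: (d,), (k, d)

def outlier_score(feats, mean, basis):
    """Distance of each feature vector from the real-image subspace."""
    centered = feats - mean
    projected = centered @ basis.T @ basis
    return np.linalg.norm(centered - projected, axis=1)

# Usage (hypothetical features, e.g. from any pretrained backbone):
# mean, basis = fit_real_subspace(real_feats)             # real images only
# suspicious = outlier_score(test_feats, mean, basis) > tau
```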
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.