TeLL Me what you can't see
- URL: http://arxiv.org/abs/2503.19478v1
- Date: Tue, 25 Mar 2025 09:12:59 GMT
- Title: TeLL Me what you can't see
- Authors: Saverio Cavasin, Pietro Biasetton, Mattia Tamiazzo, Mauro Conti, Simone Milani
- Abstract summary: Law enforcement agencies often face challenges related to the scarcity of high-quality images or their obsolescence. This paper introduces a novel forensic mugshot augmentation framework aimed at addressing these limitations.
- Score: 17.66342269632214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During criminal investigations, images of persons of interest directly influence the success of identification procedures. However, law enforcement agencies often face challenges related to the scarcity of high-quality images or their obsolescence, which can reduce the accuracy and success of person-search processes. This paper introduces a novel forensic mugshot augmentation framework aimed at addressing these limitations. Our approach improves the identification probability of individuals by generating additional, high-quality images through customizable data augmentation techniques, while maintaining the biometric integrity and consistency of the original data. Experimental results show that our method significantly improves identification accuracy and robustness across various forensic scenarios, demonstrating its effectiveness as a trustworthy tool for law enforcement applications. Index Terms: Digital Forensics, Person re-identification, Feature extraction, Data augmentation, Visual-Language models.
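The abstract's core idea, generating extra mugshots while preserving biometric consistency with the original, can be sketched as a similarity gate over face embeddings: keep only those augmented candidates whose embedding stays close to the original's. This is a minimal illustrative sketch, not the paper's implementation; `embed` is a hypothetical stand-in (a deterministic toy, where a real pipeline would use a face-recognition network), and the threshold value is an assumption.

```python
import math
import random


def embed(image: bytes, dim: int = 8) -> list:
    """Toy stand-in for a face-embedding model: deterministic per image.
    A real pipeline would run a face-recognition network here."""
    rng = random.Random(image)  # same image bytes -> same vector
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def filter_augmentations(original: bytes, candidates: list,
                         threshold: float = 0.35) -> list:
    """Keep only augmented images biometrically consistent with the original,
    i.e. whose embedding similarity to the original clears the threshold."""
    ref = embed(original)
    return [c for c in candidates if cosine(ref, embed(c)) >= threshold]
```

In a real system the gate would sit between the augmentation generator and the gallery used for re-identification, so that only identity-consistent images enrich the search index.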
Related papers
- FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics [66.14786900470158]
We propose FakeScope, a large multimodal model (LMM) tailored for AI-generated image forensics.
FakeScope identifies AI-synthetic images with high accuracy and provides rich, interpretable, and query-driven forensic insights.
FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios.
arXiv Detail & Related papers (2025-03-31T16:12:48Z)
- Anonymization Prompt Learning for Facial Privacy-Preserving Text-to-Image Generation [56.46932751058042]
We train a learnable prompt prefix for text-to-image diffusion models, which forces the model to generate anonymized facial identities.
Experiments demonstrate the successful anonymization performance of APL, which anonymizes any specific individuals without compromising the quality of non-identity-specific image generation.
arXiv Detail & Related papers (2024-05-27T07:38:26Z)
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [6.492755549391469]
Face recognition systems are widely deployed in high-security applications. Digital manipulations, such as face morphing, pose a security threat to face recognition systems. We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Individualized Deepfake Detection Exploiting Traces Due to Double Neural-Network Operations [32.33331065408444]
This study focuses on deepfake detection for facial images of individual public figures. Existing deepfake detectors are not optimized for this setting, in which an image is associated with a specific, identifiable individual.
We demonstrate that the detection performance can be improved by exploiting the idempotency property of neural networks.
arXiv Detail & Related papers (2023-12-13T10:21:00Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Disguise without Disruption: Utility-Preserving Face De-Identification [40.484745636190034]
We introduce Disguise, a novel algorithm that seamlessly de-identifies facial images while ensuring the usability of the modified data.
Our method involves extracting and substituting depicted identities with synthetic ones, generated using variational mechanisms to maximize obfuscation and non-invertibility.
We extensively evaluate our method using multiple datasets, demonstrating a higher de-identification rate and superior consistency compared to prior approaches in various downstream tasks.
arXiv Detail & Related papers (2023-03-23T13:50:46Z)
- Psychophysical Evaluation of Human Performance in Detecting Digital Face Image Manipulations [14.63266615325105]
This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics.
We examine human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching.
arXiv Detail & Related papers (2022-01-28T12:45:33Z)
- IdentityDP: Differential Private Identification Protection for Face Images [17.33916392050051]
Face de-identification, also known as face anonymization, refers to generating another image with similar appearance and the same background, while the real identity is hidden.
We propose IdentityDP, a face anonymization framework that combines a data-driven deep neural network with a differential privacy mechanism.
Our model can effectively obfuscate the identity-related information of faces, preserve significant visual similarity, and generate high-quality images.
arXiv Detail & Related papers (2021-03-02T14:26:00Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.