CriSp: Leveraging Tread Depth Maps for Enhanced Crime-Scene Shoeprint Matching
- URL: http://arxiv.org/abs/2404.16972v2
- Date: Tue, 30 Jul 2024 19:07:49 GMT
- Title: CriSp: Leveraging Tread Depth Maps for Enhanced Crime-Scene Shoeprint Matching
- Authors: Samia Shafique, Shu Kong, Charless Fowlkes
- Abstract summary: Shoeprints are a common type of evidence found at crime scenes and are used regularly in forensic investigations.
Existing methods cannot effectively employ deep learning techniques to match noisy and occluded crime-scene shoeprints to a shoe database.
We propose CriSp, which learns to match crime-scene shoeprints to tread depth maps predicted from shoe-tread images collected from online retailers.
CriSp significantly outperforms state-of-the-art methods in both automated shoeprint matching and image retrieval tailored to this task.
- Score: 8.153893958726117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shoeprints are a common type of evidence found at crime scenes and are used regularly in forensic investigations. However, existing methods cannot effectively employ deep learning techniques to match noisy and occluded crime-scene shoeprints to a shoe database due to a lack of training data. Moreover, all existing methods match crime-scene shoeprints to clean reference prints, yet our analysis shows matching to more informative tread depth maps yields better retrieval results. The matching task is further complicated by the necessity to identify similarities only in corresponding regions (heels, toes, etc.) of prints and shoe treads. To overcome these challenges, we leverage shoe tread images from online retailers and utilize an off-the-shelf predictor to estimate depth maps and clean prints. Our method, named CriSp, matches crime-scene shoeprints to tread depth maps by training on this data. CriSp incorporates data augmentation to simulate crime-scene shoeprints, an encoder to learn spatially-aware features, and a masking module to ensure only visible regions of crime-scene prints affect retrieval results. To validate our approach, we introduce two validation sets by reprocessing existing datasets of crime-scene shoeprints and establish a benchmarking protocol for comparison. On this benchmark, CriSp significantly outperforms state-of-the-art methods in both automated shoeprint matching and image retrieval tailored to this task.
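The abstract outlines three components: augmentations that simulate crime-scene prints, an encoder that keeps spatially-aware features, and a masking module so that only visible print regions influence retrieval. Below is a minimal PyTorch sketch of that masked, spatially-aware retrieval idea; it is not the authors' implementation, and the ResNet-18 trunk, feature resolution, and similarity formula are illustrative assumptions.

```python
# Hedged sketch (not the CriSp code): compare a crime-scene print against tread
# depth maps with spatial features, letting only visible query regions contribute.
import torch
import torch.nn.functional as F
import torchvision

class SpatialEncoder(torch.nn.Module):
    """Backbone that keeps a coarse spatial grid of features instead of one vector."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.trunk = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc

    def forward(self, x):                      # x: (B, 3, H, W)
        feats = self.trunk(x)                  # (B, 512, H/32, W/32)
        return F.normalize(feats, dim=1)       # unit-norm per spatial cell

def masked_similarity(query_feats, db_feats, visibility_mask):
    """Cosine similarity averaged over spatial cells visible in the query print.

    query_feats: (1, C, h, w), db_feats: (N, C, h, w),
    visibility_mask: (1, 1, H, W) with 1 where the print is visible.
    """
    mask = F.interpolate(visibility_mask, size=query_feats.shape[-2:], mode="nearest")
    sims = (query_feats * db_feats).sum(dim=1, keepdim=True)        # (N, 1, h, w)
    return (sims * mask).flatten(1).sum(1) / mask.flatten(1).sum(1).clamp(min=1)

# Usage: rank database tread depth maps against one occluded crime-scene print.
encoder = SpatialEncoder().eval()
with torch.no_grad():
    query = torch.rand(1, 3, 256, 128)                  # simulated crime-scene print
    mask = (torch.rand(1, 1, 256, 128) > 0.3).float()   # visible-region mask
    database = torch.rand(8, 3, 256, 128)               # tread depth maps (as 3-channel images)
    scores = masked_similarity(encoder(query), encoder(database), mask)
    ranking = scores.argsort(descending=True)           # best-matching tread first
```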
Related papers
- Generating Automatically Print/Scan Textures for Morphing Attack Detection Applications [7.287930923353593]
One of the main scenarios is printing morphed images and submitting the respective print in a passport application process.
Only small datasets are available for training MAD algorithms because of privacy concerns.
This paper proposes two transfer-based methods for automatically creating digital print/scan face images.
arXiv Detail & Related papers (2024-08-18T17:53:26Z)
- Improving and Evaluating Machine Learning Methods for Forensic Shoeprint Matching [0.2509487459755192]
We propose a machine learning pipeline for forensic shoeprint pattern matching.
We extract 2D coordinates from shoeprint scans using edge detection and align the two shoeprints with iterative closest point (ICP).
We then extract similarity metrics to quantify how well the two prints match and use these metrics to train a random forest.
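A minimal sketch of the pipeline this summary describes (edge points aligned with ICP, simple similarity metrics, a random forest match scorer), using toy data and illustrative metrics rather than the authors' actual features:

```python
# Hedged sketch, not the paper's code: ICP alignment + similarity features + random forest.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def icp_align(src, dst, iters=30):
    """Rigidly align src (N,2) edge points to dst (M,2) via nearest neighbours + SVD."""
    tree = cKDTree(dst)
    for _ in range(iters):
        nn = dst[tree.query(src)[1]]                      # closest dst point per src point
        mu_s, mu_d = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_d))
        if np.linalg.det(U @ Vt) < 0:                     # guard against reflections
            Vt[-1] *= -1
        R = (U @ Vt).T
        src = (src - mu_s) @ R.T + mu_d                   # rotate + translate
    return src

def match_features(src, dst):
    """Illustrative similarity metrics between aligned point sets."""
    d = cKDTree(dst).query(src)[0]
    return [d.mean(), np.median(d), (d < 0.1).mean()]     # mean/median distance, inlier ratio

# Train a random forest on mated / non-mated print pairs (toy data and labels here).
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=(200, 2)), rng.normal(size=(200, 2))) for _ in range(40)]
X = np.array([match_features(icp_align(a, b), b) for a, b in pairs])
y = rng.integers(0, 2, size=len(pairs))                   # placeholder mated/non-mated labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```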
arXiv Detail & Related papers (2024-04-02T15:24:25Z)
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Comprint: Image Forgery Detection and Localization using Compression Fingerprints [19.54952278001317]
Comprint is a novel forgery detection and localization method based on the compression fingerprint or comprint.
We propose a fusion of Comprint with the state-of-the-art Noiseprint, which utilizes a complementary camera model fingerprint.
Comprint and the fusion Comprint+Noiseprint represent a promising forensics tool to analyze in-the-wild tampered images.
arXiv Detail & Related papers (2022-10-05T13:05:18Z)
- ShoeRinsics: Shoeprint Prediction for Forensics with Intrinsic Decomposition [29.408442567550004]
We propose to leverage shoe tread photographs collected by online retailers.
We develop a model that performs intrinsic image decomposition from a single tread photo.
Our approach, which we term ShoeRinsics, combines domain adaptation and re-rendering losses in order to leverage a mix of fully supervised synthetic data and unsupervised retail image data.
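A rough sketch of how such a loss composition could look, assuming hypothetical decomposer/renderer/discriminator modules and illustrative weights; this is not the ShoeRinsics code:

```python
# Hedged sketch of combining a supervised term (synthetic data), a re-rendering
# term (unlabeled retail photos), and an adversarial domain-adaptation term.
import torch
import torch.nn.functional as F

def total_loss(decomposer, renderer, discriminator,
               synth_img, synth_depth, real_img, w_render=1.0, w_adv=0.1):
    # Supervised term on synthetic treads, where ground-truth depth exists.
    pred_synth = decomposer(synth_img)
    loss_sup = F.l1_loss(pred_synth["depth"], synth_depth)

    # Re-rendering term on unlabeled retail photos: the predicted intrinsics
    # must re-render back to the input image.
    pred_real = decomposer(real_img)
    loss_render = F.l1_loss(renderer(pred_real), real_img)

    # Domain-adaptation term: features of real images should be classified as
    # "synthetic" by a domain discriminator (non-saturating adversarial loss).
    logits = discriminator(pred_real["features"])
    loss_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    return loss_sup + w_render * loss_render + w_adv * loss_adv
```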
arXiv Detail & Related papers (2022-05-04T23:42:55Z)
- SpoofGAN: Synthetic Fingerprint Spoof Images [47.87570819350573]
A major limitation to advances in fingerprint spoof detection is the lack of publicly available, large-scale fingerprint spoof datasets.
This work aims to demonstrate the utility of synthetic (both live and spoof) fingerprints in supplying these algorithms with sufficient data.
arXiv Detail & Related papers (2022-04-13T16:27:27Z)
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
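A minimal sketch of this copy-paste pair generation, using a random rectangle in place of a real object segment and assuming equal-sized images:

```python
# Hedged sketch, not the authors' code: cut a segment from one image, paste it
# into another, and record the shared region as the training target.
import numpy as np

def make_training_pair(img_a, img_b, rng):
    """Return (img_a, img_b_with_pasted_segment, mask_of_shared_region)."""
    h, w, _ = img_a.shape
    # Pick a random rectangular "segment" in img_a (a real system would use
    # object segments, e.g. from an off-the-shelf segmenter).
    sh, sw = rng.integers(h // 8, h // 2), rng.integers(w // 8, w // 2)
    y, x = rng.integers(0, h - sh), rng.integers(0, w - sw)
    segment = img_a[y:y + sh, x:x + sw]

    # Paste at a random location in img_b and record where it landed.
    out = img_b.copy()
    mask = np.zeros((h, w), dtype=bool)
    py, px = rng.integers(0, h - sh), rng.integers(0, w - sw)
    out[py:py + sh, px:px + sw] = segment
    mask[py:py + sh, px:px + sw] = True
    return img_a, out, mask

rng = np.random.default_rng(0)
a, b = rng.random((128, 128, 3)), rng.random((128, 128, 3))
_, pasted, shared = make_training_pair(a, b, rng)
```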
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
- Grasp-Oriented Fine-grained Cloth Segmentation without Real Supervision [66.56535902642085]
This paper tackles the problem of fine-grained region detection in deformed clothes using only a depth image.
We define up to 6 semantic regions of varying extent, including edges on the neckline, sleeve cuffs, and hem, plus top and bottom grasping points.
We introduce a U-net based network to segment and label these parts.
We show that training our network solely with synthetic data and the proposed DA yields results competitive with models trained on real data.
arXiv Detail & Related papers (2021-10-06T16:31:20Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
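For context, a minimal sketch of plain backpropagation saliency, the baseline family that CAMERAS refines; this is not the CAMERAS method itself, only the basic gradient-based importance map it improves upon:

```python
# Hedged sketch: per-pixel importance from the gradient of the top class score.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder input

score = model(image)[0].max()               # top class logit
score.backward()                            # d(score) / d(pixels)
saliency = image.grad.abs().max(dim=1)[0]   # (1, 224, 224) importance map
```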
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains [27.91871214060683]
We propose an approach which does not require explicit saliency maps to improve image classification.
We show that our approach obtains results similar to those achieved when the saliency maps are provided explicitly.
In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real image saliency benchmark (Toronto).
arXiv Detail & Related papers (2020-07-24T15:08:55Z)
- Latent Fingerprint Registration via Matching Densely Sampled Points [100.53031290339483]
Existing latent fingerprint registration approaches are mainly based on establishing correspondences between minutiae.
We propose a non-minutia latent fingerprint registration method which estimates the spatial transformation between a pair of fingerprints.
The proposed method achieves the state-of-the-art registration performance, especially under challenging conditions.
arXiv Detail & Related papers (2020-05-12T15:51:59Z)