Cross-Domain Identity Representation for Skull to Face Matching with Benchmark DataSet
- URL: http://arxiv.org/abs/2507.08329v1
- Date: Fri, 11 Jul 2025 05:49:12 GMT
- Title: Cross-Domain Identity Representation for Skull to Face Matching with Benchmark DataSet
- Authors: Ravi Shankar Prasad, Dinesh Singh
- Abstract summary: We present a framework for the identification of a person given the X-ray image of a skull using convolutional Siamese networks for cross-domain identity representation. Siamese networks are twin networks that share the same architecture and can be trained to discover a feature space where nearby observations that are similar are grouped and dissimilar observations are moved apart.
- Score: 6.1655282360871375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Craniofacial reconstruction in forensic science is crucial for the identification of the victims of crimes and disasters. The objective is to map a given skull to its corresponding face in a corpus of faces with known identities using recent advancements in computer vision, such as deep learning. In this paper, we present a framework for the identification of a person given the X-ray image of a skull using convolutional Siamese networks for cross-domain identity representation. Siamese networks are twin networks that share the same architecture and can be trained to discover a feature space where nearby observations that are similar are grouped and dissimilar observations are moved apart. To do this, the network is exposed to two sets of comparable and different data. The Euclidean distance is then minimized between similar pairs and maximized between dissimilar ones. Since getting pairs of skull and face images is difficult, we prepared our own dataset of 40 volunteers whose front and side skull X-ray images and optical face images were collected. Experiments were conducted on the collected cross-domain dataset to train and validate the Siamese networks. The experiments yield satisfactory results on identifying a person from a given skull.
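The contrastive objective described above (Euclidean distance minimized between similar pairs, pushed beyond a margin for dissimilar ones) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the twin CNN branches and the training loop are omitted, and the embedding values are made up.

```python
import math

def contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Contrastive loss for one pair of embeddings.

    similar=True pulls the pair together (penalizes any distance);
    similar=False pushes the pair apart until the margin is reached.
    """
    # Euclidean distance between the two embedding vectors.
    d = math.dist(emb_a, emb_b)
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Hypothetical 2-D embeddings of a skull X-ray and two face images.
skull = [0.2, 0.8]
face_match = [0.25, 0.75]   # same identity: embeddings lie close together
face_other = [1.5, -0.9]    # different identity: embeddings lie far apart

print(contrastive_loss(skull, face_match, similar=True))   # small loss: pair is close
print(contrastive_loss(skull, face_other, similar=False))  # 0.0: pair is beyond the margin
```

In training, both inputs of a pair pass through the same shared-weight network, so minimizing this loss over many skull/face pairs shapes a common feature space for the two modalities.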
Related papers
- SPOT-Face: Forensic Face Identification using Attention Guided Optimal Transport [2.9936254916060503]
SPOT-Face is a superpixel graph-based framework designed for cross-domain forensic face identification. Our framework proves highly effective for matching skulls and sketches to faces in forensic investigations.
arXiv Detail & Related papers (2026-01-14T07:02:21Z) - Cranio-ID: Graph-Based Craniofacial Identification via Automatic Landmark Annotation in 2D Multi-View X-rays [2.4382430407654767]
Traditional methods for locating craniometric landmarks are time-consuming and require specialized knowledge and expertise. We propose a novel framework, Cranio-ID: first, automatic annotation of landmarks on 2D skulls and their respective optical images; second, cross-modal matching by formulating these landmarks into graph representations and then finding semantic correspondences between the graphs of the two modalities.
arXiv Detail & Related papers (2025-11-18T12:15:22Z) - FCR: Investigating Generative AI models for Forensic Craniofacial Reconstruction [2.9936254916060503]
We propose a generic framework for craniofacial reconstruction from 2D X-ray images. This is the first time 2D X-rays have been used as a representation of the skull by generative models for craniofacial reconstruction. Experimental results show that this can be an effective tool for forensic science.
arXiv Detail & Related papers (2025-08-25T13:52:59Z) - Skull-to-Face: Anatomy-Guided 3D Facial Reconstruction and Editing [34.39385635485985]
Deducing the 3D face from a skull is a challenging task in forensic science and archaeology. This paper proposes an end-to-end 3D face reconstruction pipeline and an exploration method. Experiments conducted on a real skull-face dataset demonstrated the effectiveness of our proposed pipeline.
arXiv Detail & Related papers (2024-03-24T16:03:27Z) - Deep Learning Based Face Recognition Method using Siamese Network [0.0]
We propose employing Siamese networks for face recognition, eliminating the need for labeled face images.
We achieve this by strategically leveraging negative samples alongside nearest neighbor counterparts.
The proposed unsupervised system delivers a performance on par with a similar but fully supervised baseline.
arXiv Detail & Related papers (2023-12-21T16:35:11Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Benchmarking Human Face Similarity Using Identical Twins [5.93228031688634]
The problem of distinguishing identical twins and non-twin look-alikes in automated facial recognition (FR) applications has become increasingly important.
This work presents an application of one of the largest twin datasets compiled to date to address two FR challenges.
arXiv Detail & Related papers (2022-08-25T01:45:02Z) - Simultaneous Bone and Shadow Segmentation Network using Task Correspondence Consistency [60.378180265885945]
We propose a single end-to-end network with a shared transformer-based encoder and task independent decoders for simultaneous bone and shadow segmentation.
We also introduce a correspondence consistency loss which ensures that the network exploits the inter-dependency between the bone surface and its corresponding shadow to refine the segmentation.
arXiv Detail & Related papers (2022-06-16T22:37:05Z) - Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z) - Fused Deep Neural Network based Transfer Learning in Occluded Face Classification and Person re-Identification [0.0]
This paper aims to recognize which of four occlusion types is present in face images.
Various transfer learning methods were tested, and the results show that MobileNet V2 with a Gated Recurrent Unit (GRU) performs better than the other transfer learning methods.
arXiv Detail & Related papers (2022-05-15T07:13:33Z) - Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z) - Robust Facial Landmark Detection by Cross-order Cross-semantic Deep Network [58.843211405385205]
We propose a cross-order cross-semantic deep network (CCDN) to boost the semantic features learning for robust facial landmark detection.
Specifically, a cross-order two-squeeze multi-excitation (CTM) module is proposed to introduce the cross-order channel correlations for more discriminative representations learning.
A novel cross-order cross-semantic (COCS) regularizer is designed to drive the network to learn cross-order cross-semantic features from different activations for facial landmark detection.
arXiv Detail & Related papers (2020-11-16T08:19:26Z) - Unsupervised Landmark Learning from Unpaired Data [117.81440795184587]
Recent attempts for unsupervised landmark learning leverage synthesized image pairs that are similar in appearance but different in poses.
We propose a cross-image cycle consistency framework which applies the swapping-reconstruction strategy twice to obtain the final supervision.
Our proposed framework is shown to outperform strong baselines by a large margin.
arXiv Detail & Related papers (2020-06-29T13:57:20Z) - Ear2Face: Deep Biometric Modality Mapping [9.560980936110234]
We present an end-to-end deep neural network model that learns a mapping between the biometric modalities.
We formulated the problem as a paired image-to-image translation task and collected datasets of ear and face image pairs.
We have achieved very promising results, especially on the FERET dataset, generating visually appealing face images from ear image inputs.
arXiv Detail & Related papers (2020-06-02T21:14:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.