Can Shadows Reveal Biometric Information?
- URL: http://arxiv.org/abs/2209.10077v1
- Date: Wed, 21 Sep 2022 02:36:32 GMT
- Title: Can Shadows Reveal Biometric Information?
- Authors: Safa C. Medin, Amir Weiss, Frédo Durand, William T. Freeman, Gregory W. Wornell
- Abstract summary: We show that the biometric information leakage from shadows can be sufficient for reliable identity inference under representative scenarios.
We then develop a learning-based method that demonstrates this phenomenon in real settings.
- Score: 48.3561395627331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of extracting biometric information of individuals by
looking at shadows of objects cast on diffuse surfaces. We show that the
biometric information leakage from shadows can be sufficient for reliable
identity inference under representative scenarios via a maximum likelihood
analysis. We then develop a learning-based method that demonstrates this
phenomenon in real settings, exploiting the subtle cues in the shadows that are
the source of the leakage without requiring any labeled real data. In
particular, our approach relies on building synthetic scenes composed of 3D
face models obtained from a single photograph of each identity. We transfer
what we learn from the synthetic data to the real data using domain adaptation
in a completely unsupervised way. Our model is able to generalize well to the
real domain and is robust to several variations in the scenes. We report high
classification accuracies in an identity classification task that takes place
in a scene with unknown geometry and occluding objects.
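This listing contains neither code nor equations, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes (i) the maximum likelihood analysis amounts to picking the identity k that maximizes p(y | identity k) for an observed shadow image y, and (ii) the unsupervised domain adaptation is DANN-style gradient reversal, with the identity head supervised only by synthetic labels while a domain head aligns synthetic and real features. The names (`ShadowIDNet`, `GradReverse`), the grayscale input assumption, and the architecture sizes are all hypothetical:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass; gradient negated
    (scaled by lam) on the backward pass, as in DANN-style adaptation."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class ShadowIDNet(nn.Module):
    """Shared features feed (i) an identity head, trained only on labeled
    synthetic shadow renderings, and (ii) a domain head that the feature
    extractor learns to fool, aligning synthetic and real features."""
    def __init__(self, n_identities):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),  # assumes grayscale input
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(32, n_identities)
        self.domain_head = nn.Linear(32, 2)  # 0 = synthetic, 1 = real

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.id_head(f), self.domain_head(GradReverse.apply(f, lam))

def train_step(model, opt, syn_x, syn_ids, real_x, lam=0.5):
    """One adversarial step on a labeled synthetic batch and an unlabeled
    real batch; the reversed gradient makes the features domain-invariant."""
    ce = nn.CrossEntropyLoss()
    id_logits, syn_dom = model(syn_x, lam)
    _, real_dom = model(real_x, lam)
    dom_logits = torch.cat([syn_dom, real_dom])
    dom_labels = torch.cat([torch.zeros(len(syn_x), dtype=torch.long),
                            torch.ones(len(real_x), dtype=torch.long)])
    loss = ce(id_logits, syn_ids) + ce(dom_logits, dom_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

With this setup, identity labels never touch real data: the classifier is trained on synthetic shadow renderings while the reversed domain gradient pushes the feature extractor toward representations the domain head cannot separate, matching the "completely unsupervised" transfer described in the abstract.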
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- Towards Zero-Shot Interpretable Human Recognition: A 2D-3D Registration Framework [16.15084484295732]
It is important to provide evidence that can be used for forensic/legal purposes (e.g., in courts).
This paper describes the first recognition framework/strategy that aims to address the three weaknesses simultaneously.
arXiv Detail & Related papers (2024-03-11T12:27:20Z)
- Deepfake detection by exploiting surface anomalies: the SurFake approach [29.088218634944116]
This paper investigates how deepfake creation can affect the characteristics that the whole scene had at the time of acquisition.
By analyzing the characteristics of the surfaces depicted in the image, it is possible to obtain a descriptor that can be used to train a CNN for deepfake detection.
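The summary gives no implementation details; as a rough sketch, one could assume the surface descriptor is a per-pixel map stacked with the RGB image and the detector is a small binary CNN. Both are assumptions, and the name `SurfaceCNN` is hypothetical:

```python
import torch
import torch.nn as nn

class SurfaceCNN(nn.Module):
    """Binary real/fake classifier over an RGB image concatenated with a
    surface-descriptor map (assumed here to be one extra channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),  # logits: [real, fake]
        )

    def forward(self, rgb, surface_map):
        # rgb: (B, 3, H, W); surface_map: (B, 1, H, W)
        return self.net(torch.cat([rgb, surface_map], dim=1))
```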
arXiv Detail & Related papers (2023-10-31T16:54:14Z)
- DisPositioNet: Disentangled Pose and Identity in Semantic Image Manipulation [83.51882381294357]
DisPositioNet is a model that learns a disentangled representation for each object for the task of image manipulation using scene graphs.
Our framework enables the disentanglement of the variational latent embeddings as well as the feature representation in the graph.
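DisPositioNet's actual architecture is graph-based and variational; purely as a toy illustration of what a disentangled pose/identity representation enables, the two halves of an object embedding could be recombined across objects. The latent split and a downstream `decode` step are hypothetical:

```python
import torch

def split_latent(z, pose_dim):
    """Treat the first pose_dim entries as pose, the rest as identity."""
    return z[..., :pose_dim], z[..., pose_dim:]

def swap_pose(z_a, z_b, pose_dim):
    """Build a latent with object A's identity in object B's pose."""
    pose_b, _ = split_latent(z_b, pose_dim)
    _, id_a = split_latent(z_a, pose_dim)
    return torch.cat([pose_b, id_a], dim=-1)

z_a, z_b = torch.randn(2, 16), torch.randn(2, 16)
z_mixed = swap_pose(z_a, z_b, pose_dim=8)  # decode(z_mixed) would re-render
```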
arXiv Detail & Related papers (2022-11-10T11:47:37Z)
- Towards 3D Scene Understanding by Referring Synthetic Models [65.74211112607315]
Methods typically rely on extensive annotations of real scene scans.
We explore how labeled synthetic models can stand in for such annotations by aligning synthetic and real features in a unified feature space.
Experiments show that our method achieves an average mAP of 46.08% on the ScanNet dataset and 55.49% on the S3DIS dataset by learning from synthetic models.
arXiv Detail & Related papers (2022-03-20T13:06:15Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
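The summary mentions decomposing the image into normals, albedos, and shading; the paper's exact model is not given here, but under a standard Lambertian assumption (my assumption, not the authors' stated formulation) these parts relate as:

```latex
% Lambertian image formation (illustrative): intensity I at pixel p is
% albedo rho times shading s, with shading from normal n and light l.
\[
I(p) = \rho(p)\, s(p), \qquad
s(p) = \max\bigl(0,\ \mathbf{n}(p) \cdot \mathbf{l}\bigr)
\]
```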
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Generation of Non-Deterministic Synthetic Face Datasets Guided by Identity Priors [19.095368725147367]
We propose a non-deterministic method for generating mated face images by exploiting the well-structured latent space of StyleGAN.
We create a new dataset of synthetic face images (SymFace) consisting of 77,034 samples including 25,919 synthetic IDs.
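The summary only says mated samples are drawn from StyleGAN's well-structured latent space. As a sketch of one plausible reading (the perturbation scheme and the placeholder generator are assumptions, not the paper's method), mated images of one synthetic identity could come from small random perturbations around a shared latent:

```python
import torch

# Stand-ins for a pretrained StyleGAN's mapping and synthesis networks;
# in practice these would come from a released checkpoint.
mapping = torch.nn.Linear(512, 512)          # z -> w (placeholder)
synthesis = lambda w: torch.tanh(w)[:, :3]   # w -> "image" (placeholder)

def mated_samples(n_mated, sigma=0.05):
    """One synthetic identity = one anchor latent w; mated samples are
    small non-deterministic perturbations of w in W space, so identity
    is (approximately) preserved across the set."""
    z = torch.randn(1, 512)
    w = mapping(z)
    ws = w + sigma * torch.randn(n_mated, 512)
    return synthesis(ws)

imgs = mated_samples(4)  # four mated "images" of the same synthetic ID
```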
arXiv Detail & Related papers (2021-12-07T11:08:47Z)
- Fake It Till You Make It: Face analysis in the wild using synthetic data alone [9.081019005437309]
We show that it is possible to perform face-related computer vision in the wild using synthetic data alone.
We describe how to combine a procedurally-generated 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism.
arXiv Detail & Related papers (2021-09-30T13:07:04Z)
- Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image to be the product of the interaction between the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
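Schematically (my notation, not the paper's), the decomposition described above can be written as:

```latex
% Image I rendered from 3D shape S, common texture T_c, identity texture
% T_id, ambient light L_a, and direct light L_d; the facial-detail cue D
% combines the two components where the subtle forgery patterns hide.
\[
I = R\bigl(S,\, T_c,\, T_{\mathrm{id}},\, L_a,\, L_d\bigr),
\qquad
D = g\bigl(L_d,\, T_{\mathrm{id}}\bigr)
\]
```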
arXiv Detail & Related papers (2020-11-19T09:25:44Z)
- Methodology for Building Synthetic Datasets with Virtual Humans [1.5556923898855324]
Large datasets can be used for improved, targeted training of deep neural networks.
In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities.
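The 3D morphable face model mentioned here is a standard construction. In the usual formulation (standard 3DMM notation, not taken from this paper), a face shape is the mean shape plus a linear combination of learned basis shapes, and sampling new coefficients yields the dataset's distinct synthetic identities:

```latex
% 3D morphable model: mean shape \bar{S} plus identity basis shapes s_i
% weighted by per-identity coefficients alpha_i, typically sampled from
% the Gaussian prior learned with the PCA basis.
\[
S(\boldsymbol{\alpha}) = \bar{S} + \sum_{i=1}^{m} \alpha_i\, s_i,
\qquad
\boldsymbol{\alpha} \sim \mathcal{N}\bigl(\mathbf{0},\,
\operatorname{diag}(\sigma_1^2, \dots, \sigma_m^2)\bigr)
\]
```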
arXiv Detail & Related papers (2020-06-21T10:29:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.