Fast refacing of MR images with a generative neural network lowers
re-identification risk and preserves volumetric consistency
- URL: http://arxiv.org/abs/2305.16922v1
- Date: Fri, 26 May 2023 13:34:14 GMT
- Title: Fast refacing of MR images with a generative neural network lowers
re-identification risk and preserves volumetric consistency
- Authors: Nataliia Molchanova, Bénédicte Maréchal, Jean-Philippe Thiran, Tobias Kober, Till Huelnhagen, Jonas Richiardi
- Abstract summary: We propose a novel method for anonymised face generation for 3D T1-weighted scans based on a 3D conditional generative adversarial network.
The proposed method takes 9 seconds for face generation and is suitable for recovering consistent post-processing results after defacing.
- Score: 5.040145546652933
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the rise of open data, identifiability of individuals based on 3D
renderings obtained from routine structural magnetic resonance imaging (MRI)
scans of the head has become a growing privacy concern. To protect subject
privacy, several algorithms have been developed to de-identify imaging data
using blurring, defacing or refacing. Completely removing facial structures
provides the best re-identification protection but can significantly impact
post-processing steps, like brain morphometry. As an alternative, refacing
methods that replace individual facial structures with generic templates have a
lower effect on the geometry and intensity distribution of original scans, and
are able to provide more consistent post-processing results, at the price of
higher re-identification risk and computational complexity. In the current
study, we propose a novel method for anonymised face generation for defaced 3D
T1-weighted scans based on a 3D conditional generative adversarial network. To
evaluate the performance of the proposed de-identification tool, a comparative
study was conducted between several existing defacing and refacing tools, with
two different segmentation algorithms (FAST and Morphobox). The aim was to
evaluate (i) impact on brain morphometry reproducibility, (ii)
re-identification risk, (iii) balance between (i) and (ii), and (iv) the
processing time. The proposed method takes 9 seconds for face generation and is
suitable for recovering consistent post-processing results after defacing.
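To make the core idea concrete, the sketch below shows a 3D conditional GAN in PyTorch that fills the zeroed face region of a defaced T1-weighted crop with a synthetic, anonymised face. This is a minimal illustration under assumptions, not the authors' released code: the network widths, loss weights, crop size, and training step are all hypothetical.

```python
# Illustrative sketch only: a 3D conditional GAN for refacing, where both the
# generator and the discriminator are conditioned on the defaced T1w crop.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(cin, cout, stride=1):
    """3D conv + instance norm + LeakyReLU."""
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm3d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )


class Generator3D(nn.Module):
    """Encoder-decoder conditioned on the defaced volume (1 input channel)."""

    def __init__(self, base=16):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, base),
            conv_block(base, base * 2, stride=2),      # 96^3 -> 48^3
            conv_block(base * 2, base * 4, stride=2),  # 48^3 -> 24^3
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2),  # 24^3 -> 48^3
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose3d(base * 2, base, 2, stride=2),      # 48^3 -> 96^3
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, 1, kernel_size=1),
        )

    def forward(self, defaced):
        return torch.tanh(self.decoder(self.encoder(defaced)))


class Discriminator3D(nn.Module):
    """PatchGAN-style critic on (defaced, candidate face) channel pairs."""

    def __init__(self, base=16):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, base, stride=2),
            conv_block(base, base * 2, stride=2),
            conv_block(base * 2, base * 4, stride=2),
            nn.Conv3d(base * 4, 1, kernel_size=3, padding=1),  # patch logits
        )

    def forward(self, defaced, face):
        return self.net(torch.cat([defaced, face], dim=1))


# One illustrative adversarial step on a hypothetical 96^3 face-region crop.
G, D = Generator3D(), Discriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

defaced = torch.randn(1, 1, 96, 96, 96)   # stand-in for a defaced T1w crop
target = torch.randn(1, 1, 96, 96, 96)    # stand-in for a reference face crop

fake = G(defaced)

# Discriminator update: real pairs -> 1, generated pairs -> 0.
real_logits = D(defaced, target)
fake_logits = D(defaced, fake.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool the critic + stay close to the reference face (L1).
gen_logits = D(defaced, fake)
g_loss = bce(gen_logits, torch.ones_like(gen_logits)) + 10.0 * F.l1_loss(fake, target)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At inference time only the generator is needed: a single forward pass over a face-region crop, which is consistent with the few-second refacing runtime reported in the abstract.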
Related papers
- Synthetic Forehead-creases Biometric Generation for Reliable User Verification [6.639785884921617]
We present a new framework to synthesize forehead-crease image data while maintaining important features, such as uniqueness and realism.
We evaluate the diversity and realism of the generated forehead-crease images using the Fréchet Inception Distance (FID) and the Structural Similarity Index Measure (SSIM).
arXiv Detail & Related papers (2024-08-28T10:33:00Z)
- AI-based association analysis for medical imaging using latent-space geometric confounder correction [6.488049546344972]
We introduce an AI method emphasizing semantic feature interpretation and resilience against multiple confounders.
Our approach's merits are tested in three scenarios, including extracting confounder-free features from a 2D synthetic dataset and examining the association between prenatal alcohol exposure and children's facial shapes using 3D mesh data.
Results confirm our method effectively reduces confounder influences, establishing less confounded associations.
arXiv Detail & Related papers (2023-10-03T16:09:07Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction [64.36535692191343]
Implicit neural representations have shown compelling results in offline 3D reconstruction and also recently demonstrated the potential for online SLAM systems.
This paper addresses two key challenges: 1) seeking a criterion to measure the quality of the candidate viewpoints for the view planning based on the new representations, and 2) learning the criterion from data that can generalize to different scenes instead of hand-crafting one.
Our method demonstrates significant improvements on various metrics for the rendered image quality and the geometry quality of the reconstructed 3D models when compared with variants using TSDF or reconstruction without view planning.
arXiv Detail & Related papers (2022-07-22T10:05:36Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present deep neural network methodology to reconstruct the 3d pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Human Recognition Using Face in Computed Tomography [26.435782518817295]
We propose an automatic processing pipeline that first detects facial landmarks in 3D for ROI extraction and then generates aligned 2D depth images, which are used for automatic recognition.
Our method achieves a 1:56 identification accuracy of 92.53% and a 1:1 verification accuracy of 96.12%, outperforming other competing approaches.
arXiv Detail & Related papers (2020-05-28T18:59:59Z)
- Deep Face Super-Resolution with Iterative Collaboration between Attentive Recovery and Landmark Estimation [92.86123832948809]
We propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks.
In each recurrent step, the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images.
A new attentive fusion module is designed to strengthen the guidance of landmark maps.
arXiv Detail & Related papers (2020-03-29T16:04:48Z)
- SD-GAN: Structural and Denoising GAN reveals facial parts under occlusion [7.284661356980246]
We propose a generative model to reconstruct the missing parts of the face which are under occlusion.
A novel adversarial training algorithm has been designed for a bimodal mutually exclusive Generative Adversarial Network (GAN) model.
Our proposed technique outperforms the competing methods by a considerable margin and also boosts the performance of face recognition.
arXiv Detail & Related papers (2020-02-19T21:12:49Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.