Deepfakes for Medical Video De-Identification: Privacy Protection and
Diagnostic Information Preservation
- URL: http://arxiv.org/abs/2003.00813v1
- Date: Fri, 7 Feb 2020 22:36:48 GMT
- Title: Deepfakes for Medical Video De-Identification: Privacy Protection and
Diagnostic Information Preservation
- Authors: Bingquan Zhu, Hao Fang, Yanan Sui, Luming Li
- Abstract summary: Face swapping is a reliable de-identification approach: it keeps keypoints almost invariant, significantly outperforming traditional methods.
This study proposes a pipeline for video de-identification and keypoint preservation, clearing up some ethical restrictions for medical data sharing.
- Score: 12.10092482860325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data sharing for medical research has been difficult as open-sourcing
clinical data may violate patient privacy. Traditional methods for face
de-identification wipe out facial information entirely, making it impossible to
analyze facial behavior. Recent advances in whole-body keypoint detection
also rely on facial input to estimate body keypoints. Both facial and body
keypoints are critical in some medical diagnoses, so keypoint invariance
after de-identification is of great importance. Here, we propose a solution
using deepfake technology: the face-swapping technique. While this swapping
method has been criticized for invading privacy and portrait rights, it can
conversely protect privacy in medical video: patients' faces can be swapped
to a suitable target face and become unrecognizable. However, it remains an open
question to what extent swapping-based de-identification affects
the automatic detection of body keypoints. In this study, we apply deepfake
technology to Parkinson's disease examination videos to de-identify subjects,
and quantitatively show that face swapping is a reliable de-identification
approach: it keeps the keypoints almost invariant, significantly outperforming
traditional methods. This study proposes a pipeline for video
de-identification and keypoint preservation, clearing up some ethical
restrictions for medical data sharing. This work could make open-source high
quality medical video datasets more feasible and promote future medical
research that benefits our society.
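The evaluation criterion described above, keypoint invariance after face swapping, can be sketched as a simple displacement metric. This is a hypothetical illustration; the paper's exact metric and detector are not specified here, and the function name and sample coordinates are invented for the example:

```python
import math

def mean_keypoint_shift(orig_kps, deid_kps):
    """Mean Euclidean displacement between corresponding keypoints.

    orig_kps, deid_kps: lists of (x, y) coordinates detected on the
    original and de-identified frames, in the same order.
    """
    assert len(orig_kps) == len(deid_kps), "keypoint sets must align"
    shifts = [math.dist(o, d) for o, d in zip(orig_kps, deid_kps)]
    return sum(shifts) / len(shifts)

# Example: keypoints detected before and after face swapping
# (coordinates are illustrative, not from the paper)
before = [(100.0, 50.0), (120.0, 52.0), (140.0, 55.0)]
after  = [(100.5, 50.0), (120.0, 52.5), (140.0, 55.0)]
print(mean_keypoint_shift(before, after))  # small value => keypoints nearly invariant
```

A low mean shift between the original and the face-swapped video indicates that de-identification preserved the keypoints, which is the property the study quantifies against traditional blurring or masking.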
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition [66.91236298878383]
Depression Recognition (DR) poses a considerable challenge, especially in the context of privacy concerns.
We design a new imaging system to erase the identity information of captured facial images while retaining disease-relevant features.
The transformation is irreversible with respect to identity recovery, yet preserves the essential disease-related characteristics necessary for accurate DR.
arXiv Detail & Related papers (2024-02-29T01:20:29Z)
- Privacy Protection in MRI Scans Using 3D Masked Autoencoders [2.463789441707266]
Data anonymization and de-identification are concerned with ensuring the privacy and confidentiality of individuals' personal information.
We propose CP-MAE, a model that de-identifies the face by remodeling it.
With our method we are able to synthesize high-fidelity scans at resolutions up to $256^3$ -- compared to $128^3$ with previous approaches.
arXiv Detail & Related papers (2023-10-24T12:25:37Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technological standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Practical Digital Disguises: Leveraging Face Swaps to Protect Patient Privacy [1.7249222048792818]
Face swapping for privacy protection has emerged as an active area of research.
Our main contribution is a novel end-to-end face swapping pipeline for recorded videos of standardized assessments of autism symptoms in children.
arXiv Detail & Related papers (2022-04-07T16:34:15Z)
- Conditional De-Identification of 3D Magnetic Resonance Images [29.075173293529947]
We propose a new class of de-identification techniques that, instead of removing facial features, remodels them.
We demonstrate that our approach preserves privacy far better than existing techniques, without compromising downstream medical analyses.
arXiv Detail & Related papers (2021-10-18T15:19:35Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.