3D Holistic OR Anonymization
- URL: http://arxiv.org/abs/2405.05261v1
- Date: Mon, 18 Mar 2024 23:32:02 GMT
- Title: 3D Holistic OR Anonymization
- Authors: Tony Danjun Wang
- Abstract summary: We propose a novel method to automatically anonymize multi-view RGB-D video recordings of operating rooms (OR).
Our anonymization method preserves the original data distribution by replacing the faces in each image with different faces.
In contrast to established anonymization methods, our approach localizes faces in 3D space first rather than in 2D space.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel method that leverages 3D information to automatically anonymize multi-view RGB-D video recordings of operating rooms (OR). Our anonymization method preserves the original data distribution by replacing the faces in each image with different faces so that the data remains suitable for further downstream tasks. In contrast to established anonymization methods, our approach localizes faces in 3D space first rather than in 2D space. Each face is then anonymized by reprojecting a different face back into each camera view, ultimately replacing the original faces in the resulting images. Furthermore, we introduce a multi-view RGB-D dataset, captured during a real operation of experienced surgeons performing laparoscopic surgery on an animal object (swine), which encapsulates typical characteristics of ORs. Finally, we present experimental results evaluated on that dataset, showing that leveraging 3D data can achieve better face localization in OR images and generate more realistic faces than the current state-of-the-art. There has been, to our knowledge, no prior work that addresses the anonymization of multi-view OR recordings, nor 2D face localization that leverages 3D information.
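The core idea in the abstract — localize each face once in 3D, then reproject it into every camera view for replacement — can be illustrated with a minimal pinhole-projection sketch. This is not the authors' implementation; the camera parameters and 3D face location below are hypothetical, and the sketch only shows where a replacement face would land in each view.

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D world point X into a camera with intrinsics K and
    extrinsics (R, t) using the standard pinhole model."""
    Xc = R @ X + t            # world -> camera coordinates
    uvw = K @ Xc              # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]   # normalize to pixel coordinates (u, v)

# A single 3D face location shared by all views (hypothetical coordinates).
face_center = np.array([0.2, 1.6, 3.0])

# Two hypothetical calibrated views: same intrinsics, different translations.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
views = [
    (K, np.eye(3), np.array([0.0, 0.0, 0.0])),
    (K, np.eye(3), np.array([-0.5, 0.0, 0.0])),
]

for K_i, R_i, t_i in views:
    u, v = project_point(K_i, R_i, t_i, face_center)
    # In the full method, a different (replacement) face would be rendered
    # and pasted at this pixel location in each view, overwriting the
    # original face while keeping the views geometrically consistent.
    print(f"face projects to pixel ({u:.1f}, {v:.1f})")
```

Because the face is localized once in 3D, the replacement is consistent across all camera views by construction, which is what distinguishes this approach from per-image 2D anonymization.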
Related papers
- FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model.
Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
Being the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z) - DisguisOR: Holistic Face Anonymization for the Operating Room [43.68679886516574]
Existing automated 2D anonymization methods underperform in operating rooms.
We propose to anonymize multi-view OR recordings using 3D data from multiple camera streams.
arXiv Detail & Related papers (2023-07-26T15:10:54Z) - Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z) - AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z) - Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z) - Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies [0.40611352512781856]
We use existing datasets to generate varying 3D face poses from a single frontal face pose.
The refined outputs have better contrast, a lower noise level, and better exposure of dark regions.
In the next phase of the proposed study, the refined images are used to create 3D facial geometry structures.
arXiv Detail & Related papers (2020-05-05T02:55:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.