ImplicitDeepfake: Plausible Face-Swapping through Implicit Deepfake
Generation using NeRF and Gaussian Splatting
- URL: http://arxiv.org/abs/2402.06390v1
- Date: Fri, 9 Feb 2024 13:11:57 GMT
- Title: ImplicitDeepfake: Plausible Face-Swapping through Implicit Deepfake
Generation using NeRF and Gaussian Splatting
- Authors: Georgii Stanishevskii, Jakub Steczkiewicz, Tomasz Szczepanik,
Sławomir Tadeja, Jacek Tabor, Przemysław Spurek
- Abstract summary: We show how to combine NeRFs and GS to produce plausible 3D deepfake-based avatars.
Deepfakes of sufficient quality can offer a next-generation solution for avatar creation and gaming.
- Score: 10.991274404360194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous emerging deep-learning techniques have had a substantial impact on
computer graphics. Among the most promising breakthroughs are the recent rise
of Neural Radiance Fields (NeRFs) and Gaussian Splatting (GS). NeRFs encode the
object's shape and color in neural network weights using a handful of images
with known camera positions to generate novel views. In contrast, GS provides
accelerated training and inference without a decrease in rendering quality by
encoding the object's characteristics in a collection of Gaussian
distributions. These two techniques have found many use cases in spatial
computing and other domains. On the other hand, the emergence of deepfake
methods has sparked considerable controversy. Such techniques can take the form
of artificial intelligence-generated videos that closely mimic authentic
footage. Using generative models, they can modify facial features, enabling the
creation of altered identities or facial expressions that appear remarkably
similar to a real person. Despite these controversies, deepfakes of sufficient
quality can offer a next-generation solution for avatar creation and gaming. To
that end, we show how to combine all these emerging technologies to obtain a
more plausible outcome. Our ImplicitDeepfake uses the classical deepfake
algorithm to modify all training images separately and then trains NeRF and GS
on the modified faces. Such a relatively simple strategy can produce plausible
3D deepfake-based avatars.
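The two-stage strategy described in the abstract can be outlined in a short sketch. The Python below is a minimal illustration, not the authors' implementation: the helper names swap_face_2d (any off-the-shelf 2D face-swapping model) and train_avatar (any NeRF or Gaussian Splatting trainer) are hypothetical placeholders and do not come from the paper.

```python
# Minimal sketch of the ImplicitDeepfake pipeline described in the abstract.
# swap_face_2d and train_avatar are hypothetical placeholders, not real APIs.
from pathlib import Path
from typing import Callable, List


def implicit_deepfake(
    image_dir: Path,
    target_face: Path,
    out_dir: Path,
    swap_face_2d: Callable[[Path, Path, Path], None],
    train_avatar: Callable[[Path], object],
) -> object:
    """Apply a 2D deepfake to every training view, then fit a 3D avatar.

    swap_face_2d(src_image, target_face, dst_image): any classical 2D
        face-swapping model applied to one image at a time.
    train_avatar(image_dir): any NeRF or Gaussian Splatting trainer that
        consumes the swapped multi-view images with the original camera poses.
    """
    out_dir.mkdir(parents=True, exist_ok=True)

    # Step 1: face-swap each training view independently;
    # the known camera poses are left untouched.
    swapped: List[Path] = []
    for src in sorted(image_dir.glob("*.png")):
        dst = out_dir / src.name
        swap_face_2d(src, target_face, dst)
        swapped.append(dst)

    # Step 2: train NeRF or GS on the swapped views exactly as if they
    # were an ordinary multi-view capture of the target identity.
    return train_avatar(out_dir)
```

As the abstract describes it, the 3D representation is trained only after the per-image 2D swaps, so the NeRF or GS training code itself needs no modification.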
Related papers
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z) - Solutions to Deepfakes: Can Camera Hardware, Cryptography, and Deep Learning Verify Real Images? [51.3344199560726]
It is imperative to establish methods that can separate real data from synthetic data with high confidence.
This document aims to present known strategies in detection and cryptography that can be employed to verify which images are real.
arXiv Detail & Related papers (2024-07-04T22:01:21Z) - Closing the Visual Sim-to-Real Gap with Object-Composable NeRFs [59.12526668734703]
We introduce Composable Object Volume NeRF (COV-NeRF), an object-composable NeRF model that is the centerpiece of a real-to-sim pipeline.
COV-NeRF extracts objects from real images and composes them into new scenes, generating photorealistic renderings and many types of 2D and 3D supervision.
arXiv Detail & Related papers (2024-03-07T00:00:02Z) - Text-image guided Diffusion Model for generating Deepfake celebrity
interactions [50.37578424163951]
Diffusion models have recently demonstrated highly realistic visual content generation.
This paper devises and explores a novel method in that regard.
Our results show that with the devised scheme, it is possible to create fake visual content with alarming realism.
arXiv Detail & Related papers (2023-09-26T08:24:37Z) - 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z) - Copy Motion From One to Another: Fake Motion Video Generation [53.676020148034034]
A compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion.
Current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos.
We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image.
Our method is able to generate realistic target person videos, faithfully copying complex motions from a source person.
arXiv Detail & Related papers (2022-05-03T08:45:22Z) - Detecting High-Quality GAN-Generated Face Images using Neural Networks [23.388645531702597]
We propose a new strategy to differentiate GAN-generated images from authentic images by leveraging spectral band discrepancies.
In particular, we enable the digital preservation of face images using the cross-band co-occurrence matrix and the spatial co-occurrence matrix.
We show that the performance boost is particularly significant and achieves more than 92% in different post-processing environments.
arXiv Detail & Related papers (2022-03-03T13:53:27Z) - DA-FDFtNet: Dual Attention Fake Detection Fine-tuning Network to Detect
Various AI-Generated Fake Images [21.030153777110026]
It has become much easier to create fake images such as "Deepfakes".
Recent research has introduced few-shot learning, which uses a small amount of training data to produce fake images and videos more effectively.
In this work, we propose the Dual Attention Fake Detection Fine-tuning Network (DA-FDFtNet) to detect manipulated fake face images.
arXiv Detail & Related papers (2021-12-22T16:25:24Z) - FakeTransformer: Exposing Face Forgery From Spatial-Temporal
Representation Modeled By Facial Pixel Variations [8.194624568473126]
Face forgery can attack any target, which poses a new threat to personal privacy and property security.
Inspired by the fact that the spatial coherence and temporal consistency of physiological signals are destroyed in the generated content, we attempt to find inconsistent patterns that can distinguish between real videos and synthetic videos.
arXiv Detail & Related papers (2021-11-15T08:44:52Z) - Fighting Deepfake by Exposing the Convolutional Traces on Images [0.0]
Mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations on human face photos.
This kind of media object took the name of Deepfake and raised a new challenge in the multimedia forensics field: the Deepfake detection challenge.
In this paper, a new approach aimed at extracting a Deepfake fingerprint from images is proposed.
arXiv Detail & Related papers (2020-08-07T08:49:23Z) - Defending against GAN-based Deepfake Attacks via Transformation-aware
Adversarial Faces [36.87244915810356]
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
arXiv Detail & Related papers (2020-06-12T18:51:57Z)