Amplifying The Uncanny
- URL: http://arxiv.org/abs/2002.06890v3
- Date: Fri, 13 Nov 2020 13:18:10 GMT
- Title: Amplifying The Uncanny
- Authors: Terence Broad, Frederic Fol Leymarie, Mick Grierson
- Abstract summary: Deep neural networks have become remarkably good at producing realistic deepfakes.
Deepfakes are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples the system deems realistic.
This paper explores the aesthetic outcome of inverting this process, instead optimising the system to generate images that it predicts as being fake.
- Score: 0.2062593640149624
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep neural networks have become remarkably good at producing realistic
deepfakes, images of people that (to the untrained eye) are indistinguishable
from real images. Deepfakes are produced by algorithms that learn to
distinguish between real and fake images and are optimised to generate samples
that the system deems realistic. This paper, and the resulting series of
artworks Being Foiled explore the aesthetic outcome of inverting this process,
instead optimising the system to generate images that it predicts as being
fake. This maximises the unlikelihood of the data and in turn, amplifies the
uncanny nature of these machine hallucinations.
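The inversion the abstract describes amounts to flipping the target label in the generator's objective: instead of pushing discriminator outputs toward "real", the generator is optimised toward "fake". The sketch below is a minimal numpy illustration of that idea under an assumed binary cross-entropy loss; `generator_loss` and its `invert` flag are hypothetical names, not the authors' implementation.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy over discriminator outputs in (0, 1)
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def generator_loss(d_fake, invert=False):
    # Standard GAN: push D(G(z)) toward the "real" label (1).
    # Inverted objective (the paper's idea): push D(G(z)) toward "fake" (0),
    # maximising the unlikelihood of the generated samples.
    target = 0.0 if invert else 1.0
    return bce(d_fake, np.full_like(d_fake, target))

# The discriminator currently rates these samples as quite realistic,
# so the inverted loss is large and gradients push away from realism.
d_out = np.array([0.9, 0.8])
```

Under this inverted objective, gradient descent moves the generator toward images the discriminator scores as fake, which the paper connects to the amplified uncanny quality of the resulting outputs.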
Related papers
- Solutions to Deepfakes: Can Camera Hardware, Cryptography, and Deep Learning Verify Real Images? [51.3344199560726]
It is imperative to establish methods that can separate real data from synthetic data with high confidence.
This document presents known strategies in detection and cryptography that can be employed to verify which images are real.
arXiv Detail & Related papers (2024-07-04T22:01:21Z) - Importance of realism in procedurally-generated synthetic images for deep learning: case studies in maize and canola [1.7532822703595772]
Procedural models of plants can be created to produce visually realistic simulations.
These synthetic images can either augment or completely replace real images in training neural networks for phenotyping tasks.
In this paper, we systematically vary amounts of real and synthetic images used for training in both maize and canola.
arXiv Detail & Related papers (2024-04-08T01:08:41Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
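One way to read the subspace idea above: fit a low-dimensional model of real-image features and flag any feature that falls far outside it, so detection depends only on real images. The sketch below uses plain PCA as the subspace model, which is an assumption for illustration; the paper's actual mapping is learned, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# "Real" features lie (by construction) in a 2-D plane of a 5-D feature space.
real_feats = rng.normal(size=(200, 2)) @ np.array([[1., 0., 0., 0., 0.],
                                                   [0., 1., 0., 0., 0.]])

def fit_real_subspace(feats, k=2):
    # PCA: the dense low-dimensional subspace that real-image features occupy
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    return mean, vt[:k]

def residual_distance(feat, mean, basis):
    # Distance from the real-image subspace; a large value suggests
    # the feature comes from a generated image, whatever its generator.
    centered = feat - mean
    projection = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - projection))

mean, basis = fit_real_subspace(real_feats)
fake_feat = np.array([0., 0., 3., 0., 0.])  # lies off the real-image plane
```

The appeal of this framing is that the detector never needs samples from any particular generative model: anything projected outside the real-image subspace is flagged.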
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
GAR learns to model complicated real-world images instead of relying on graphics rules, and is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z) - Identifying Invariant Texture Violation for Robust Deepfake Detection [17.306386179823576]
We propose the Invariant Texture Learning framework, which requires access only to published datasets of low visual quality.
Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person.
arXiv Detail & Related papers (2020-12-19T03:02:15Z) - Fighting Deepfake by Exposing the Convolutional Traces on Images [0.0]
Mobile apps like FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to produce extreme transformations on human face photos.
Such media objects have come to be known as Deepfakes, raising a new challenge in the multimedia forensics field: the Deepfake detection challenge.
In this paper, a new approach aimed to extract a Deepfake fingerprint from images is proposed.
arXiv Detail & Related papers (2020-08-07T08:49:23Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z) - Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
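The consistency assumption behind this weakly supervised approach can be expressed as a simple loss penalising per-frame albedo predictions for deviating from their per-video mean: frames of the same face video should share one albedo. The function below is an illustrative numpy sketch of that assumption, not the paper's actual training loss.

```python
import numpy as np

def albedo_consistency_loss(albedo_frames):
    # albedo_frames: (num_frames, H, W) albedo predictions for one face video.
    # Frames of the same face should agree on albedo, so penalise the
    # mean squared deviation from the per-video mean prediction.
    mean_albedo = albedo_frames.mean(axis=0)
    return float(((albedo_frames - mean_albedo) ** 2).mean())

# Identical per-frame predictions incur zero loss; disagreement is penalised.
frames_same = np.ones((3, 2, 2))
frames_diff = np.stack([np.zeros((2, 2)), np.ones((2, 2)), np.ones((2, 2))])
```

An analogous term over predicted normals, under the same intra-video consistency assumption, would complete the weak supervision signal the summary describes.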
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.