Neural Alignment for Face De-pixelization
- URL: http://arxiv.org/abs/2009.13856v1
- Date: Tue, 29 Sep 2020 08:29:15 GMT
- Title: Neural Alignment for Face De-pixelization
- Authors: Maayan Shuvi, Noa Fish, Kfir Aberman, Ariel Shamir, Daniel Cohen-Or
- Abstract summary: We present a simple method to reconstruct a high-resolution video from a face video in which a person's identity is obscured by pixelization.
Our experiments show that a fairly good approximation of the original video can be reconstructed, in a way that compromises anonymity.
- Score: 46.57077539961045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a simple method to reconstruct a high-resolution video from a
face video in which a person's identity is obscured by pixelization. This
concealment method is popular because the viewer can still perceive a human
face figure and the overall head motion. However, we show in our experiments
that a fairly good approximation of the original video can be reconstructed in
a way that compromises anonymity. Our system exploits the simultaneous
similarity and small disparity between close-by video frames depicting a human
face, and employs a spatial transformation component that learns the alignment
between the pixelated frames. Each frame, supported by its aligned surrounding
frames, is first encoded, then decoded to a higher resolution. Reconstruction
and perceptual losses promote adherence to the ground-truth, and an adversarial
loss assists in maintaining domain faithfulness. There is no need for explicit
temporal coherency loss as it is maintained implicitly by the alignment of
neighboring frames and reconstruction. Although simple, our framework
synthesizes high-quality face reconstructions, demonstrating that given the
statistical prior of a human face, multiple aligned pixelated frames contain
sufficient information to reconstruct a high-quality approximation of the
original signal.
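The paper does not include code here; below is a minimal PyTorch sketch of the pipeline as the abstract describes it, assuming an affine spatial-transformer alignment module and a sub-pixel upsampling decoder. All module names, channel widths, and the neighbor count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the described de-pixelization pipeline (illustrative,
# not the authors' code). A spatial-transformer-style module predicts an
# affine warp aligning each neighboring pixelated frame to the center
# frame; the aligned stack is encoded, then decoded to higher resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAlign(nn.Module):
    """Predicts a 2x3 affine warp aligning a neighbor frame to the center frame."""
    def __init__(self, ch=3):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(2 * ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 6),
        )
        # Initialize the predicted warp to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, neighbor, center):
        theta = self.loc(torch.cat([neighbor, center], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, neighbor.shape, align_corners=False)
        return F.grid_sample(neighbor, grid, align_corners=False)

class DePixelizer(nn.Module):
    def __init__(self, n_neighbors=4, ch=3, scale=4):
        super().__init__()
        self.align = SpatialAlign(ch)
        self.encoder = nn.Sequential(
            nn.Conv2d((n_neighbors + 1) * ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Decode to a higher resolution with sub-pixel upsampling.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, ch * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, center, neighbors):
        aligned = [self.align(n, center) for n in neighbors]
        feats = self.encoder(torch.cat([center] + aligned, dim=1))
        return self.decoder(feats)

# Training would combine L1/L2 reconstruction and perceptual losses against
# the ground-truth frames, plus an adversarial loss from a discriminator
# (omitted here) to keep outputs on the face-image manifold.
```

In this reading, no explicit temporal-coherency loss is needed because each output frame is reconstructed from the same aligned neighborhood that also supports its adjacent frames.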
Related papers
- Kalman-Inspired Feature Propagation for Video Face Super-Resolution [78.84881180336744]
We introduce a novel framework to maintain a stable face prior over time.
Kalman filtering principles give our method the recurrent ability to use information from previously restored frames to guide and regulate the restoration of the current frame.
Experiments demonstrate the effectiveness of our method in capturing facial details consistently across video frames.
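As a rough illustration of the Kalman intuition (not the paper's actual feature-space formulation), a per-pixel scalar Kalman filter blends a prediction carried over from previous frames with the current observation via a gain:

```python
# Toy sketch of the Kalman intuition behind recurrent face restoration:
# blend an estimate propagated from previously processed frames with the
# current observation, weighted by a gain. Purely illustrative; the paper
# propagates learned features, not raw pixels.
import numpy as np

def kalman_fuse(frames, process_var=1e-2, obs_var=1e-1):
    """Temporally smooth a sequence with a per-pixel scalar Kalman filter."""
    est = frames[0].astype(np.float64)
    var = np.full(frames[0].shape, obs_var)
    out = [est.copy()]
    for obs in frames[1:]:
        var = var + process_var                 # predict: uncertainty grows
        gain = var / (var + obs_var)            # Kalman gain
        est = est + gain * (obs - est)          # update with new observation
        var = (1.0 - gain) * var
        out.append(est.copy())
    return out
```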
arXiv Detail & Related papers (2024-08-09T17:57:12Z)
- Beyond Alignment: Blind Video Face Restoration via Parsing-Guided Temporal-Coherent Transformer [21.323165895036354]
We propose the first blind video face restoration approach, based on a novel parsing-guided temporal-coherent transformer (PGTFormer), that requires no pre-alignment.
Specifically, we pre-train a temporal-spatial vector quantized auto-encoder on high-quality video face datasets to extract expressive context-rich priors.
This strategy reduces artifacts and mitigates jitter caused by cumulative errors from face pre-alignment.
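For intuition, the vector-quantized prior can be pictured as snapping encoder features to their nearest entries in a codebook learned from high-quality faces, so the decoder only ever sees "clean" codes. A minimal sketch (shapes and sizes are invented for the example):

```python
# Minimal sketch of the vector-quantization step behind a VQ auto-encoder
# prior: each encoder feature vector is replaced by its nearest codebook
# entry. Illustrative only; codebook size and feature dim are made up.
import torch

def vq_lookup(features, codebook):
    """features: (N, D); codebook: (K, D) -> quantized (N, D), indices (N,)."""
    d = torch.cdist(features, codebook)   # pairwise distances, (N, K)
    idx = d.argmin(dim=1)                 # nearest code per feature
    return codebook[idx], idx

codebook = torch.randn(512, 64)           # K=512 codes of dimension 64
feats = torch.randn(100, 64)              # hypothetical encoder outputs
quantized, idx = vq_lookup(feats, codebook)
```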
arXiv Detail & Related papers (2024-04-21T12:33:07Z)
- Parametric Reshaping of Portraits in Videos [24.428095383264456]
We present a robust and easy-to-use parametric method to reshape the portrait in a video to produce smooth retouched results.
Given an input portrait video, our method consists of two main stages: stabilized face reconstruction, and continuous video reshaping.
In the second stage, we first reshape the reconstructed 3D face using a parametric reshaping model reflecting the weight change of the face, and then utilize the reshaped 3D face to guide the warping of video frames.
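The frame-warping step in such a pipeline amounts to resampling each frame along a dense flow field derived from the projected 3D displacements. A minimal sketch of the warping alone, with the flow assumed given (the parametric reshaping model itself is omitted, and all names here are hypothetical):

```python
# Illustrative sketch of guided frame warping: resample a frame along a
# dense per-pixel flow field, e.g. one derived from the displacement
# between the original and reshaped 3D faces projected into image space.
import torch
import torch.nn.functional as F

def warp_with_flow(frame, flow):
    """frame: (1, 3, H, W); flow: (1, H, W, 2) pixel offsets -> warped frame."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float().unsqueeze(0)  # (1, H, W, 2)
    tgt = base + flow
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    tgt[..., 0] = 2 * tgt[..., 0] / (w - 1) - 1
    tgt[..., 1] = 2 * tgt[..., 1] / (h - 1) - 1
    return F.grid_sample(frame, tgt, align_corners=True)
```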
arXiv Detail & Related papers (2022-05-05T09:55:16Z)
- SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video [48.23424267130425]
SelfRecon recovers space-time coherent geometries from a monocular video of a self-rotating human.
Explicit methods require a predefined template mesh for a given sequence, yet such a template is hard to acquire for a specific subject.
Implicit methods support arbitrary topology and achieve high quality thanks to their continuous geometric representation.
arXiv Detail & Related papers (2022-01-30T11:49:29Z)
- UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing [78.26925404508994]
We propose a unified temporally consistent facial video editing framework termed UniFaceGAN.
Our framework is designed to handle face swapping and face reenactment simultaneously.
Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
arXiv Detail & Related papers (2021-08-12T10:35:22Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on hand-crafted graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction datasets.
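The inversion step can be pictured as freezing the learned renderer and optimizing only the face parameters by gradient descent until the rendered image matches the target photo. A sketch, where `renderer` and the parameter count are hypothetical:

```python
# Sketch of renderer inversion: with a trained, frozen neural renderer,
# recover latent face/pose parameters for a target photo by gradient
# descent on the parameters alone. Everything here is illustrative.
import torch

def invert_renderer(renderer, target, n_params=257, steps=500, lr=1e-2):
    """Optimize latent parameters so renderer(params) matches target (1,3,H,W)."""
    params = torch.zeros(1, n_params, requires_grad=True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = renderer(params)            # differentiable rendering
        loss = torch.nn.functional.l1_loss(rendered, target)
        loss.backward()
        opt.step()                             # only params are updated;
    return params.detach()                     # the renderer stays fixed
```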
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Consistent Video Depth Estimation [57.712779457632024]
We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video.
We leverage a conventional structure-from-motion reconstruction to establish geometric constraints on pixels in the video.
Our algorithm is able to handle challenging hand-held captured input videos with a moderate degree of dynamic motion.
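The geometric constraint boils down to standard multi-view geometry: a pixel back-projected with its estimated depth and reprojected into another frame via the structure-from-motion pose should land on its correspondence. A minimal sketch of that reprojection error (intrinsics K and the relative pose R, t are assumed given by SfM):

```python
# Sketch of a geometric consistency check: back-project a pixel with its
# estimated depth, move it into the second camera's coordinates, and
# reproject; the distance to the matched pixel is the error to penalize.
import numpy as np

def reprojection_error(px, depth, K, R, t, px_other):
    """px, px_other: (u, v) pixel coords; depth: depth of px in frame 1."""
    uv1 = np.array([px[0], px[1], 1.0])
    X = depth * (np.linalg.inv(K) @ uv1)   # back-project into 3D (frame-1 cam)
    X2 = R @ X + t                         # move into frame-2 camera coords
    uv2 = K @ X2
    uv2 = uv2[:2] / uv2[2]                 # perspective divide
    return np.linalg.norm(uv2 - np.asarray(px_other, dtype=float))
```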
arXiv Detail & Related papers (2020-04-30T17:59:26Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
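One way to picture an adaptive structural loss (a hedged sketch, not the paper's exact formulation) is a per-pixel loss re-weighted by a face parsing map so that structurally important regions dominate:

```python
# Illustrative sketch of a semantics-weighted restoration loss: a face
# parsing map re-weights the per-pixel error so regions such as eyes,
# nose, and mouth count more. The class ids and weights are made up.
import torch

def semantic_weighted_l1(pred, target, parsing, weights):
    """pred/target: (N, 3, H, W); parsing: (N, H, W) integer labels."""
    w = weights[parsing].unsqueeze(1)      # (N, 1, H, W) per-pixel weight
    return (w * (pred - target).abs()).mean()

weights = torch.tensor([1.0, 2.0, 4.0, 4.0, 4.0])  # bg, skin, eyes, nose, mouth
pred = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
parsing = torch.randint(0, 5, (2, 64, 64))
loss = semantic_weighted_l1(pred, target, parsing, weights)
```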
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.