Scaling Neural Face Synthesis to High FPS and Low Latency by Neural
Caching
- URL: http://arxiv.org/abs/2211.05773v1
- Date: Thu, 10 Nov 2022 18:58:00 GMT
- Title: Scaling Neural Face Synthesis to High FPS and Low Latency by Neural
Caching
- Authors: Frank Yu, Sid Fels, Helge Rhodin
- Abstract summary: Recent neural rendering approaches greatly improve image quality, reaching near photorealism.
The underlying neural networks have high runtime, precluding telepresence and virtual reality applications that require high resolution at low latency.
We break this dependency by caching information from the previous frame to speed up the processing of the current one with an implicit warp.
We test the approach on view-dependent rendering of 3D portrait avatars, as needed for telepresence, on established benchmark sequences.
- Score: 12.362614824541824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent neural rendering approaches greatly improve image quality, reaching
near photorealism. However, the underlying neural networks have high runtime,
precluding telepresence and virtual reality applications that require high
resolution at low latency. The sequential dependency of layers in deep networks
makes their optimization difficult. We break this dependency by caching
information from the previous frame to speed up the processing of the current
one with an implicit warp. The warping with a shallow network reduces latency
and the caching operations can further be parallelized to improve the frame
rate. In contrast to existing temporal neural networks, ours is tailored for
the task of rendering novel views of faces by conditioning on the change of the
underlying surface mesh. We test the approach on view-dependent rendering of 3D
portrait avatars, as needed for telepresence, on established benchmark
sequences. Warping reduces latency by 70$\%$ (from 49.4ms to 14.9ms on
commodity GPUs) and scales frame rates accordingly over multiple GPUs while
reducing image quality by only 1$\%$, making it suitable as part of end-to-end
view-dependent 3D teleconferencing applications. Our project page can be found
at: https://yu-frank.github.io/lowlatency/.
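Below is a minimal sketch, in PyTorch, of the caching-and-warping idea the abstract describes: a slow, deep generator produces a feature cache from one frame, and a shallow network implicitly warps that cache to the next frame, conditioned on the change of the underlying surface mesh. All module names, channel counts, and the conditioning format are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: hypothetical modules illustrating neural caching + implicit warp.
import torch
import torch.nn as nn

class DeepGenerator(nn.Module):
    """Stand-in for the slow, full neural renderer (its output is cached)."""
    def __init__(self, cond_ch=6, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, feat_ch, 3, padding=1),
        )
    def forward(self, cond):      # cond: rendered mesh/conditioning maps at t-1
        return self.net(cond)     # cached feature map

class ShallowWarp(nn.Module):
    """Cheap network that implicitly warps cached features to the current frame,
    conditioned on how the underlying surface mesh has changed."""
    def __init__(self, feat_ch=64, cond_ch=6, out_ch=3):
        super().__init__()
        self.warp = nn.Sequential(
            nn.Conv2d(feat_ch + 2 * cond_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, cached_feat, cond_prev, cond_cur):
        x = torch.cat([cached_feat, cond_prev, cond_cur - cond_prev], dim=1)
        return self.warp(x)       # low-latency image for the current frame

# Usage sketch: the deep generator refreshes the cache (and could run in
# parallel on a second GPU), while the shallow warp serves every frame.
deep, warp = DeepGenerator(), ShallowWarp()
cond_prev = torch.randn(1, 6, 256, 256)   # e.g. mesh-derived maps at frame t-1
cond_cur = torch.randn(1, 6, 256, 256)    # same maps at frame t
with torch.no_grad():
    cache = deep(cond_prev)                       # slow path, amortized
    frame = warp(cache, cond_prev, cond_cur)      # fast path, per frame
```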
Related papers
- Low Latency Point Cloud Rendering with Learned Splatting [24.553459204476432]
High-quality rendering of point clouds is challenging because of the point sparsity and irregularity.
Existing rendering solutions lack in either quality or speed.
We present a framework that unlocks interactive, free-viewing and high-fidelity point cloud rendering.
arXiv Detail & Related papers (2024-09-24T23:26:07Z)
- NARVis: Neural Accelerated Rendering for Real-Time Scientific Point Cloud Visualization [15.7907024889244]
This work introduces a novel renderer, the Neural Accelerated Renderer (NAR).
NAR uses the neural deferred rendering framework to visualize large-scale scientific point cloud data.
We achieve competitive frame rates of $>$ 126 fps for interactive rendering of 350M points.
arXiv Detail & Related papers (2024-07-26T21:21:13Z)
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a $\textit{dynamic neural point cloud}$, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z)
- Real-Time Neural Light Field on Mobile Devices [54.44982318758239]
We introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size.
Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes.
arXiv Detail & Related papers (2022-12-15T18:58:56Z)
- SteerNeRF: Accelerating NeRF Rendering via Smooth Viewpoint Trajectory [20.798605661240355]
We propose a new way to speed up rendering using 2D neural networks.
A low-resolution feature map is first rendered by volume rendering, then a lightweight 2D neural network is applied to generate the image at the target resolution.
We show that the proposed method can achieve competitive rendering quality while reducing rendering time with little memory overhead, enabling 30 FPS at 1080p image resolution (a minimal code sketch of this two-stage idea appears after the related-papers list below).
arXiv Detail & Related papers (2022-12-15T00:02:36Z)
- Real-time Neural Radiance Caching for Path Tracing [67.46991813306708]
We present a real-time neural radiance caching method for path-traced global illumination.
Our system is designed to handle fully dynamic scenes, and makes no assumptions about the lighting, geometry, and materials.
We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios.
arXiv Detail & Related papers (2021-06-23T13:09:58Z)
- Neural Lumigraph Rendering [33.676795978166375]
State-of-the-art (SOTA) neural volume rendering approaches are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions.
We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images.
Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information.
arXiv Detail & Related papers (2021-03-22T03:46:05Z)
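As referenced in the SteerNeRF entry above, here is a minimal sketch of the two-stage idea it summarizes: volume-render a low-resolution feature map, then apply a lightweight 2D network to produce the full-resolution image. The module name, channel counts, and the stand-in for the volume renderer are assumptions for illustration, not the paper's code.

```python
# Sketch only: hypothetical 2D upsampler standing in for SteerNeRF's second stage.
import torch
import torch.nn as nn

class LightweightUpsampler(nn.Module):
    """2D CNN that turns a low-resolution feature map into a full-resolution image."""
    def __init__(self, feat_ch=16, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, feat_lowres):
        return self.net(feat_lowres)

# Stand-in for the volume-rendering stage: in the real pipeline this would be a
# NeRF-style renderer that outputs per-pixel features instead of final colors.
feat_lowres = torch.randn(1, 16, 270, 480)    # quarter-resolution feature map
image = LightweightUpsampler()(feat_lowres)   # -> (1, 3, 1080, 1920)
```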
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.