GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering
        - URL: http://arxiv.org/abs/2406.18551v1
 - Date: Thu, 23 May 2024 18:35:26 GMT
 - Title: GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering
 - Authors: Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan
 - Abstract summary: We propose GFFE, with a novel framework and an efficient neural network, to generate new frames in real-time without introducing additional latency.
We analyze the motion of dynamic fragments and different types of disocclusions, and design the corresponding modules.
After filling disocclusions, a light-weight shading correction network is used to correct shading and improve overall quality.
 - Score: 14.496161390319065
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract: Real-time rendering has been embracing ever more demanding effects, such as ray tracing. However, rendering such effects at high resolution and high frame rate remains challenging. Frame extrapolation methods, which, unlike frame interpolation methods such as DLSS 3 and FSR 3, do not introduce additional latency, boost the frame rate by generating future frames from previous frames. Extrapolation is a more challenging task, however, because of the missing information in disocclusion regions, and recent methods also carry a high engine-integration cost because they require G-buffers as input. We propose a G-buffer free frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and the different types of disocclusion, and design corresponding modules of the extrapolation block to handle them. After filling disocclusions, a light-weight shading correction network corrects shading and improves overall quality. GFFE achieves comparable or better results than previous interpolation methods as well as G-buffer-dependent extrapolation methods, with more efficient performance and easier game integration.
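As a rough illustration of the pipeline summarized above (warp the latest frame forward by estimated motion, mark and fill disocclusions, then run a small shading-correction network), the following Python/PyTorch sketch shows one possible extrapolation step. It is a minimal approximation under stated assumptions, not the GFFE implementation: image-space motion is assumed to be supplied externally (e.g. reused from the previous frame pair under a constant-velocity assumption), disocclusions are detected only as out-of-frame samples, and ShadingCorrectionNet is a hypothetical stand-in.

    # Minimal, illustrative sketch of a G-buffer-free extrapolation step
    # (assumptions noted above; not the authors' implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def backward_warp(img, flow):
        """Sample img (N,3,H,W) at positions displaced by flow (N,2,H,W, x/y in pixels).

        Returns the warped image and a validity mask that is 0 where the sampling
        position falls outside the frame (a crude disocclusion proxy).
        """
        _, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().to(img.device)    # (2,H,W)
        pos = base.unsqueeze(0) + flow                                 # (N,2,H,W)
        gx = 2.0 * pos[:, 0] / max(w - 1, 1) - 1.0
        gy = 2.0 * pos[:, 1] / max(h - 1, 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                           # (N,H,W,2)
        warped = F.grid_sample(img, grid, mode="bilinear", align_corners=True)
        valid = (grid.abs() <= 1.0).all(dim=-1).float().unsqueeze(1)   # (N,1,H,W)
        return warped, valid

    class ShadingCorrectionNet(nn.Module):
        """Tiny stand-in for the light-weight shading-correction network."""
        def __init__(self, ch=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 3, 3, padding=1),
            )
        def forward(self, frame, holes):
            # Predict a residual correction conditioned on the hole mask.
            return frame + self.body(torch.cat([frame, holes], dim=1))

    def extrapolate_next_frame(prev1, flow_to_prev, net):
        """prev1: rendered frame at t-1; flow_to_prev: assumed backward motion
        F_{t -> t-1}, e.g. reused from F_{t-1 -> t-2} under constant velocity."""
        warped, valid = backward_warp(prev1, flow_to_prev)
        holes = 1.0 - valid
        filled = warped * valid + prev1 * holes   # crude hole fill from latest frame
        return net(filled, holes)

    if __name__ == "__main__":
        net = ShadingCorrectionNet()
        prev1 = torch.rand(1, 3, 180, 320)
        flow = torch.zeros(1, 2, 180, 320)        # placeholder motion
        nxt = extrapolate_next_frame(prev1, flow, net)
        print(nxt.shape)                          # torch.Size([1, 3, 180, 320])

In practice the motion analysis, hole filling, and correction network would be far more involved; the point here is only the ordering of warping, disocclusion handling, and shading correction.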
 
       
      
        Related papers
        - ReFrame: Layer Caching for Accelerated Inference in Real-Time Rendering [11.260625620980553]
ReFrame explores different caching policies to optimize trade-offs between quality and performance in rendering workloads. We achieve a 1.4x speedup on average with negligible quality loss in three real-time rendering tasks.
arXiv  Detail & Related papers  (2025-06-14T20:17:43Z) - Generative Inbetweening through Frame-wise Conditions-Driven Video Generation [63.43583844248389]
Generative inbetweening aims to generate intermediate frame sequences by utilizing two key frames as input.
We propose a Frame-wise Conditions-driven Video Generation (FCVG) method that significantly enhances the temporal stability of interpolated video frames.
Our FCVG demonstrates the capability to generate temporally stable videos using both linear and non-linear curves.
arXiv  Detail & Related papers  (2024-12-16T13:19:41Z) - PatchEX: High-Quality Real-Time Temporal Supersampling through   Patch-based Parallel Extrapolation [0.4143603294943439]
This paper introduces PatchEX, a novel frame extrapolation method that aims to provide the quality of interpolation at the speed of extrapolation.
PatchEX achieves a 65.29% and 48.46% improvement in PSNR over the latest extrapolation methods ExtraNet and ExtraSS, respectively.
arXiv  Detail & Related papers  (2024-07-05T13:59:05Z) - Ada-VE: Training-Free Consistent Video Editing Using Adaptive Motion   Prior [13.595032265551184]
Video-to-video synthesis poses significant challenges in maintaining character consistency, smooth temporal transitions, and preserving visual quality during fast motion.
We propose an adaptive motion-guided cross-frame attention mechanism that selectively reduces redundant computations.
This enables a greater number of cross-frame attentions over more frames within the same computational budget.
arXiv  Detail & Related papers  (2024-06-07T12:12:25Z) - Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv  Detail & Related papers  (2023-09-07T06:41:15Z) - RIGID: Recurrent GAN Inversion and Editing of Real Face Videos [73.97520691413006]
GAN inversion is indispensable for applying the powerful editability of GAN to real images.
Existing methods invert video frames individually, often leading to undesired inconsistent results over time.
We propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID).
Our framework learns the inherent coherence between input frames in an end-to-end manner.
arXiv  Detail & Related papers  (2023-07-24T08:32:27Z) - ExWarp: Extrapolation and Warping-based Temporal Supersampling for High-frequency Displays [0.7734726150561089]
High-frequency displays are gaining immense popularity because of their increasing use in video games and virtual reality applications.
This paper proposes increasing the frame rate to provide a smooth experience on modern displays by predicting new frames based on past or future frames.
arXiv  Detail & Related papers  (2023-08-11T12:17:24Z) - You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos [56.676761067861236]
Given an untrimmed video, temporal sentence grounding aims to semantically locate a target moment according to a sentence query.
Previous works have achieved respectable success, but they focus only on high-level visual features extracted from decoded frames.
We propose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input.
arXiv  Detail & Related papers  (2023-03-14T12:53:27Z) - Exploring Motion Ambiguity and Alignment for High-Quality Video Frame Interpolation [46.02120172459727]
We propose to relax the requirement of reconstructing an intermediate frame as close to the ground-truth (GT) as possible.
We develop a texture consistency loss (TCL) upon the assumption that the interpolated content should maintain structures similar to its counterparts in the given frames; see the sketch after this list.
arXiv  Detail & Related papers  (2022-03-19T10:37:06Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv  Detail & Related papers  (2021-06-14T10:33:47Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv  Detail & Related papers  (2020-12-15T18:59:30Z) - All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one-shot.
arXiv  Detail & Related papers  (2020-07-23T02:34:39Z) - Deep Space-Time Video Upsampling Networks [47.62807427163614]
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems.
We propose an end-to-end framework for the space-time video upsampling by efficiently merging VSR and FI into a joint framework.
The framework produces better results both quantitatively and qualitatively, while reducing runtime (7x faster) and parameter count (30%) compared to baselines.
arXiv  Detail & Related papers  (2020-04-06T07:04:21Z) 
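For the texture consistency loss (TCL) entry above ("Exploring Motion Ambiguity and Alignment for High-Quality Video Frame Interpolation"), here is a simplified Python/PyTorch sketch of the underlying idea: instead of forcing the interpolated frame to match the ground truth exactly, each local patch is only required to resemble the best-matching nearby patch in one of the two input frames. The patch size, search radius, and L1 distance below are placeholder choices, not the paper's exact formulation.

    # Simplified, assumption-laden illustration of a texture-consistency-style loss.
    import torch
    import torch.nn.functional as F

    def texture_consistency_loss(pred, frame0, frame1, patch=3, radius=2):
        """For each patch of pred, penalize only the distance to the closest patch
        among small local shifts of the two input frames (wrap-around at borders
        from torch.roll is ignored for simplicity)."""
        candidates = []
        for ref in (frame0, frame1):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    shifted = torch.roll(ref, shifts=(dy, dx), dims=(2, 3))
                    # per-pixel L1 difference, pooled over a small patch
                    diff = (pred - shifted).abs().mean(dim=1, keepdim=True)
                    patch_diff = F.avg_pool2d(diff, patch, stride=1, padding=patch // 2)
                    candidates.append(patch_diff)
        dists = torch.cat(candidates, dim=1)      # (N, num_candidates, H, W)
        return dists.min(dim=1).values.mean()     # keep only the best match per pixel

In a training loop, such a term would complement (or relax) the usual reconstruction loss against the ground-truth frame.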
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.