ExWarp: Extrapolation and Warping-based Temporal Supersampling for
High-frequency Displays
- URL: http://arxiv.org/abs/2307.12607v1
- Date: Mon, 24 Jul 2023 08:32:27 GMT
- Title: ExWarp: Extrapolation and Warping-based Temporal Supersampling for
High-frequency Displays
- Authors: Akanksha Dixit, Yashashwee Chakrabarty, Smruti R. Sarangi
- Abstract summary: High-frequency displays are gaining immense popularity because of their increasing use in video games and virtual reality applications.
This paper increases the frame rate to provide a smooth experience on modern displays by predicting new frames from past frames, using reinforcement learning to choose between DNN-based extrapolation and faster warping.
- Score: 0.7734726150561089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-frequency displays are gaining immense popularity because of their
increasing use in video games and virtual reality applications. However, the
issue is that the underlying GPUs cannot continuously generate frames at this
high rate -- this results in a less smooth and responsive experience.
Furthermore, if the frame rate is not synchronized with the refresh rate, the
user may experience screen tearing and stuttering. Previous works propose
increasing the frame rate to provide a smooth experience on modern displays by
predicting new frames based on past or future frames. Interpolation and
extrapolation are two widely used algorithms that predict new frames.
Interpolation requires waiting for the future frame to make a prediction, which
adds additional latency. On the other hand, extrapolation provides a better
quality of experience because it relies solely on past frames -- it does not
incur any additional latency. The simplest method to extrapolate a frame is to
warp the previous frame using motion vectors; however, the warped frame may
contain improperly rendered visual artifacts due to dynamic objects -- this
makes it very challenging to design such a scheme. Past work has used DNNs to
get good accuracy, however, these approaches are slow. This paper proposes
Exwarp -- an approach based on reinforcement learning (RL) to intelligently
choose between the slower DNN-based extrapolation and faster warping-based
methods to increase the frame rate by 4x with an almost negligible reduction in
the perceived image quality.
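The warping baseline described in the abstract can be made concrete with a short sketch. The code below is illustrative only, not taken from the paper: it performs a simple nearest-neighbour backward warp of the previous frame along per-pixel motion vectors, the kind a game engine already produces. The dynamic-object artifacts mentioned above arise precisely here: newly revealed (disoccluded) regions have no valid source pixel to gather from.

```python
import numpy as np

def warp_frame(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Extrapolate one frame by backward-warping the previous frame.

    prev_frame: (H, W, 3) array, the last fully rendered frame.
    motion:     (H, W, 2) array of per-pixel (dx, dy) vectors, in
                pixels, describing how content moved since that frame.
    """
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    # For each output pixel, gather the source pixel it moved from;
    # clamping hides out-of-frame motion but cannot fill disocclusions.
    src_x = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]
```

This is why warping alone is fast but fragile: wherever motion vectors disagree (object boundaries, shadows, transparency), the gathered pixels are stale, which is exactly the case the slower DNN-based extrapolator is meant to handle.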
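The per-frame decision that ExWarp's RL agent makes can likewise be sketched as a tiny value-based policy. Everything below is an assumption for illustration -- the paper does not publish its agent, and the two-bucket state, the epsilon-greedy rule, and the reward shaping are invented here -- but it shows the shape of the technique: learn, per scene condition, whether cheap warping suffices or the slower DNN is worth its cost.

```python
import random
from collections import defaultdict

ACTIONS = ("warp", "dnn")  # fast warp vs. slower DNN-based extrapolation

class FrameScheduler:
    """Toy epsilon-greedy chooser between extrapolation back-ends."""

    def __init__(self, eps: float = 0.1, lr: float = 0.2):
        self.q = defaultdict(float)  # Q-value per (state, action) pair
        self.eps, self.lr = eps, lr

    def state(self, mean_motion: float) -> str:
        # Hypothetical state: bucket scene dynamics coarsely. Calm
        # scenes warp well; fast motion is where a DNN pays off.
        return "static" if mean_motion < 1.0 else "dynamic"

    def choose(self, s: str) -> str:
        if random.random() < self.eps:  # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(s, a)])

    def update(self, s: str, a: str, reward: float) -> None:
        # A plausible reward blends predicted-frame quality (e.g.,
        # SSIM against a reference) with a penalty for missed deadlines.
        self.q[(s, a)] += self.lr * (reward - self.q[(s, a)])
```

Per frame, the runtime would call choose(state(motion)), generate the frame via the chosen back-end, and feed a quality/latency reward back through update().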
Related papers
- PatchEX: High-Quality Real-Time Temporal Supersampling through Patch-based Parallel Extrapolation [0.4143603294943439]
This paper introduces PatchEX, a novel frame extrapolation method that aims to provide the quality of interpolation at the speed of extrapolation.
PatchEX achieves a 65.29% and 48.46% improvement in PSNR over the latest extrapolation methods ExtraNet and ExtraSS, respectively.
arXiv Detail & Related papers (2024-07-05T13:59:05Z)
- GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering [14.496161390319065]
We propose GFFE, with a novel framework and an efficient neural network, to generate new frames in real-time without introducing additional latency.
We analyze the motion of dynamic fragments and different types of disocclusions, and design the corresponding modules.
After filling disocclusions, a light-weight shading correction network is used to correct shading and improve overall quality.
arXiv Detail & Related papers (2024-05-23T18:35:26Z)
- You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos [56.676761067861236]
Given an untrimmed video, temporal sentence grounding aims to locate a target moment semantically according to a sentence query.
Previous works have achieved decent success, but they focus only on high-level visual features extracted from decoded frames.
We propose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input.
arXiv Detail & Related papers (2023-03-14T12:53:27Z)
- Scaling Neural Face Synthesis to High FPS and Low Latency by Neural Caching [12.362614824541824]
Recent neural rendering approaches greatly improve image quality, reaching near photorealism.
The underlying neural networks have high runtime, precluding telepresence and virtual reality applications that require high resolution at low latency.
We break this dependency by caching information from the previous frame to speed up the processing of the current one with an implicit warp.
We test the approach on view-dependent rendering of 3D portrait avatars, as needed for telepresence, on established benchmark sequences.
arXiv Detail & Related papers (2022-11-10T18:58:00Z)
- TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI).
Our method outperforms other state-of-the-art methods in four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary from different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z)
- FREGAN: an application of generative adversarial networks in enhancing the frame rate of videos [1.1688030627514534]
The FREGAN (Frame Rate Enhancement Generative Adversarial Network) model is proposed to predict future frames of a video sequence based on a sequence of past frames.
We have validated the effectiveness of the proposed model on the standard datasets.
The experimental outcomes illustrate that the proposed model achieves a peak signal-to-noise ratio (PSNR) of 34.94 and a structural similarity index (SSIM) of 0.95 (a sketch of the PSNR computation follows this list).
arXiv Detail & Related papers (2021-11-01T17:19:00Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Prediction-assistant Frame Super-Resolution for Video Streaming [40.60863957681011]
We propose to enhance video quality using lossy frames in two situations.
For the first case, we propose a small yet effective video frame prediction network.
For the second case, we improve the video prediction network to associate current frames as well as previous frames to restore high-quality images.
arXiv Detail & Related papers (2021-03-17T06:05:27Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one-shot.
arXiv Detail & Related papers (2020-07-23T02:34:39Z)
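As referenced in the FREGAN entry above, PSNR is the standard fidelity metric across these papers. A minimal sketch of its definition follows; this is the generic textbook formula, not code from any of the listed works, and the 8-bit peak value of 255 is an assumption.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio: 10 * log10(peak^2 / MSE), in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

FREGAN's reported PSNR of 34.94 thus corresponds to a mean-squared error of about 255^2 / 10^3.494, roughly 21, on 8-bit frames; SSIM, by contrast, compares local windowed statistics (means, variances, covariance) rather than raw pixel error.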
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.