FREGAN : an application of generative adversarial networks in enhancing
the frame rate of videos
- URL: http://arxiv.org/abs/2111.01105v1
- Date: Mon, 1 Nov 2021 17:19:00 GMT
- Title: FREGAN : an application of generative adversarial networks in enhancing
the frame rate of videos
- Authors: Rishik Mishra, Neeraj Gupta, Nitya Shukla
- Abstract summary: FREGAN (Frame Rate Enhancement Generative Adversarial Network) model has been proposed, which predicts future frames of a video sequence based on a sequence of past frames.
We have validated the effectiveness of the proposed model on the standard datasets.
The experimental outcomes illustrate that the proposed model has a Peak signal-to-noise ratio (PSNR) of 34.94 and a Structural Similarity Index (SSIM) of 0.95.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A digital video is a collection of individual frames; during
streaming, each frame is displayed for a fixed time slice. High refresh rates
and high frame rates are in demand across all high-technology applications. A
high refresh rate makes action tracking easier and motion smoother in gaming
applications, and it provides a faster response because less time elapses
between the frames displayed on the screen.
FREGAN (Frame Rate Enhancement Generative Adversarial Network) model has been
proposed, which predicts future frames of a video sequence based on a sequence
of past frames. In this paper, we investigated the GAN model and proposed
FREGAN for the enhancement of frame rate in videos. We have utilized Huber loss
as the loss function in the proposed FREGAN. Huber loss has provided excellent
results in super-resolution, and we attempt to replicate that performance in the
application of frame rate enhancement. We have validated the effectiveness of
the proposed model on the standard datasets (UCF101 and RFree500). The
experimental outcomes illustrate that the proposed model has a Peak
signal-to-noise ratio (PSNR) of 34.94 and a Structural Similarity Index (SSIM)
of 0.95.
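The two quantities named in the abstract, Huber loss (the training objective) and PSNR (the reported quality metric), can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the `delta` threshold and the `max_val` peak value are assumed defaults.

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Huber loss: quadratic for small errors, linear for large ones.
    `delta` (assumed 1.0 here) sets the quadratic-to-linear crossover."""
    err = np.abs(pred - target)
    quadratic = 0.5 * err ** 2
    linear = delta * (err - 0.5 * delta)
    return float(np.mean(np.where(err <= delta, quadratic, linear)))

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for frames scaled to [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

Because it grows only linearly for large errors, Huber loss penalizes outlier pixels less harshly than squared error, which is one plausible reason it transfers well from super-resolution to frame prediction.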
Related papers
- Making Video Quality Assessment Models Sensitive to Frame Rate Distortions [63.749184706461826]
We consider the problem of capturing distortions arising from changes in frame rate as part of Video Quality Assessment (VQA).
We propose a simple fusion framework, whereby temporal features from GREED are combined with existing VQA models.
Our results suggest that employing efficient temporal representations can result in much more robust and accurate VQA models.
arXiv Detail & Related papers (2022-05-21T04:13:57Z)
- OCSampler: Compressing Videos to One Clip with Single-step Sampling [82.0417131211353]
We propose a framework named OCSampler to explore a compact yet effective video representation with one short clip.
Our basic motivation is that the efficient video recognition task lies in processing a whole sequence at once rather than picking up frames sequentially.
arXiv Detail & Related papers (2022-01-12T09:50:38Z)
- High Frame Rate Video Quality Assessment using VMAF and Entropic Differences [50.265638572116984]
The popularity of streaming videos with live, high-action content has led to an increased interest in High Frame Rate (HFR) videos.
In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.
We show through various experiments that the proposed fusion framework results in more efficient features for predicting frame rate dependent video quality.
arXiv Detail & Related papers (2021-09-27T04:08:12Z)
- Prediction-assistant Frame Super-Resolution for Video Streaming [40.60863957681011]
We propose to enhance video quality using lossy frames in two situations.
For the first case, we propose a small yet effective video frame prediction network.
For the second case, we improve the video prediction network to associate current frames as well as previous frames to restore high-quality images.
arXiv Detail & Related papers (2021-03-17T06:05:27Z)
- ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction [63.749184706461826]
We study how perceptual quality is affected by frame rate, and how frame rate and compression combine to affect perceived quality.
We devise an objective VQA model called Space-Time GeneRalized Entropic Difference (GREED) which analyzes the statistics of spatial and temporal band-pass video coefficients.
GREED achieves state-of-the-art performance on the LIVE-YT-HFR Database when compared with existing VQA models.
arXiv Detail & Related papers (2020-10-26T16:54:33Z)
- All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one-shot.
arXiv Detail & Related papers (2020-07-23T02:34:39Z)
- Capturing Video Frame Rate Variations via Entropic Differencing [63.749184706461826]
We propose a novel statistical entropic differencing method based on a Generalized Gaussian Distribution model.
Our proposed model correlates very well with subjective scores in the recently proposed LIVE-YT-HFR database.
arXiv Detail & Related papers (2020-06-19T22:16:52Z)
- Deep Slow Motion Video Reconstruction with Hybrid Imaging System [12.340049542098148]
Current techniques increase the frame rate of standard videos through frame interpolation, assuming linear object motion, which is not valid in challenging cases.
We propose a two-stage deep learning system consisting of alignment and appearance estimation.
We train our model on synthetically generated hybrid videos and show high-quality results on a variety of test scenes.
arXiv Detail & Related papers (2020-02-27T14:18:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.