HSTR-Net: High Spatio-Temporal Resolution Video Generation For Wide Area
Surveillance
- URL: http://arxiv.org/abs/2204.04435v1
- Date: Sat, 9 Apr 2022 09:23:58 GMT
- Title: HSTR-Net: High Spatio-Temporal Resolution Video Generation For Wide Area
Surveillance
- Authors: H. Umut Suluhan, Hasan F. Ates, Bahadir K. Gunturk
- Abstract summary: This paper presents the use of multiple video feeds for the generation of HSTR video.
The main purpose is to create an HSTR video from the fusion of HSLF and LSHF videos.
- Score: 4.125187280299246
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Wide area surveillance has many applications and tracking of objects under
observation is an important task, which often needs high spatio-temporal
resolution (HSTR) video for better precision. This paper presents the use of
multiple video feeds for the generation of HSTR video as an extension of
reference-based super-resolution (RefSR). One feed captures video at high
spatial resolution with low frame rate (HSLF) while the other captures low
spatial resolution and high frame rate (LSHF) video simultaneously for the same
scene. The main purpose is to create an HSTR video from the fusion of HSLF and
LSHF videos. In this paper, we propose an end-to-end trainable deep network that
performs optical flow estimation and frame reconstruction by combining inputs
from both video feeds. The proposed architecture provides significant
improvement over existing video frame interpolation and RefSR techniques in
terms of objective PSNR and SSIM metrics.
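The abstract's pipeline (motion estimated from the high-frame-rate feed, the high-resolution keyframe warped to the target time, then fused reconstruction) can be illustrated with a minimal sketch. This is a hypothetical illustration under assumed PyTorch placeholders, not the paper's actual HSTR-Net architecture: the flow estimator, layer sizes, and fusion head are all invented for the example.

```python
# Hypothetical sketch of the HSLF/LSHF fusion idea, NOT the actual
# HSTR-Net architecture: estimate motion from the low-resolution,
# high-frame-rate (LSHF) pair, warp the high-resolution (HSLF) keyframe
# to the target time, and reconstruct the missing HSTR frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a (B,C,H,W) frame by a dense (B,2,H,W) flow field."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=frame.device),
        torch.linspace(-1, 1, w, device=frame.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    # Convert pixel-space flow into the normalized grid convention.
    norm = torch.stack((flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)), dim=-1)
    return F.grid_sample(frame, base + norm, align_corners=True)

class HSTRFusionNet(nn.Module):
    """Toy stand-in for a flow-plus-reconstruction fusion network."""
    def __init__(self):
        super().__init__()
        # Flow estimator over the upsampled LSHF frame pair (placeholder).
        self.flow = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
        # Reconstruction head fusing the warped keyframe with the
        # upsampled LSHF target frame (placeholder).
        self.recon = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, hslf_key, lshf_key, lshf_target):
        h, w = hslf_key.shape[-2:]
        def up(x):
            return F.interpolate(x, size=(h, w), mode="bilinear",
                                 align_corners=False)
        # Motion between the two LSHF frames, estimated at HR scale.
        flow = self.flow(torch.cat([up(lshf_key), up(lshf_target)], dim=1))
        warped = warp(hslf_key, flow)  # HR keyframe moved to target time
        return self.recon(torch.cat([warped, up(lshf_target)], dim=1))
```

Evaluation against ground-truth high-resolution frames would then use the objective PSNR and SSIM metrics mentioned in the abstract.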
Related papers
- Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution [151.1255837803585]
We propose a novel approach, pursuing Spatial Adaptation and Temporal Coherence (SATeCo) for video super-resolution.
SATeCo pivots on learning spatial-temporal guidance from low-resolution videos to calibrate both latent-space high-resolution video denoising and pixel-space video reconstruction.
Experiments conducted on the REDS4 and Vid4 datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-03-25T17:59:26Z)
- HSTR-Net: Reference Based Video Super-resolution with Dual Cameras [2.4749083496491675]
This paper proposes a dual camera system for the generation of HSTR video using reference-based super-resolution (RefSR).
One camera captures high spatial resolution low frame rate (HSLF) video while the other captures low spatial resolution high frame rate (LSHF) video simultaneously for the same scene.
A novel deep learning architecture is proposed to fuse HSLF and LSHF video feeds and synthesize HSTR video frames.
arXiv Detail & Related papers (2023-10-18T16:37:01Z)
- Continuous Space-Time Video Super-Resolution Utilizing Long-Range Temporal Information [48.20843501171717]
We propose a continuous ST-VSR (CSTVSR) method that can convert the given video to any frame rate and spatial resolution.
We show that the proposed algorithm has good flexibility and achieves better performance on various datasets.
arXiv Detail & Related papers (2023-02-26T08:02:39Z)
- VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution [75.79379734567604]
We show that Video Implicit Neural Representation (VideoINR) can be decoded to videos of arbitrary spatial resolution and frame rate.
We show that VideoINR achieves competitive performance with state-of-the-art STVSR methods on common up-sampling scales.
arXiv Detail & Related papers (2022-06-09T17:45:49Z)
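For the VideoINR entry above, the implicit-representation idea can be conveyed with a minimal coordinate-MLP sketch: a network maps a continuous space-time coordinate (x, y, t) to an RGB value, so frames can be decoded at any spatial resolution and frame rate. This is a generic INR illustration under assumed PyTorch placeholders, not VideoINR's actual encoder-decoder design.

```python
# Generic implicit-neural-representation sketch (hypothetical; not
# VideoINR's architecture): an MLP maps (x, y, t) to RGB, so a frame
# can be rendered at any resolution and any time instant.
import torch
import torch.nn as nn

class VideoImplicitField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) rows of (x, y, t), each normalized to [-1, 1].
        return self.net(coords)

def decode_frame(field: VideoImplicitField, h: int, w: int, t: float) -> torch.Tensor:
    """Render an h-by-w frame at continuous time t in [-1, 1]."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack((xs, ys, torch.full_like(xs, t)), dim=-1).reshape(-1, 3)
    return field(coords).reshape(h, w, 3)
```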
- STRPM: A Spatiotemporal Residual Predictive Model for High-Resolution Video Prediction [78.129039340528]
We propose a Spatiotemporal Residual Predictive Model (STRPM) for high-resolution video prediction.
Experimental results show that STRPM can generate more satisfactory results compared with various existing methods.
arXiv Detail & Related papers (2022-03-30T06:24:00Z)
- STDAN: Deformable Attention Network for Space-Time Video Super-Resolution [39.18399652834573]
We propose a deformable attention network called STDAN for STVSR.
First, we devise a long-short term feature interpolation (LSTFI) module, which is capable of extracting abundant content from more neighboring input frames.
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts are adaptively captured and aggregated.
arXiv Detail & Related papers (2022-03-14T03:40:35Z)
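For the STDAN entry above, the core of deformable aggregation can be sketched as predicting per-pixel sampling offsets and gathering neighbor-frame features at those offsets. This is a minimal single-offset illustration of the general mechanism, not STDAN's actual STDFA module; PyTorch and all layer sizes are assumptions.

```python
# Minimal deformable-sampling sketch (hypothetical; not STDAN's STDFA
# module): predict one 2D offset per pixel and sample the neighbor
# frame's features at the shifted locations before fusing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAggregation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.offset_conv = nn.Conv2d(2 * channels, 2, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, ref: torch.Tensor, neighbor: torch.Tensor) -> torch.Tensor:
        b, _, h, w = ref.shape
        # Offsets conditioned on reference and neighbor features.
        offset = self.offset_conv(torch.cat([ref, neighbor], dim=1))  # (B,2,H,W)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=ref.device),
            torch.linspace(-1, 1, w, device=ref.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
        # Sample neighbor features at the offset positions.
        sampled = F.grid_sample(neighbor, base + offset.permute(0, 2, 3, 1),
                                align_corners=True)
        return self.fuse(torch.cat([ref, sampled], dim=1))
```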
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- Deep Slow Motion Video Reconstruction with Hybrid Imaging System [12.340049542098148]
Current techniques increase the frame rate of standard videos through frame interpolation by assuming linear object motion, which is not valid in challenging cases.
We propose a two-stage deep learning system consisting of alignment and appearance estimation.
We train our model on synthetically generated hybrid videos and show high-quality results on a variety of test scenes.
arXiv Detail & Related papers (2020-02-27T14:18:12Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
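For the Zooming Slow-Mo entry above, the interface difference between the two-stage baseline and a one-stage model can be sketched as follows. This is a hypothetical contrast only, assuming PyTorch placeholders: the real network interpolates at the feature level with deformable modules, whereas the stand-in below uses naive fusion and bilinear upsampling.

```python
# Hypothetical interface contrast (not Zooming Slow-Mo's architecture):
# a one-stage model maps two LR frames directly to an HR intermediate
# frame, versus composing separate VFI and VSR models.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneStageSTVSR(nn.Module):
    """Jointly interpolate in time and upsample in space (placeholder)."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.fuse = nn.Conv2d(6, 64, 3, padding=1)   # fuse two LR frames
        self.to_rgb = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, f0: torch.Tensor, f1: torch.Tensor) -> torch.Tensor:
        feat = F.relu(self.fuse(torch.cat([f0, f1], dim=1)))
        # One joint step: the temporal midpoint rendered at HR scale.
        feat = F.interpolate(feat, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)
        return self.to_rgb(feat)

def two_stage(vfi, vsr, f0, f1):
    """Two-stage baseline: interpolate at LR, then super-resolve."""
    return vsr(vfi(f0, f1))
```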