GPU-accelerated SIFT-aided source identification of stabilized videos
- URL: http://arxiv.org/abs/2207.14507v1
- Date: Fri, 29 Jul 2022 07:01:31 GMT
- Title: GPU-accelerated SIFT-aided source identification of stabilized videos
- Authors: Andrea Montibeller, Cecilia Pasquini, Giulia Boato, Stefano Dell'Anna,
Fernando Pérez-González
- Abstract summary: We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilised frame inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
- Score: 63.084540168532065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video stabilization is an in-camera processing operation commonly
applied by modern acquisition devices. While significantly improving the visual
quality of the resulting videos, it has been shown that such an operation
typically hinders the forensic analysis of video signals. In fact, the correct
identification of the acquisition source, usually based on Photo Response
Non-Uniformity (PRNU), is subject to the estimation of the transformation
applied to each frame in the stabilization phase. A number of techniques have
been proposed for dealing with this problem, which, however, typically suffer
from a high computational burden due to the grid search in the space of
inversion parameters. Our work attempts to alleviate these shortcomings by
exploiting the parallelization capabilities of Graphics Processing Units
(GPUs), typically used for deep learning applications, in the framework of
stabilised frame inversion. Moreover, we propose to exploit SIFT features to
estimate the camera momentum and to identify less stabilized temporal segments,
thus enabling a more accurate identification analysis, and to efficiently
initialize the frame-wise parameter
search of consecutive frames. Experiments on a consolidated benchmark dataset
confirm the effectiveness of the proposed approach in reducing the required
computational time and improving the source identification accuracy. The code
is available at https://github.com/AMontiB/GPU-PRNU-SIFT.
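
Since the abstract hinges on SIFT-based estimation of the geometric transform between consecutive frames (the camera momentum) to flag less stabilized segments and to warm-start the frame-wise parameter search, a minimal sketch of that idea follows. This is not the authors' released code (see the repository linked above); the function name `interframe_similarity`, the ratio-test threshold, and the choice of a 4-degree-of-freedom similarity model are illustrative assumptions.

```python
# Hedged sketch, not the paper's implementation: SIFT-based estimation of the
# similarity transform between consecutive frames, usable as a proxy for the
# camera momentum and as an initialization for the inversion-parameter search.
# Assumes opencv-python (>= 4.4, where SIFT is in the main module) and numpy.
import cv2
import numpy as np

def interframe_similarity(prev_gray, curr_gray, ratio=0.75):
    """Return (scale, angle_deg, (tx, ty)) mapping prev_gray onto curr_gray,
    or None if not enough reliable SIFT matches are found."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    # Brute-force matching with Lowe's ratio test to keep distinctive matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # 4-DOF similarity (uniform scale, rotation, translation) with RANSAC
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    a, b = M[0, 0], M[1, 0]
    scale = float(np.hypot(a, b))
    angle_deg = float(np.degrees(np.arctan2(b, a)))
    return scale, angle_deg, (float(M[0, 2]), float(M[1, 2]))
```

Under this reading, small estimated rotations and shifts between consecutive frames indicate weakly stabilized segments (better suited to PRNU matching), and the recovered scale/rotation/translation can seed the parameter search for the next frame instead of restarting the grid search from scratch.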
Related papers
- STAC: Leveraging Spatio-Temporal Data Associations For Efficient
Cross-Camera Streaming and Analytics [0.0]
We propose an efficient cross-camera surveillance system that provides real-time analytics and inference under constrained network environments.
We integrate STAC with frame filtering and state-of-the-art compression for streaming characteristics.
We evaluate the performance of STAC using this dataset to measure the accuracy metrics and inference rate for completeness.
arXiv Detail & Related papers (2024-01-27T04:02:52Z) - RIGID: Recurrent GAN Inversion and Editing of Real Face Videos [73.97520691413006]
GAN inversion is indispensable for applying the powerful editability of GAN to real images.
Existing methods invert video frames individually, often leading to undesired inconsistent results over time.
We propose a unified recurrent framework, named Recurrent vIdeo GAN Inversion and eDiting (RIGID).
Our framework learns the inherent coherence between input frames in an end-to-end manner.
arXiv Detail & Related papers (2023-08-11T12:17:24Z) - Fast Full-frame Video Stabilization with Iterative Optimization [21.962533235492625]
We propose an iterative optimization-based learning approach using synthetic datasets for video stabilization.
We develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field.
We take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views.
arXiv Detail & Related papers (2023-07-24T13:24:19Z) - Towards Interpretable Video Super-Resolution via Alternating
Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate, high-resolution sharp video from a low-framerate, blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z) - rSVDdpd: A Robust Scalable Video Surveillance Background Modelling
Algorithm [13.535770763481905]
We present a new video surveillance background modelling algorithm based on a new robust singular value decomposition technique, rSVDdpd.
We also demonstrate the superiority of our proposed algorithm on a benchmark dataset and a new real-life video surveillance dataset in the presence of camera tampering.
arXiv Detail & Related papers (2021-09-22T12:20:44Z) - Neural Re-rendering for Full-frame Video Stabilization [144.9918806873405]
We present an algorithm for full-frame video stabilization by first estimating dense warp fields.
Full-frame stabilized frames can then be synthesized by fusing warped contents from neighboring frames.
arXiv Detail & Related papers (2021-02-11T18:59:45Z) - Intrinsic Temporal Regularization for High-resolution Human Video
Synthesis [59.54483950973432]
Temporal consistency is crucial for extending image processing pipelines to the video domain.
We propose an effective intrinsic temporal regularization scheme, where an intrinsic confidence map is estimated via the frame generator to regulate motion estimation.
We apply our intrinsic temporal regularization to a single-image generator, leading to a powerful "INTERnet" capable of generating $512\times512$ resolution human action videos.
arXiv Detail & Related papers (2020-12-11T05:29:45Z) - A Backbone Replaceable Fine-tuning Framework for Stable Face Alignment [21.696696531924374]
We propose a Jitter loss function that leverages temporal information to suppress inaccurate as well as jittered landmarks.
The proposed framework achieves at least 40% improvement on stability evaluation metrics.
It can swiftly convert a landmark detector for facial images to a better-performing one for videos without retraining the entire model.
arXiv Detail & Related papers (2020-10-19T13:40:39Z) - A Modified Fourier-Mellin Approach for Source Device Identification on
Stabilized Videos [72.40789387139063]
Multimedia forensic tools usually exploit characteristic noise traces left by the camera sensor on the acquired frames.
This analysis requires that the noise pattern characterizing the camera and the noise pattern extracted from video frames under analysis are geometrically aligned.
We propose to overcome this limitation by searching scaling and rotation parameters in the frequency domain.
arXiv Detail & Related papers (2020-05-20T12:06:40Z)
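
For the closest related work above (the Modified Fourier-Mellin approach to source device identification on stabilized videos), the key idea of searching scaling and rotation parameters in the frequency domain can be sketched as follows. This is a generic, hedged illustration of the log-polar/phase-correlation principle rather than that paper's code; the function name, output size, and interpolation flags are assumptions, and the sign conventions of the recovered shifts may need flipping depending on which image is taken as reference.

```python
# Hedged sketch of the Fourier-Mellin principle: estimate the scale and
# rotation relating two single-channel images by phase-correlating the
# log-polar mappings of their (translation-invariant) magnitude spectra.
# Assumes opencv-python >= 3.4 (cv2.warpPolar) and numpy.
import cv2
import numpy as np

def estimate_scale_rotation(ref, test):
    """Estimate (scale, rotation_deg) relating `test` to `ref`.
    Both inputs are float32 arrays of the same shape."""
    # Fourier magnitude spectra are invariant to spatial translation
    mag_ref = np.abs(np.fft.fftshift(np.fft.fft2(ref))).astype(np.float32)
    mag_test = np.abs(np.fft.fftshift(np.fft.fft2(test))).astype(np.float32)

    h, w = ref.shape
    center = (w / 2.0, h / 2.0)
    max_radius = min(h, w) / 2.0
    flags = cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG

    # In log-polar coordinates a rotation becomes a vertical shift and a
    # scaling becomes a horizontal shift, both recoverable by phase correlation
    lp_ref = cv2.warpPolar(mag_ref, (w, h), center, max_radius, flags)
    lp_test = cv2.warpPolar(mag_test, (w, h), center, max_radius, flags)
    (dx, dy), _ = cv2.phaseCorrelate(lp_ref, lp_test)

    rotation_deg = 360.0 * dy / h                        # angle axis covers 360 deg over h rows
    scale = float(np.exp(dx * np.log(max_radius) / w))   # log-radius axis covers log(max_radius) over w cols
    return scale, rotation_deg
```

In the PRNU setting of the paper above, `ref` and `test` would play the role of the camera fingerprint and the noise residual of a stabilized frame, and the recovered scale/rotation pair would be used to re-align them before correlation; the cited method is more elaborate than this sketch.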