A Detection Method of Temporally Operated Videos Using Robust Hashing
- URL: http://arxiv.org/abs/2208.05198v2
- Date: Thu, 11 Aug 2022 14:42:57 GMT
- Title: A Detection Method of Temporally Operated Videos Using Robust Hashing
- Authors: Shoko Niwa, Miki Tanaka, Hitoshi Kiya
- Abstract summary: Most conventional methods for detecting tampered videos/images are not robust enough against such operations.
We propose a novel method with a robust hashing algorithm for detecting temporally operated videos even when applying resizing and compression to the videos.
- Score: 12.27887776401573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SNS providers are known to recompress and resize uploaded
videos/images, but most conventional methods for detecting tampered
videos/images are not robust against such operations. In addition, videos can
be temporally operated, for example by inserting new frames or permuting
existing ones, and such operations are difficult to detect with conventional
methods. Accordingly, in this paper, we propose a novel method with a robust
hashing algorithm for detecting temporally operated videos even when resizing
and compression are applied to them.
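The abstract does not describe the hashing algorithm itself, so the following is only a minimal sketch of the general detection idea: compute a perceptual hash for every frame of a trusted reference video and of a query video, then compare the two hash sequences. A simple 64-bit average hash and an arbitrary Hamming-distance threshold stand in for the paper's robust hashing algorithm; query frames with no close match suggest inserted frames, and matched frames that appear out of order suggest a frame permutation.

```python
import cv2
import numpy as np

MATCH_THRESHOLD = 10  # max Hamming distance to accept a frame match (assumed, not from the paper)

def frame_hash(frame: np.ndarray) -> np.ndarray:
    """Simple 64-bit average hash; tolerates mild resizing/recompression."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (8, 8), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()  # 64 boolean bits

def video_hashes(path: str) -> list:
    """Decode a video and hash every frame."""
    cap = cv2.VideoCapture(path)
    hashes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hashes.append(frame_hash(frame))
    cap.release()
    return hashes

def detect_temporal_operations(original_path: str, query_path: str) -> dict:
    """Match each query frame to its nearest reference frame by Hamming distance.

    Query frames with no close match are reported as likely insertions; matched
    frames whose reference indices are not non-decreasing indicate a permutation.
    """
    ref = np.array(video_hashes(original_path), dtype=bool)  # shape (N, 64)
    matched_indices, inserted = [], []
    for i, h in enumerate(video_hashes(query_path)):
        dists = np.count_nonzero(ref != h, axis=1)  # Hamming distance to every reference frame
        j = int(dists.argmin())
        if dists[j] <= MATCH_THRESHOLD:
            matched_indices.append(j)
        else:
            inserted.append(i)
    permuted = any(b < a for a, b in zip(matched_indices, matched_indices[1:]))
    return {"inserted_frames": inserted, "frame_permutation": permuted}
```

Because the hash is computed on a heavily downscaled grayscale frame, mild recompression and resizing change few of its bits, which is the kind of robustness the paper targets; the actual robust hashing algorithm and decision rule used by the authors may differ.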
Related papers
- Digital Video Manipulation Detection Technique Based on Compression Algorithms [8.345872075633498]
This paper proposes a forensic technique based on analysing the compression algorithms used by H.264 coding.
A Support Vector Machine is used to build a model that accurately detects whether a video has been recompressed.
arXiv Detail & Related papers (2024-02-03T16:05:27Z)
- Accelerated Event-Based Feature Detection and Compression for Surveillance Video Systems [1.5390526524075634]
We propose a novel system which conveys temporal redundancy within a sparse decompressed representation.
We leverage a video representation framework called ADDER to transcode framed videos to sparse, asynchronous intensity samples.
Our work paves the way for upcoming neuromorphic sensors and is amenable to future applications with spiking neural networks.
arXiv Detail & Related papers (2023-12-13T15:30:29Z)
- Blurry Video Compression: A Trade-off between Visual Enhancement and Data Compression [65.8148169700705]
Existing video compression (VC) methods primarily aim to reduce the spatial and temporal redundancies between consecutive frames in a video.
Previous works have achieved remarkable results on videos acquired under specific settings such as instant (known) exposure time and shutter speed.
In this work, we tackle the VC problem in a general scenario where a given video can be blurry due to predefined camera settings or dynamics in the scene.
arXiv Detail & Related papers (2023-11-08T02:17:54Z)
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and present sharp frames using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Speeding Up Action Recognition Using Dynamic Accumulation of Residuals in Compressed Domain [2.062593640149623]
Temporal redundancy and the sheer size of raw videos are the two most common problematic issues related to video processing algorithms.
This paper presents an approach for using residual data, available directly in compressed videos, which can be obtained by a lightweight partial decoding procedure.
Applying neural networks exclusively for accumulated residuals in the compressed domain accelerates performance, while the classification results are highly competitive with raw video approaches.
arXiv Detail & Related papers (2022-09-29T13:08:49Z)
- GPU-accelerated SIFT-aided source identification of stabilized videos [63.084540168532065]
We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilised frames inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less-stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
arXiv Detail & Related papers (2022-07-29T07:01:31Z)
- Temporal Early Exits for Efficient Video Object Detection [1.1470070927586016]
We propose temporal early exits to reduce the computational complexity of per-frame video object detection.
Our method significantly reduces the computational complexity and execution time of per-frame video object detection by up to $34\times$ compared to existing methods.
arXiv Detail & Related papers (2021-06-21T15:49:46Z)
- Semi-Supervised Action Recognition with Temporal Contrastive Learning [50.08957096801457]
We learn a two-pathway temporal contrastive model using unlabeled videos at two different speeds.
We considerably outperform video extensions of sophisticated state-of-the-art semi-supervised image recognition methods.
arXiv Detail & Related papers (2021-02-04T17:28:35Z)
- Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z)
- Robust and efficient post-processing for video object detection [9.669942356088377]
This work introduces a novel post-processing pipeline that overcomes some of the limitations of previous post-processing methods.
Our method improves the results of state-of-the-art video-specific detectors, especially for fast-moving objects.
When applied to efficient still-image detectors such as YOLO, it provides results comparable to those of much more computationally intensive detectors.
arXiv Detail & Related papers (2020-09-23T10:47:24Z)
- A Modified Fourier-Mellin Approach for Source Device Identification on Stabilized Videos [72.40789387139063]
Multimedia forensic tools usually exploit characteristic noise traces left by the camera sensor on the acquired frames.
This analysis requires that the noise pattern characterizing the camera and the noise pattern extracted from the video frames under analysis are geometrically aligned.
We propose to overcome this limitation by searching for the scaling and rotation parameters in the frequency domain (a generic sketch of such a search is given after this list).
arXiv Detail & Related papers (2020-05-20T12:06:40Z)
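As a rough illustration of the frequency-domain search mentioned in the last entry above, the sketch below uses the generic Fourier-Mellin idea rather than the authors' exact modification: the magnitude spectrum is translation invariant, and scaling/rotation of the pattern become shifts along the log-radius and angle axes of its log-polar transform, which phase correlation can recover. Function names and parameter choices here are illustrative assumptions only.

```python
import cv2
import numpy as np

def estimate_scale_rotation(ref_pattern: np.ndarray, test_pattern: np.ndarray):
    """Estimate a scale factor and rotation angle aligning two noise patterns
    via a generic Fourier-Mellin style search (illustrative sketch)."""

    def log_polar_magnitude(img: np.ndarray) -> np.ndarray:
        # Translation-invariant magnitude spectrum, then log-polar resampling.
        spectrum = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
        mag = np.log1p(np.abs(spectrum)).astype(np.float32)
        h, w = mag.shape
        center = (w / 2.0, h / 2.0)
        max_radius = min(h, w) / 2.0
        return cv2.warpPolar(mag, (w, h), center, max_radius,
                             cv2.WARP_POLAR_LOG + cv2.INTER_LINEAR)

    lp_ref = log_polar_magnitude(ref_pattern)
    lp_test = log_polar_magnitude(test_pattern)

    # Phase correlation recovers the translation between the two log-polar maps.
    (shift_rho, shift_theta), _response = cv2.phaseCorrelate(lp_ref, lp_test)

    h, w = lp_ref.shape
    max_radius = min(h, w) / 2.0
    # x-axis of warpPolar holds log(radius): a shift there corresponds to a scale factor.
    scale = float(np.exp(shift_rho * np.log(max_radius) / w))
    # y-axis holds the angle, spanning 360 degrees over the map height.
    rotation_deg = float(shift_theta * 360.0 / h)
    return scale, rotation_deg
```

In a real forensic pipeline the candidate scale/rotation would then be applied to the camera's reference noise pattern and verified with a correlation test; the sign and inverse-scale conventions also depend on which pattern is taken as the reference and should be checked empirically.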