Restoration of Analog Videos Using Swin-UNet
- URL: http://arxiv.org/abs/2311.04261v1
- Date: Tue, 7 Nov 2023 16:00:31 GMT
- Title: Restoration of Analog Videos Using Swin-UNet
- Authors: Lorenzo Agnolucci, Leonardo Galteri, Marco Bertini, Alberto Del Bimbo
- Abstract summary: We present a system to restore analog videos of historical archives.
The proposed system uses a multi-frame approach and is able to deal with severe tape mistracking.
- Score: 28.773037051085318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we present a system to restore analog videos of historical
archives. These videos often contain severe visual degradation due to the
deterioration of their tape supports, which requires costly and slow manual
intervention to recover the original content. The proposed system uses a
multi-frame approach and is able to deal with severe tape mistracking, which
results in completely scrambled frames. Tests on real-world videos from a major
historical video archive show the effectiveness of our demo system. The code
and the pre-trained model are publicly available at
https://github.com/miccunifi/analog-video-restoration.
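As a rough illustration of the multi-frame approach mentioned in the abstract, a restoration network typically consumes a fixed-size stack of neighboring frames around each target frame. The sketch below is a hypothetical helper, not the authors' code; the window radius and the edge-clamping strategy are assumptions made for illustration:

```python
from typing import List, Sequence


def frame_windows(frames: Sequence, radius: int = 2) -> List[list]:
    """Group each frame with its `radius` neighbors on both sides.

    Edge frames are handled by clamping indices, so every window has the
    same length (2 * radius + 1), as a multi-frame restoration network
    typically expects a fixed-size input stack.
    """
    n = len(frames)
    windows = []
    for i in range(n):
        window = [frames[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        windows.append(window)
    return windows


# Each window would be fed to the restoration model; the center element
# is the frame being restored.
windows = frame_windows(["f0", "f1", "f2", "f3"], radius=1)
```
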
Related papers
- Learning Truncated Causal History Model for Video Restoration [14.381907888022615]
TURTLE learns the truncated causal history model for efficient and high-performing video restoration.
We report new state-of-the-art results on a multitude of video restoration benchmark tasks.
arXiv Detail & Related papers (2024-10-04T21:31:02Z)
- Reference-based Restoration of Digitized Analog Videotapes [28.773037051085318]
We present a reference-based approach for the resToration of digitized Analog videotaPEs (TAPE).
We leverage CLIP for zero-shot artifact detection to identify the cleanest frames of each video through textual prompts describing different artifacts.
To address the absence of ground truth in real-world videos, we create a synthetic dataset of videos exhibiting artifacts that closely resemble those commonly found in analog videotapes.
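The zero-shot selection step can be sketched with plain cosine similarity: given CLIP embeddings of each frame and of textual prompts describing artifacts, the frames least similar to any artifact prompt are taken as the cleanest references. This is a minimal sketch assuming the embeddings are precomputed elsewhere; the function names and the scoring rule are illustrative, not the TAPE implementation:

```python
import math
from typing import List, Sequence


def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two (non-zero) embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def cleanest_frames(frame_embs: List[Sequence[float]],
                    artifact_prompt_embs: List[Sequence[float]],
                    k: int = 2) -> List[int]:
    """Return indices of the k frames least similar to any artifact prompt.

    Each frame is scored by its worst-case (maximum) similarity to the
    artifact descriptions; the lowest-scoring frames are kept as clean
    reference frames.
    """
    scored = []
    for i, f in enumerate(frame_embs):
        worst = max(cosine(f, p) for p in artifact_prompt_embs)
        scored.append((worst, i))
    scored.sort()  # lowest artifact similarity first
    return [i for _, i in scored[:k]]
```
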
arXiv Detail & Related papers (2023-10-20T17:33:57Z)
- Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval [67.52910255064762]
We first design a simple dual-stream structure, including a temporal layer and a hash layer.
With the help of semantic similarity knowledge obtained from self-supervision, the hash layer learns to capture information for semantic retrieval.
In this way, the model naturally preserves the disentangled semantics into binary codes.
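The hash-layer idea can be pictured independently of the paper's architecture: real-valued features are binarized into codes, and retrieval reduces to Hamming distance. A toy sketch follows; the sign thresholding and the linear scan are illustrative assumptions, whereas the actual method learns the codes through self-supervision:

```python
from typing import List, Sequence


def to_binary_code(features: Sequence[float]) -> List[int]:
    """Binarize real-valued hash-layer outputs with a sign threshold."""
    return [1 if x >= 0 else 0 for x in features]


def hamming(a: Sequence[int], b: Sequence[int]) -> int:
    """Number of positions where two binary codes differ."""
    return sum(x != y for x, y in zip(a, b))


def retrieve(query: Sequence[float],
             database: List[Sequence[float]]) -> int:
    """Return the index of the database entry with the closest binary code."""
    q = to_binary_code(query)
    dists = [hamming(q, to_binary_code(d)) for d in database]
    return min(range(len(dists)), key=dists.__getitem__)
```
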
arXiv Detail & Related papers (2023-10-12T03:21:12Z)
- Video Event Restoration Based on Keyframes for Video Anomaly Detection [9.18057851239942]
Existing deep neural network-based video anomaly detection (VAD) methods mostly follow the route of frame reconstruction or frame prediction.
We introduce a brand-new VAD paradigm to break through these limitations.
We propose a novel U-shaped Swin Transformer Network with Dual Skip Connections (USTN-DSC) for video event restoration.
arXiv Detail & Related papers (2023-04-11T10:13:19Z)
- Speeding Up Action Recognition Using Dynamic Accumulation of Residuals in Compressed Domain [2.062593640149623]
Temporal redundancy and the sheer size of raw videos are two of the most common problems for video processing algorithms.
This paper presents an approach that uses residual data, available directly in compressed videos, which can be obtained by a lightweight partial decoding procedure.
Applying neural networks exclusively to accumulated residuals in the compressed domain accelerates processing, while the classification results remain highly competitive with raw-video approaches.
arXiv Detail & Related papers (2022-09-29T13:08:49Z)
- Restoration of User Videos Shared on Social Media [27.16457737969977]
User videos shared on social media platforms usually suffer from degradations caused by unknown proprietary processing procedures.
This paper presents a new general video restoration framework for the restoration of user videos shared on social media platforms.
In contrast to most deep learning-based video restoration methods that perform end-to-end mapping, our new method, Video restOration through adapTive dEgradation Sensing (VOTES), introduces the concept of a degradation feature map (DFM) to explicitly guide the video restoration process.
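The degradation feature map concept can be pictured as a per-pixel weight that controls how strongly a correction is applied: heavily degraded pixels receive a stronger correction, clean pixels are left mostly unchanged. The blending rule below is a deliberately simplified assumption for illustration, not how VOTES actually uses the DFM internally:

```python
from typing import List


def apply_guided_correction(frame: List[List[float]],
                            correction: List[List[float]],
                            dfm: List[List[float]]) -> List[List[float]]:
    """Blend a restoration correction into a frame, weighted per pixel
    by a degradation feature map (DFM) with values in [0, 1].

    Weight 0 keeps the original pixel; weight 1 fully adopts the
    corrected value; intermediate weights interpolate linearly.
    """
    return [
        [f + w * (c - f) for f, c, w in zip(f_row, c_row, w_row)]
        for f_row, c_row, w_row in zip(frame, correction, dfm)
    ]
```
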
arXiv Detail & Related papers (2022-08-18T02:28:43Z)
- Learning Trajectory-Aware Transformer for Video Super-Resolution [50.49396123016185]
Video super-resolution aims to restore a sequence of high-resolution (HR) frames from their low-resolution (LR) counterparts.
Existing approaches usually align and aggregate video frames from limited adjacent frames.
We propose a novel Trajectory-aware Transformer for Video Super-Resolution (TTVSR).
arXiv Detail & Related papers (2022-04-08T03:37:39Z)
- Video Demoireing with Relation-Based Temporal Consistency [68.20281109859998]
Moiré patterns, appearing as color distortions, severely degrade image and video quality when filming a screen with digital cameras.
We study how to remove such undesirable moire patterns in videos, namely video demoireing.
arXiv Detail & Related papers (2022-04-06T17:45:38Z)
- Transcoded Video Restoration by Temporal Spatial Auxiliary Network [64.63157339057912]
We propose a new method, temporal spatial auxiliary network (TSAN), for transcoded video restoration.
The experimental results demonstrate that the performance of the proposed method is superior to that of the previous techniques.
arXiv Detail & Related papers (2021-12-15T08:10:23Z)
- VPN: Video Provenance Network for Robust Content Attribution [72.12494245048504]
We present VPN - a content attribution method for recovering provenance information from videos shared online.
We learn a robust search embedding for matching such video, using full-length or truncated video queries.
Once matched against a trusted database of video clips, associated information on the provenance of the clip is presented to the user.
arXiv Detail & Related papers (2021-09-21T09:07:05Z)
- Efficient video integrity analysis through container characterization [77.45740041478743]
We introduce a container-based method to identify the software used to perform a video manipulation.
The proposed method is both efficient and effective and can also provide a simple explanation for its decisions.
It achieves an accuracy of 97.6% in distinguishing pristine from tampered videos and classifying the editing software.
arXiv Detail & Related papers (2021-01-26T14:13:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.