Large-capacity and Flexible Video Steganography via Invertible Neural Network
- URL: http://arxiv.org/abs/2304.12300v1
- Date: Mon, 24 Apr 2023 17:51:35 GMT
- Title: Large-capacity and Flexible Video Steganography via Invertible Neural Network
- Authors: Chong Mou, Youmin Xu, Jiechong Song, Chen Zhao, Bernard Ghanem, Jian Zhang
- Abstract summary: We propose a Large-capacity and Flexible Video Steganography Network (LF-VSN).
For large capacity, we present a reversible pipeline that hides and recovers multiple videos through a single invertible neural network (INN).
For flexibility, we propose a key-controllable scheme, enabling different receivers to recover particular secret videos from the same cover video through specific keys.
- Score: 60.34588692333379
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video steganography is the art of unobtrusively concealing secret data in a cover video and then recovering it through a decoding protocol at the receiver end. Although several attempts have been made, most are limited to low-capacity, fixed steganography. To rectify these weaknesses, we propose a Large-capacity and Flexible Video Steganography Network (LF-VSN) in this paper. For large capacity, we present a reversible pipeline that hides and recovers multiple videos through a single invertible neural network (INN). Our method can hide/recover 7 secret videos in/from 1 cover video with promising performance. For flexibility, we propose a key-controllable scheme, enabling different receivers to recover particular secret videos from the same cover video through specific keys. We further improve flexibility with a scalable strategy for multi-video hiding, which hides a variable number of secret videos in a cover video with a single model and a single training session. Extensive experiments demonstrate that, alongside a significant improvement in video steganography performance, our proposed LF-VSN offers high security, large hiding capacity, and flexibility. The source code is available at https://github.com/MC-E/LF-VSN.
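The exact LF-VSN architecture is in the linked repository; the sketch below only illustrates why a single invertible network can both hide and recover: an additive coupling block is exactly invertible, so running it backward undoes the hiding. This is a minimal PyTorch sketch under that assumption; the names (CouplingBlock, phi, eta), the single-frame tensors, and the one-secret case are illustrative, not the paper's code.

```python
# Minimal additive coupling block, the kind of exactly-invertible step that
# INN-based hiding pipelines stack. Illustrative only: LF-VSN uses deeper
# sub-networks, video tensors, and up to 7 secret branches.
import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Placeholder sub-networks; any deterministic nets work here.
        self.phi = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.eta = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, cover, secret):
        # Hiding direction: mix the secret into the cover branch.
        stego = cover + self.phi(secret)
        aux = secret + self.eta(stego)
        return stego, aux

    def inverse(self, stego, aux):
        # Recovery direction: subtract the same updates in reverse order.
        secret = aux - self.eta(stego)
        cover = stego - self.phi(secret)
        return cover, secret

block = CouplingBlock(channels=3).eval()
cover, secret = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
with torch.no_grad():
    stego, aux = block(cover, secret)
    rec_cover, rec_secret = block.inverse(stego, aux)
assert torch.allclose(rec_secret, secret, atol=1e-5)
```

In a real steganography setting the auxiliary branch is not transmitted, so recovering the secret from the stego video alone is what training must achieve; the sketch only demonstrates the exact invertibility that lets one model serve as both hider and recoverer.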
Related papers
- PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation [5.0923114224599555]
We present PV-VTT (Privacy Violation Video To Text), a unique multimodal dataset aimed at identifying privacy violations.
PV-VTT provides detailed annotations for both the video and text in each scenario.
This privacy-focused approach allows researchers to use the dataset while protecting participant confidentiality.
arXiv Detail & Related papers (2024-10-30T01:02:20Z)
- From Covert Hiding to Visual Editing: Robust Generative Video Steganography [34.99965076701196]
We propose an innovative approach that embeds the secret message within semantic features during the video editing process.
In this paper, we introduce an end-to-end robust generative video steganography network (RoGVS), which achieves visual editing by modifying the semantic features of videos to embed the secret message.
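RoGVS's editing pipeline is not detailed in this summary; the following is only a generic, hedged sketch of the underlying idea of writing a bit string into a semantic feature map and reading it back. FeatureEmbedder, MessageExtractor, and all shapes are assumptions for illustration.

```python
# Generic sketch (not RoGVS itself): fuse message bits into a semantic
# feature map with a 1x1 conv, and recover per-bit probabilities with a
# pooled linear head.
import torch
import torch.nn as nn

class FeatureEmbedder(nn.Module):
    def __init__(self, feat_ch: int, msg_bits: int):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + msg_bits, feat_ch, kernel_size=1)

    def forward(self, feat, msg):
        # feat: (B, C, H, W); msg: (B, msg_bits) in {0, 1}
        b, _, h, w = feat.shape
        m = msg.view(b, -1, 1, 1).expand(b, msg.shape[1], h, w)
        return self.fuse(torch.cat([feat, m], dim=1))

class MessageExtractor(nn.Module):
    def __init__(self, feat_ch: int, msg_bits: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, msg_bits))

    def forward(self, feat):
        return torch.sigmoid(self.head(feat))  # per-bit probabilities
```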
arXiv Detail & Related papers (2024-01-01T03:40:07Z)
- Video Infringement Detection via Feature Disentanglement and Mutual Information Maximization [51.206398602941405]
We propose to disentangle an original high-dimensional feature into multiple sub-features.
On top of the disentangled sub-features, we learn an auxiliary feature to enhance the sub-features.
Our method achieves 90.1% TOP-100 mAP on the large-scale SVD dataset and also sets the new state-of-the-art on the VCSL benchmark dataset.
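The paper's objectives (including its mutual-information term) are not given in this summary; below is a minimal sketch of just the disentangle-then-enhance structure, with every module name and dimension assumed for illustration.

```python
# Hedged sketch: project one high-dimensional feature into several
# sub-features plus an auxiliary feature that enhances each of them.
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    def __init__(self, dim: int, num_subs: int, sub_dim: int):
        super().__init__()
        self.subs = nn.ModuleList(
            [nn.Linear(dim, sub_dim) for _ in range(num_subs)])
        self.aux = nn.Linear(dim, sub_dim)

    def forward(self, feat):
        # feat: (B, dim) -> list of num_subs enhanced sub-features
        aux = self.aux(feat)
        return [proj(feat) + aux for proj in self.subs]
```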
arXiv Detail & Related papers (2023-09-13T10:53:12Z)
- Towards Scalable Neural Representation for Diverse Videos [68.73612099741956]
Implicit neural representations (INR) have gained increasing attention in representing 3D scenes and images.
Existing INR-based methods are limited to encoding a handful of short videos with redundant visual content.
This paper focuses on developing neural representations for encoding long and/or a large number of videos with diverse visual content.
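As background for what an INR of a video is, here is a minimal coordinate-to-color sketch: the network weights become the representation of the video. The tiny MLP and the absence of positional encoding are deliberate simplifications; the paper's scalable design is far more elaborate.

```python
# Minimal video INR: an MLP maps a normalized (t, x, y) coordinate to RGB.
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())  # RGB in [0, 1]

    def forward(self, txy):
        # txy: (N, 3) coordinates in [0, 1]^3
        return self.net(txy)

# Fitting step: regress pixel colors at sampled coordinates.
model = VideoINR()
coords = torch.rand(1024, 3)   # random (t, x, y) samples
target = torch.rand(1024, 3)   # ground-truth colors at those coordinates
loss = nn.functional.mse_loss(model(coords), target)
```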
arXiv Detail & Related papers (2023-03-24T16:32:19Z)
- Contrastive Masked Autoencoders for Self-Supervised Video Hashing [54.636976693527636]
Self-Supervised Video Hashing (SSVH) models learn to generate short binary representations for videos without ground-truth supervision.
We propose a simple yet effective one-stage SSVH method called ConMH, which incorporates video semantic information and video similarity relationship understanding.
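ConMH's masked-autoencoder backbone is not reproduced here; the sketch below shows only the hashing head that most SSVH methods share, where a tanh relaxation keeps training differentiable and sign() yields the final binary code. The encoder feature and dimensions are assumptions.

```python
# Hedged sketch of a video hashing head: real-valued embedding -> binary code.
import torch
import torch.nn as nn

class HashHead(nn.Module):
    def __init__(self, feat_dim: int, code_bits: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, code_bits)

    def forward(self, video_feat):
        logits = self.proj(video_feat)
        if self.training:
            return torch.tanh(logits)  # differentiable relaxation
        return torch.sign(logits)      # binary code in {-1, +1} at inference
```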
arXiv Detail & Related papers (2022-11-21T06:48:14Z)
- Text-Driven Video Acceleration: A Weakly-Supervised Reinforcement Learning Method [6.172652648945223]
This paper presents a novel weakly-supervised methodology to accelerate instructional videos using text.
A novel joint reward function guides our agent to select which frames to remove and reduce the input video to a target length.
We also propose the Extended Visually-guided Document Attention Network (VDAN+), which can generate a highly discriminative embedding space.
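Neither the paper's agent nor VDAN+ is reproduced in this summary; below is only a generic REINFORCE sketch of reward-guided frame removal. The 512-d frame features, the linear policy, and reward_fn are all assumptions for illustration.

```python
# Generic sketch: sample per-frame keep/drop decisions and use REINFORCE
# to push the policy toward high-reward (short, on-topic) frame subsets.
import torch
import torch.nn as nn

policy = nn.Linear(512, 1)  # per-frame keep logit (placeholder features)

def select_frames(frame_feats, reward_fn):
    # frame_feats: (T, 512); reward_fn maps a keep-mask to a scalar reward,
    # e.g. a text-alignment score minus a penalty for exceeding the target
    # length.
    logits = policy(frame_feats).squeeze(-1)
    dist = torch.distributions.Bernoulli(logits=logits)
    keep = dist.sample()                          # 1.0 = keep this frame
    reward = reward_fn(keep)
    loss = -(dist.log_prob(keep).sum() * reward)  # REINFORCE estimator
    return keep.bool(), loss
```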
arXiv Detail & Related papers (2022-03-29T17:43:01Z)
- Deep Video Prior for Video Consistency and Propagation [58.250209011891904]
We present a novel and general approach for blind video temporal consistency.
Our method is only trained on a pair of original and processed videos directly instead of a large dataset.
We show that temporal consistency can be achieved by training a convolutional neural network on a video with Deep Video Prior.
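A minimal sketch of that recipe follows: fit a small CNN to a single (original, processed) video pair and stop early, relying on the network's inductive bias for temporal consistency. The architecture and hyperparameters here are placeholders, not the paper's.

```python
# Deep-Video-Prior-style fitting on one video pair (sketch).
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(original_frames, processed_frames):
    # Both tensors: (T, 3, H, W), frames of the same single video.
    out = net(original_frames)
    loss = nn.functional.l1_loss(out, processed_frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```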
arXiv Detail & Related papers (2022-01-27T16:38:52Z)
- Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling [98.41300980759577]
A canonical approach to video-and-language learning dictates that a neural model learn from offline-extracted dense video features.
We propose a generic framework ClipBERT that enables affordable end-to-end learning for video-and-language tasks.
Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms existing methods.
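A hedged sketch of the sparse-sampling idea: draw a few short clips per training step and aggregate clip-level predictions, instead of consuming dense offline features. The clip count, clip length, and the model(clip, text) signature are assumptions.

```python
# Sparse sampling sketch: a handful of random clips stand in for the video.
import torch

def sample_clips(video, num_clips=4, clip_len=8):
    # video: (T, C, H, W); assumes T >= clip_len.
    T = video.shape[0]
    starts = torch.randint(0, max(T - clip_len, 1), (num_clips,))
    return [video[s:s + clip_len] for s in starts]

def predict(model, video, text):
    clips = sample_clips(video)
    scores = torch.stack([model(clip, text) for clip in clips])
    return scores.mean(dim=0)  # average the clip-level predictions
```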
arXiv Detail & Related papers (2021-02-11T18:50:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.