Reversible Video Steganography Using Quick Response Codes and Modified ElGamal Cryptosystem
- URL: http://arxiv.org/abs/2508.07289v1
- Date: Sun, 10 Aug 2025 10:56:10 GMT
- Authors: Ramadhan J. Mstafa
- Abstract summary: A novel solution to reversible video steganography based on DWT and QR codes is proposed. The visual imperceptibility, robustness, and embedding capacity of such approaches are all challenges that must be addressed. Beyond visual imperceptibility, the suggested method exceeds current methods with an average PSNR of 52.143 dB and an embedding capacity of 1 bpp.
- Score: 1.90365714903665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid transmission of multimedia information has been enabled mainly by recent advancements in Internet speed and information technology. However, these same advancements have also led to breaches of privacy and data security. When it comes to protecting private information in today's Internet era, digital steganography is vital. Many academics are interested in digital video because it has great capacity for concealing important data. A vast number of video steganography solutions have been developed lately to guard against the theft of confidential data. The visual imperceptibility, robustness, and embedding capacity of these approaches are all challenges that must be addressed. In this paper, a novel solution to reversible video steganography based on DWT and QR codes is proposed to address these concerns. To increase the security level of the suggested method, an enhanced ElGamal cryptosystem has also been proposed. Prior to the embedding stage, the suggested method uses the modified ElGamal algorithm to encrypt secret QR codes. Concurrently, it applies a two-dimensional DWT on the Y-component of each video frame, producing LL, LH, HL, and HH sub-bands. Then, the encrypted Low (L), Medium (M), Quartile (Q), and High (H) QR codes are embedded into the HL sub-band, HH sub-band, U-component, and V-component of the video frames, respectively, using the LSB technique. Extensive testing showed the approach to be very secure and highly imperceptible, as well as highly resistant to attacks from Salt & Pepper, Gaussian, Poisson, and Speckle noises, with an average SSIM of more than 0.91. Beyond visual imperceptibility, the suggested method exceeds current methods with an average PSNR of 52.143 dB and an embedding capacity of 1 bpp.
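The abstract does not reproduce the paper's exact transform or embedding parameters, so the following is only a minimal, hypothetical sketch of the DWT-plus-LSB stage: it assumes a single-level unnormalized integer Haar transform and plain 1-bit LSB substitution as stand-ins for the method described, with the payload bits standing in for an encrypted QR code.

```python
def haar_dwt2(pixels):
    # Single-level 2D Haar transform (unnormalized integer variant): each
    # 2x2 block [[a, b], [c, d]] yields one coefficient per sub-band.
    h, w = len(pixels), len(pixels[0])
    LL, LH, HL, HH = ([[0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = pixels[i][j], pixels[i][j + 1]
            c, d = pixels[i + 1][j], pixels[i + 1][j + 1]
            LL[i // 2][j // 2] = a + b + c + d   # approximation
            LH[i // 2][j // 2] = a + b - c - d   # detail (row differences)
            HL[i // 2][j // 2] = a - b + c - d   # detail (column differences)
            HH[i // 2][j // 2] = a - b - c + d   # diagonal detail
    return LL, LH, HL, HH

def lsb_embed(band, bits):
    # Overwrite the least significant bit of successive coefficients
    # with the payload bits (here standing in for encrypted QR bits).
    coords = [(i, j) for i in range(len(band)) for j in range(len(band[0]))]
    for (i, j), bit in zip(coords, bits):
        band[i][j] = (band[i][j] & ~1) | bit

def lsb_extract(band, n):
    # Read back the first n least significant bits in the same order.
    flat = [band[i][j] & 1 for i in range(len(band)) for j in range(len(band[0]))]
    return flat[:n]

# Toy 4x4 luminance block; real frames would be processed per-frame.
y_block = [[52, 55, 61, 59],
           [79, 61, 76, 41],
           [10, 20, 30, 40],
           [ 5,  6,  7,  8]]
LL, LH, HL, HH = haar_dwt2(y_block)
payload = [1, 0, 1, 1]
lsb_embed(HL, payload)
assert lsb_extract(HL, len(payload)) == payload
```

Because LSB substitution changes each touched coefficient by at most one, the perturbation to the reconstructed frame stays small, which is consistent with the high PSNR/SSIM figures the abstract reports.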
Related papers
- LongVideo-R1: Smart Navigation for Low-cost Long Video Understanding [106.23494088118571]
LongVideo-R1 is a multimodal large language model (MLLM) agent for efficient video context navigation. It infers the most informative video clip for subsequent processing. The LongVideo-R1 agent is fine-tuned upon the Qwen-3-8B model through a two-stage paradigm.
arXiv Detail & Related papers (2026-02-24T13:49:47Z) - Optimizing Region of Interest Selection for Effective Embedding in Video Steganography Based on Genetic Algorithms [1.6114012813668932]
This paper proposes a new method for video steganography that utilizes a Genetic Algorithm (GA) to identify the Region of Interest (ROI) in the cover video. The secret data is encrypted using the Advanced Encryption Standard (AES), a widely accepted encryption standard, before being embedded into the cover video. The proposed method has a high embedding capacity and efficiency, with a PSNR ranging between 64 and 75 dB, which indicates that the embedded data is almost indistinguishable from the original video.
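The summary does not state the GA's fitness function, so the sketch below is a hypothetical stand-in: an exhaustive scan over candidate blocks scored by local variance (on the common assumption that busy, high-texture regions hide LSB changes best) replaces the GA search itself.

```python
def block_variance(img, r, c, size):
    # Variance of the size x size block whose top-left corner is (r, c).
    vals = [img[i][j] for i in range(r, r + size) for j in range(c, c + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def select_roi(img, size):
    # Exhaustive stand-in for the GA search: return the top-left corner of
    # the non-overlapping block with the highest variance-based fitness.
    h, w = len(img), len(img[0])
    candidates = [(r, c) for r in range(0, h - size + 1, size)
                         for c in range(0, w - size + 1, size)]
    return max(candidates, key=lambda rc: block_variance(img, rc[0], rc[1], size))

# Toy frame: the upper-right 2x2 block is the only textured region.
frame = [[10, 10,   0, 255],
         [10, 10, 255,   0],
         [10, 10,  10,  10],
         [10, 10,  10,  10]]
roi = select_roi(frame, 2)
assert roi == (0, 2)
```

A real GA would evolve a population of candidate ROIs under the same kind of fitness score rather than enumerating every block, which matters once the search space is large.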
arXiv Detail & Related papers (2025-08-19T10:16:45Z) - Leveraging Pre-Trained Visual Models for AI-Generated Video Detection [54.88903878778194]
The field of video generation has advanced beyond DeepFakes, creating an urgent need for methods capable of detecting AI-generated videos with generic content. We propose a novel approach that leverages pre-trained visual models to distinguish between real and generated videos. Our method achieves high detection accuracy, above 90% on average, underscoring its effectiveness.
arXiv Detail & Related papers (2025-07-17T15:36:39Z) - Securing Immersive 360 Video Streams through Attribute-Based Selective Encryption [1.6768151308423365]
This paper proposes a novel framework integrating Attribute-Based Encryption (ABE) with selective encryption techniques tailored specifically for tiled 360° video streaming. Our approach employs selective encryption of frames at varying levels to reduce computational overhead while ensuring robust protection against unauthorized access. We deploy and evaluate our proposed approach using the CloudLab testbed, comparing its performance against traditional HTTPS streaming.
arXiv Detail & Related papers (2025-05-07T14:37:13Z) - Mixing Algorithm for Extending the Tiers of the Unapparent Information Send through the Audio Streams [0.0]
Secrecy and efficiency can be obtained through steganographic involvement. This paper analyzes existing approaches and proposes a solution, evaluating performance in terms of robustness, security, and hiding capacity.
arXiv Detail & Related papers (2025-02-18T05:08:45Z) - Enhancing Long Video Generation Consistency without Tuning [92.1714656167712]
We address issues in enhancing the consistency and coherence of videos generated with either single or multiple prompts. We propose the Time-frequency based temporal Attention Reweighting Algorithm (TiARA), which judiciously edits the attention score matrix. For videos generated from multiple prompts, we further uncover key factors, such as the alignment of the prompts, that affect generation quality. Inspired by our analyses, we propose PromptBlend, an advanced prompt pipeline that systematically aligns the prompts.
arXiv Detail & Related papers (2024-12-23T03:56:27Z) - Comparative Analysis of AES, Blowfish, Twofish, Salsa20, and ChaCha20 for Image Encryption [0.4711628883579317]
This study delves into the prevalent cryptographic methods and algorithms utilized for block and stream encryption. It examines encoding techniques such as the Advanced Encryption Standard (AES), Blowfish, Twofish, Salsa20, and ChaCha20. The results showed that ChaCha20 had the best average time for both encryption and decryption, being over 50% faster than some other algorithms.
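Timing comparisons of this kind reduce to averaging wall-clock time over repeated encryptions of the same buffer. The harness below is a minimal sketch of that setup; the XOR "cipher" is a deliberately insecure stand-in, since a real comparison would time AES and ChaCha20 from a cryptography library instead.

```python
import time

def xor_cipher(key, data):
    # Stand-in "cipher": XOR with a repeating key. NOT secure; it only
    # gives the harness something deterministic to time.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def time_cipher(cipher, key, data, rounds=50):
    # Average wall-clock seconds per encryption over `rounds` runs.
    start = time.perf_counter()
    for _ in range(rounds):
        cipher(key, data)
    return (time.perf_counter() - start) / rounds

key = b"sixteen byte key"
data = bytes(range(256)) * 64          # 16 KiB of sample plaintext
avg = time_cipher(xor_cipher, key, data)
# XOR is its own inverse, so the round trip must reproduce the input.
assert xor_cipher(key, xor_cipher(key, data)) == data
```

Plugging each real algorithm's encrypt callable into `time_cipher` with the same key sizes and buffer gives the per-algorithm averages such a study would tabulate.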
arXiv Detail & Related papers (2024-07-23T08:26:05Z) - HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression [51.04820313355164]
HybridFlow combines the continuous-feature-based and codebook-based streams to achieve both high perceptual quality and high fidelity under extremely low bitrates. Experimental results demonstrate superior performance across several datasets under extremely low bitrates.
arXiv Detail & Related papers (2024-04-20T13:19:08Z) - Secure Information Embedding in Images with Hybrid Firefly Algorithm [2.9182357325967145]
This research introduces a novel steganographic approach for concealing a confidential Portable Document Format (PDF) document within a host image. The purpose of this research is to accomplish two main goals: increasing the host image's capacity and reducing distortion. The findings indicate a decrease in image distortion and an accelerated rate of convergence in the search process.
arXiv Detail & Related papers (2023-12-21T01:50:02Z) - Large-capacity and Flexible Video Steganography via Invertible Neural Network [60.34588692333379]
We propose a Large-capacity and Flexible Video Steganography Network (LF-VSN). For large capacity, we present a reversible pipeline to perform hiding and recovery of multiple videos through a single invertible neural network (INN).
For flexibility, we propose a key-controllable scheme, enabling different receivers to recover particular secret videos from the same cover video through specific keys.
arXiv Detail & Related papers (2023-04-24T17:51:35Z) - Contrastive Masked Autoencoders for Self-Supervised Video Hashing [54.636976693527636]
Self-Supervised Video Hashing (SSVH) models learn to generate short binary representations for videos without ground-truth supervision.
We propose a simple yet effective one-stage SSVH method called ConMH, which incorporates video semantic information and video similarity relationship understanding.
arXiv Detail & Related papers (2022-11-21T06:48:14Z) - Hybrid Contrastive Quantization for Efficient Cross-View Video Retrieval [55.088635195893325]
We propose the first quantized representation learning method for cross-view video retrieval, namely Hybrid Contrastive Quantization (HCQ).
HCQ learns both coarse-grained and fine-grained quantizations with transformers, which provide complementary understandings for texts and videos.
Experiments on three Web video benchmark datasets demonstrate that HCQ achieves competitive performance with state-of-the-art non-compressed retrieval methods.
arXiv Detail & Related papers (2022-02-07T18:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.