Towards Robust Image-in-Audio Deep Steganography
- URL: http://arxiv.org/abs/2303.05007v1
- Date: Thu, 9 Mar 2023 03:16:04 GMT
- Title: Towards Robust Image-in-Audio Deep Steganography
- Authors: Jaume Ros Alonso, Margarita Geleta, Jordi Pons, Xavier Giro-i-Nieto
- Abstract summary: This paper extends and enhances an existing image-in-audio deep steganography method by focusing on improving its robustness.
The proposed enhancements include modifications to the loss function, utilization of the Short-Time Fourier Transform (STFT), introduction of redundancy in the encoding process for error correction, and buffering of additional information in the pixel subconvolution operation.
- Score: 14.1081872409308
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The field of steganography has experienced a surge of interest due to the
recent advancements in AI-powered techniques, particularly in the context of
multimodal setups that enable the concealment of signals within signals of a
different nature. The primary objectives of all steganographic methods are to
achieve perceptual transparency, robustness, and large embedding capacity -
which often present conflicting goals that classical methods have struggled to
reconcile. This paper extends and enhances an existing image-in-audio deep
steganography method by focusing on improving its robustness. The proposed
enhancements include modifications to the loss function, utilization of the
Short-Time Fourier Transform (STFT), introduction of redundancy in the encoding
process for error correction, and buffering of additional information in the
pixel subconvolution operation. The results demonstrate that our approach
outperforms the existing method in terms of robustness and perceptual
transparency.
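The abstract names four concrete levers: a modified loss, an STFT-domain representation of the cover audio, redundancy in the encoding for error correction, and extra buffered channels in the pixel subconvolution. The minimal PyTorch sketch below only illustrates how those pieces could fit together; it is not the authors' code, and the STFT parameters, redundancy factor, network sizes, and the way the buffer channels are dropped are assumptions made here for clarity.

```python
# Minimal sketch (not the authors' code): illustrates three ingredients named in the
# abstract -- an STFT representation of the cover audio, redundant embedding of the
# secret for error correction, and a pixel subconvolution with extra "buffer" channels.
# All shapes, layer sizes and the redundancy factor are illustrative assumptions.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128    # STFT parameters (assumed)
REDUNDANCY = 3           # the secret is embedded 3 times along the time axis (assumed)
BUFFER_CH = 1            # extra channels produced and then dropped before the shuffle

class ToyHideNet(nn.Module):
    """Fuses the secret image into the cover's magnitude spectrogram (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.mix = nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, cover_mag, secret):
        # cover_mag, secret: (B, 1, F, T) tensors of matching size
        return cover_mag + 0.01 * self.mix(torch.cat([cover_mag, secret], dim=1))

class ToyRevealNet(nn.Module):
    """Recovers the secret via a pixel subconvolution whose buffer channels are discarded."""
    def __init__(self, upscale=2):
        super().__init__()
        out_ch = upscale ** 2 + BUFFER_CH        # extra channels act as a buffer
        self.head = nn.Conv2d(1, out_ch, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, container_mag):
        feats = self.head(container_mag)
        feats = feats[:, :-BUFFER_CH]            # drop the buffered channels
        return self.shuffle(feats)

def embed_with_redundancy(secret, redundancy=REDUNDANCY):
    """Tile the secret along the time axis so the whole pattern is embedded `redundancy` times."""
    return secret.repeat(1, 1, 1, redundancy)

# --- tiny end-to-end shape check ---
audio = torch.randn(1, 16000)                    # 1 s of cover audio at 16 kHz
spec = torch.stft(audio, N_FFT, HOP, window=torch.hann_window(N_FFT), return_complex=True)
mag = spec.abs().unsqueeze(1)                    # (1, 1, F, T)
secret = torch.rand(1, 1, mag.shape[2], mag.shape[3] // REDUNDANCY)
container = ToyHideNet()(mag, embed_with_redundancy(secret))
revealed = ToyRevealNet()(container)             # upsampled estimate of the secret
```

In a complete pipeline the container spectrogram would be inverted back to a waveform (e.g. with torch.istft) before transmission, and the redundant copies of the secret would be averaged or majority-voted at decoding time to correct errors introduced by the channel.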
Related papers
- Robust Network Learning via Inverse Scale Variational Sparsification [55.64935887249435]
We introduce an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation.
Unlike frequency-based methods, our approach removes noise by smoothing out small-scale features.
We show the efficacy of our approach through enhanced robustness against various noise types.
arXiv Detail & Related papers (2024-09-27T03:17:35Z) - Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as a channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
arXiv Detail & Related papers (2024-06-26T17:40:30Z) - DA-HFNet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention [12.36906630199689]
We construct a DA-HFNet forged image dataset guided by text- or image-assisted GAN and diffusion models.
Our goal is to utilize a hierarchical progressive network to capture forged artifacts at different scales for detection and localization.
arXiv Detail & Related papers (2024-06-03T16:13:33Z) - Inhomogeneous illumination image enhancement under extremely low visibility condition [3.534798835599242]
Imaging through dense fog presents unique challenges: essential visual information needed for applications such as object detection and recognition is obscured, hindering conventional image processing methods.
We introduce in this paper a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering to enhance only vital signal information.
Our findings demonstrate that the proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
arXiv Detail & Related papers (2024-04-26T16:09:42Z) - Misalignment-Robust Frequency Distribution Loss for Image Transformation [51.0462138717502]
This paper aims to address a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution.
We introduce a novel and simple Frequency Distribution Loss (FDL) for computing distribution distance within the frequency domain.
Our method is empirically proven effective as a training constraint due to the thoughtful utilization of global information in the frequency domain.
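As a rough illustration of what a distribution distance in the frequency domain can look like, the sketch below computes a 1-D Wasserstein distance between the sorted FFT magnitudes of prediction and target. This is only one simple instantiation written for this digest; the actual FDL formulation in the cited paper may differ.

```python
# Illustrative sketch only: a simple frequency-domain distribution distance,
# not necessarily the FDL formulation of the cited paper.
import torch

def frequency_distribution_loss(pred, target):
    """pred, target: (B, C, H, W) images; returns a scalar loss."""
    pred_mag = torch.fft.fft2(pred, norm="ortho").abs().flatten(start_dim=1)
    targ_mag = torch.fft.fft2(target, norm="ortho").abs().flatten(start_dim=1)
    # Sorting and comparing element-wise gives the 1-D Wasserstein-1 distance
    # between the empirical distributions of frequency magnitudes.
    return (pred_mag.sort(dim=1).values - targ_mag.sort(dim=1).values).abs().mean()

loss = frequency_distribution_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```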
arXiv Detail & Related papers (2024-02-28T09:27:41Z) - Transforming gradient-based techniques into interpretable methods [3.6763102409647526]
We introduce GAD (Gradient Artificial Distancing) as a supportive framework for gradient-based techniques.
Its primary objective is to accentuate influential regions by establishing distinctions between classes.
Empirical investigations involving occluded images have demonstrated that the identified regions through this methodology indeed play a pivotal role in facilitating class differentiation.
arXiv Detail & Related papers (2024-01-25T09:24:19Z) - Low-light Image Enhancement via CLIP-Fourier Guided Wavelet Diffusion [28.049668999586583]
We propose a novel and robust low-light image enhancement method via CLIP-Fourier Guided Wavelet Diffusion, abbreviated as CFWD.
CFWD leverages multimodal visual-language information in the frequency domain space created by multiple wavelet transforms to guide the enhancement process.
Our approach outperforms existing state-of-the-art methods, achieving significant progress in image quality and noise suppression.
arXiv Detail & Related papers (2024-01-08T10:08:48Z) - Stable Messenger: Steganography for Message-Concealed Image Generation [6.310429296631073]
We introduce "message accuracy", a novel metric that evaluates decoded messages in their entirety for a more holistic assessment.
We propose an adaptive universal loss tailored to enhance message accuracy, named Log-Sum-Exponential (LSE) loss.
We also introduce a new latent-aware encoding technique in our framework, harnessing pretrained Stable Diffusion for advanced steganographic image generation.
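One plausible reading of a Log-Sum-Exponential loss for message recovery is a log-sum-exp over per-bit errors, which behaves like a smooth maximum and therefore penalises the worst-decoded bits hardest. The sketch below is only that reading, written for this digest; the cited paper's exact formulation may differ.

```python
# Illustrative sketch only: a log-sum-exp aggregation of per-bit decoding errors.
import torch
import torch.nn.functional as F

def lse_message_loss(logits, bits, tau=1.0):
    """logits: raw decoder outputs, bits: {0,1} targets, both of shape (B, L)."""
    per_bit = F.binary_cross_entropy_with_logits(logits, bits, reduction="none")
    # logsumexp acts as a smooth max over the per-bit losses in each message.
    return tau * torch.logsumexp(per_bit / tau, dim=1).mean()

loss = lse_message_loss(torch.randn(4, 128), torch.randint(0, 2, (4, 128)).float())
```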
arXiv Detail & Related papers (2023-12-03T05:02:43Z) - Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from long inference times, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.