Learning Iterative Neural Optimizers for Image Steganography
- URL: http://arxiv.org/abs/2303.16206v1
- Date: Mon, 27 Mar 2023 19:17:07 GMT
- Title: Learning Iterative Neural Optimizers for Image Steganography
- Authors: Xiangyu Chen, Varsha Kishore, Kilian Q Weinberger
- Abstract summary: In this paper, we argue that image steganography is inherently performed on the (elusive) manifold of natural images.
We train an iterative neural network to stay close to the manifold of natural images throughout the optimization.
In comparison to previous state-of-the-art encoder-decoder-based steganography methods, it reduces the recovery error rate by multiple orders of magnitude.
- Score: 29.009110889917856
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Image steganography is the process of concealing secret information in images
through imperceptible changes. Recent work has formulated this task as a
classic constrained optimization problem. In this paper, we argue that image
steganography is inherently performed on the (elusive) manifold of natural
images, and propose an iterative neural network trained to perform the
optimization steps. In contrast to classical optimization methods like L-BFGS
or projected gradient descent, we train the neural network to also stay close
to the manifold of natural images throughout the optimization. We show that our
learned neural optimization is faster and more reliable than classical
optimization approaches. In comparison to previous state-of-the-art
encoder-decoder-based steganography methods, it reduces the recovery error rate
by multiple orders of magnitude and achieves zero error up to 3 bits per pixel
(bpp) without the need for error-correcting codes.
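The approach described above amounts to unrolling the optimization and replacing each hand-crafted step (as in L-BFGS or projected gradient descent) with a learned update network. Below is a minimal, hypothetical PyTorch sketch of that loop; the architectures (`Decoder`, `UpdateNet`), loss, and step count are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    """Predicts one hidden-bit logit per pixel from the stego image."""
    def __init__(self, bits_per_pixel=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, bits_per_pixel, 3, padding=1),
        )

    def forward(self, stego):
        return self.net(stego)

class UpdateNet(nn.Module):
    """Learned optimizer step: maps (stego, cover, decoding-loss gradient) to an image update."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, stego, cover, grad):
        return self.net(torch.cat([stego, cover, grad], dim=1))

def encode(cover, message, decoder, updater, n_steps=15):
    """Iteratively refine the stego image with the learned update network.

    cover:   (B, 3, H, W) image in [0, 1]
    message: (B, bits_per_pixel, H, W) float tensor with values in {0, 1}
    """
    stego = cover.clone()
    for _ in range(n_steps):
        stego = stego.detach().requires_grad_(True)
        msg_loss = F.binary_cross_entropy_with_logits(decoder(stego), message)
        grad, = torch.autograd.grad(msg_loss, stego)
        # The learned step replaces a hand-crafted L-BFGS / PGD update; training
        # it jointly with the decoder is what keeps iterates near natural images.
        stego = (stego + updater(stego, cover, grad)).clamp(0, 1)
    return stego.detach()
```

In the paper's setting, the decoder and update network would be trained jointly so that intermediate iterates remain natural-looking images; the sketch only shows an inference-time encoding loop.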
Related papers
- Self-Supervised Single-Image Deconvolution with Siamese Neural Networks [6.138671548064356]
Inverse problems in image reconstruction are fundamentally complicated by unknown noise properties.
Deep learning methods allow for flexible parametrization of the noise and learning its properties directly from the data.
We tackle this problem with Fast Fourier Transform convolutions that provide training speed-up in 3D deconvolution tasks.
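As a rough illustration of the FFT speed-up mentioned above, the sketch below implements circular 3D convolution via the convolution theorem in PyTorch; it is a generic illustration, not the paper's Siamese training setup.

```python
import torch

def fft_conv3d(volume, kernel):
    """Circular 3D convolution via the convolution theorem: IFFT(FFT(x) * FFT(k)).

    volume: (D, H, W) tensor; kernel: (d, h, w) tensor with d<=D, h<=H, w<=W.
    For large kernels this is far cheaper than a direct sliding-window convolution.
    """
    # Zero-pad the kernel to the volume size so both spectra match in shape.
    k = torch.zeros_like(volume)
    d, h, w = kernel.shape
    k[:d, :h, :w] = kernel
    # Roll so the kernel is centered at the origin, avoiding a spatial shift.
    k = torch.roll(k, shifts=(-(d // 2), -(h // 2), -(w // 2)), dims=(0, 1, 2))
    return torch.fft.ifftn(torch.fft.fftn(volume) * torch.fft.fftn(k)).real
```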
arXiv Detail & Related papers (2023-08-18T09:51:11Z) - Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalizing the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically searches for a better initialization at negligible cost.
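A hypothetical sketch of such a gradient-agreement quantity is shown below: it computes the mean pairwise cosine similarity between per-sample gradients at the current parameters. The exact definition of GradCosine (and its batch generalization) is given in the paper; this is only an assumed approximation.

```python
import torch
import torch.nn.functional as F

def grad_cosine(model, loss_fn, samples, targets):
    """Mean pairwise cosine similarity between per-sample gradients.

    Higher agreement between the samples' gradients at the initial parameters
    is taken as a sign of a good initialization; the paper maximizes such a
    quantity under a norm constraint on the weights.
    """
    per_sample_grads = []
    for x, y in zip(samples, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        per_sample_grads.append(torch.cat([g.flatten() for g in grads]))
    G = F.normalize(torch.stack(per_sample_grads), dim=1)   # (n, n_params), unit rows
    sim = G @ G.t()                                          # pairwise cosines
    n = sim.shape[0]
    return (sim.sum() - n) / (n * (n - 1))                   # mean off-diagonal entry
```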
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Learning to Optimize Quasi-Newton Methods [22.504971951262004]
This paper introduces a novel machine learning optimizer called LODO, which meta-learns the best preconditioner online during optimization.
Unlike other learning-to-optimize (L2O) methods, LODO does not require any meta-training on a training task distribution.
We show that the learned preconditioner approximates the inverse Hessian in noisy loss landscapes and can represent a wide range of inverse Hessians.
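The sketch below illustrates the general idea of meta-learning a preconditioner online, with no separate meta-training phase: a learnable matrix `P` is applied to the gradient, and `P` itself is updated so that the preconditioned step reduces the loss. LODO's actual parameterization differs; treat this as an assumed toy version.

```python
import torch

def lodo_style_step(params, loss_fn, P, lr=1e-2, meta_lr=1e-3):
    """One step of an online meta-learned preconditioner (toy version).

    `P` is a learnable matrix applied to the flattened gradient; it is updated
    on the fly so that the preconditioned step reduces the loss, without any
    separate meta-training task distribution.
    """
    loss = loss_fn(params)
    grad, = torch.autograd.grad(loss, params)
    step = (P @ grad.flatten()).view_as(grad)      # preconditioned direction
    new_params = params - lr * step                # candidate update (depends on P)
    meta_loss = loss_fn(new_params)                # meta-objective: loss after the step
    meta_grad, = torch.autograd.grad(meta_loss, P)
    with torch.no_grad():
        P -= meta_lr * meta_grad                   # online update of the preconditioner
    return new_params.detach().requires_grad_(True), loss.item()

# Toy usage (assumed setup): minimize a random convex quadratic.
w = torch.randn(10, requires_grad=True)
P = torch.eye(10, requires_grad=True)
A = torch.randn(10, 10)
A = A @ A.t() + torch.eye(10)
quadratic = lambda v: 0.5 * v @ A @ v
for _ in range(100):
    w, current_loss = lodo_style_step(w, quadratic, P)
```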
arXiv Detail & Related papers (2022-10-11T03:47:14Z) - Blind Image Deconvolution Using Variational Deep Image Prior [4.92175281564179]
This paper proposes a new variational deep image prior (VDIP) for blind image deconvolution.
VDIP exploits additive hand-crafted image priors on latent sharp images and approximates a distribution for each pixel to avoid suboptimal solutions.
Experiments show that the generated images have better quality than those of the original DIP on benchmark datasets.
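A minimal sketch of the per-pixel variational idea, assuming a standard reparameterized Gaussian on top of a DIP-style generator (the exact VDIP objective is in the paper):

```python
import torch
import torch.nn as nn

class VariationalHead(nn.Module):
    """DIP-style generator head producing a per-pixel Gaussian instead of a point estimate."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.mean = nn.Conv2d(feat_ch, 3, 3, padding=1)
        self.logvar = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, feats):
        mu, logvar = self.mean(feats), self.logvar(feats)
        # Reparameterization trick: sample the latent sharp image from N(mu, sigma^2).
        sample = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return sample, mu, logvar

# During optimization, the blurred sample is matched to the observed blurry image,
# and (mu, logvar) are regularized so the per-pixel distribution discourages
# collapsing onto suboptimal point solutions.
```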
arXiv Detail & Related papers (2022-02-01T01:33:58Z) - Multi-scale Neural ODEs for 3D Medical Image Registration [7.715565365558909]
Image registration plays an important role in medical image analysis.
Deep learning methods such as learn-to-map approaches are much faster, but an iterative or coarse-to-fine scheme is still required to improve accuracy when handling large motions.
In this work, we proposed to learn a registration via a multi-scale neural ODE model.
arXiv Detail & Related papers (2021-06-16T00:26:53Z) - DRO: Deep Recurrent Optimizer for Structure-from-Motion [46.34708595941016]
This paper presents a novel optimization method based on recurrent neural networks for structure-from-motion (SfM).
Our neural optimizer alternately updates the depth and camera poses over iterations to minimize a feature-metric cost.
Experiments demonstrate that our recurrent computation effectively reduces the feature-metric cost while refining the depth and poses.
arXiv Detail & Related papers (2021-03-24T13:59:40Z) - Human Body Model Fitting by Learned Gradient Descent [48.79414884222403]
We propose a novel algorithm for the fitting of 3D human shape to images.
We show that this algorithm is fast (avg. 120ms convergence), robust across datasets, and achieves state-of-the-art results on public evaluation datasets.
arXiv Detail & Related papers (2020-08-19T14:26:47Z) - End-to-end Interpretable Learning of Non-blind Image Deblurring [102.75982704671029]
Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients.
We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels.
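The preconditioned Richardson iteration itself is simple; a generic PyTorch-style sketch is below, where `blur` and `precond` stand for linear filtering operators (the paper learns the approximate inverse filters; here they are assumed callables):

```python
import torch

def richardson_deconv(y, blur, precond, n_iter=50):
    """Preconditioned Richardson iteration for non-blind deblurring.

    Approximately solves blur(x) = y by iterating
        x_{k+1} = x_k + precond(y - blur(x_k)),
    where `precond` is an approximate inverse of the (known) blur.
    """
    x = y.clone()
    for _ in range(n_iter):
        x = x + precond(y - blur(x))   # correct x by the preconditioned residual
    return x
```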
arXiv Detail & Related papers (2020-07-03T15:45:01Z) - A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z) - Investigating Generalization in Neural Networks under Optimally Evolved Training Perturbations [46.8676764079206]
We study the generalization properties of neural networks under input perturbations.
We show that minimal training data corruption by a few pixel modifications can cause drastic overfitting.
We propose an evolutionary algorithm to search for optimal pixel perturbations.
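A hypothetical sketch of such a search is below: a simple elitist evolutionary loop over individuals that each encode a handful of pixel edits, with an assumed `fitness` callable that retrains a network on the perturbed data and scores the resulting generalization gap. The paper's actual encoding and operators may differ.

```python
import copy
import random

def evolve_pixel_perturbation(train_images, fitness, n_pixels=5,
                              pop_size=20, n_generations=50, mutate_p=0.3):
    """Elitist evolutionary search for a small set of training-pixel edits.

    Each individual is a list of (image_index, row, col, new_value) edits.
    `fitness(edits)` is assumed to retrain (or fine-tune) a network on the
    perturbed data and return a score such as the generalization gap, so
    each evaluation is expensive.
    """
    n, h, w = len(train_images), train_images[0].shape[0], train_images[0].shape[1]

    def random_edit():
        return (random.randrange(n), random.randrange(h), random.randrange(w), random.random())

    def mutate(individual):
        child = copy.deepcopy(individual)
        for i in range(len(child)):
            if random.random() < mutate_p:
                child[i] = random_edit()
        return child

    population = [[random_edit() for _ in range(n_pixels)] for _ in range(pop_size)]
    for _ in range(n_generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: pop_size // 4]                       # keep the best quarter
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)
```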
arXiv Detail & Related papers (2020-03-14T14:38:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.