Nonlinear Discrete Optimisation of Reversible Steganographic Coding
- URL: http://arxiv.org/abs/2202.13133v1
- Date: Sat, 26 Feb 2022 13:02:32 GMT
- Title: Nonlinear Discrete Optimisation of Reversible Steganographic Coding
- Authors: Ching-Chun Chang
- Abstract summary: Steganographic distortion might be inadmissible in fidelity-sensitive situations.
In this study, we formulate reversible steganographic coding as a nonlinear discrete optimisation problem.
Linearisation techniques are developed to enable mixed-integer linear programming.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Authentication mechanisms are at the forefront of defending the world from
various types of cybercrime. Steganography can serve as an authentication
solution by embedding a digital signature into a carrier object to ensure the
integrity of the object and simultaneously lighten the burden of metadata
management. However, steganographic distortion, albeit generally imperceptible
to human sensory systems, might be inadmissible in fidelity-sensitive
situations. This has led to the concept of reversible steganography. A
fundamental element of reversible steganography is predictive analytics, for
which powerful neural network models have been effectively deployed. As another
core aspect, contemporary reversible steganographic coding is based primarily
on heuristics and therefore worth further study. While attempts have been made
to realise automatic coding with neural networks, perfect reversibility is
still unattainable with such unexplainable intelligent machinery. Instead of
relying on deep learning, we aim to derive an optimal coding by means of
mathematical optimisation. In this study, we formulate reversible
steganographic coding as a nonlinear discrete optimisation problem with a
logarithmic capacity constraint and a quadratic distortion objective.
Linearisation techniques are developed to enable mixed-integer linear
programming. Experimental results validate the near-optimality of the proposed
optimisation algorithm benchmarked against a brute-force method.
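To make the formulation concrete, the following sketch illustrates the brute-force baseline the abstract mentions, on a toy version of the problem. All coefficients here are hypothetical: each prediction-error bin `b` occurs with probability `p[b]` and is assigned one candidate code; candidate `j` contributes `log2(c[b][j])` bits of payload (the logarithmic capacity constraint) and a squared shift `d[b][j]**2` (the quadratic distortion objective). Precomputing the logarithms and squares for each discrete candidate turns both nonlinear terms into linear functions of binary assignment variables, which is the linearisation idea that enables mixed-integer linear programming; the actual model in the paper is more elaborate.

```python
import itertools
import math

# Hypothetical toy instance: 3 prediction-error bins, 3 candidate codes each.
p = [0.5, 0.3, 0.2]                     # bin probabilities
c = [[1, 2, 4], [1, 2, 4], [1, 2, 4]]   # candidate expansion factors (payload sources)
d = [[0, 1, 3], [0, 1, 3], [0, 1, 3]]   # candidate shifts (distortion sources)
C = 0.6                                 # required capacity in bits per sample

def brute_force(p, c, d, C):
    """Enumerate every assignment of one candidate per bin and keep the
    feasible assignment with minimum expected quadratic distortion."""
    best, best_assign = None, None
    for assign in itertools.product(range(len(c[0])), repeat=len(p)):
        # Logarithmic capacity constraint: linear in the assignment once
        # log2(c[b][j]) is precomputed per candidate.
        capacity = sum(p[b] * math.log2(c[b][j]) for b, j in enumerate(assign))
        if capacity < C:
            continue
        # Quadratic distortion objective: likewise linear after precomputing
        # the squared shifts.
        distortion = sum(p[b] * d[b][j] ** 2 for b, j in enumerate(assign))
        if best is None or distortion < best:
            best, best_assign = distortion, assign
    return best, best_assign

best, assign = brute_force(p, c, d, C)
print(best, assign)
```

On this toy instance the search visits 27 assignments; the exponential growth in the number of bins and candidates is exactly why the paper benchmarks a MILP solver against this exhaustive baseline rather than using the baseline at scale.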
Related papers
- Nonlinear Computation with Linear Optics via Source-Position Encoding [0.0]
We introduce a novel method to achieve nonlinear computation in fully linear media.
Our method can operate at low power and requires only the ability to drive the optical system at a data-dependent spatial position.
We formulate a fully automated, topology-optimization-based hardware design framework for extremely specialized optical neural networks.
arXiv Detail & Related papers (2025-04-29T03:55:05Z)
- Model-Agnostic Zeroth-Order Policy Optimization for Meta-Learning of Ergodic Linear Quadratic Regulators [13.343937277604892]
We study the problem of using meta-learning to deal with uncertainty and heterogeneity in ergodic linear quadratic regulators.
We propose an algorithm that omits the estimation of policy Hessian, which applies to tasks of learning a set of heterogeneous but similar linear dynamic systems.
We provide a convergence result for the exact gradient descent process by analyzing the boundedness and smoothness of the gradient for the meta-objective.
arXiv Detail & Related papers (2024-05-27T17:26:36Z)
- The END: An Equivariant Neural Decoder for Quantum Error Correction [73.4384623973809]
We introduce a data efficient neural decoder that exploits the symmetries of the problem.
We propose a novel equivariant architecture that achieves state-of-the-art accuracy compared to previous neural decoders.
arXiv Detail & Related papers (2023-04-14T19:46:39Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Less is More: Reversible Steganography with Uncertainty-Aware Predictive Analytics [12.10752011938661]
Residual modulation is recognised as the most advanced reversible steganographic algorithm for digital images.
This paper analyses the predictive uncertainty and endows the predictive module with the option to abstain when encountering a high level of uncertainty.
arXiv Detail & Related papers (2022-02-05T09:04:50Z)
- Predictive coding, precision and natural gradients [2.1601966913620325]
We show that hierarchical predictive coding networks with learnable precision are able to solve various supervised and unsupervised learning tasks.
When applied to unsupervised auto-encoding of image inputs, the deterministic network produces hierarchically organized and disentangled embeddings.
arXiv Detail & Related papers (2021-11-12T21:05:03Z)
- A Sparse Coding Interpretation of Neural Networks and Theoretical Implications [0.0]
Deep convolutional neural networks have achieved unprecedented performance in various computer vision tasks.
We propose a sparse coding interpretation of neural networks that have ReLU activation.
We derive a complete convolutional neural network without normalization and pooling.
arXiv Detail & Related papers (2021-08-14T21:54:47Z)
- Deep Learning for Reversible Steganography: Principles and Insights [31.305695595971827]
Reversible steganography has emerged as a promising research paradigm.
A recent approach is to partition the steganographic system into modules and develop each module independently.
In this paper, we investigate the modular framework and deploy deep neural networks in a reversible steganographic scheme.
arXiv Detail & Related papers (2021-06-13T05:32:17Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and one-to-one error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.