Null Space Properties of Neural Networks with Applications to Image
Steganography
- URL: http://arxiv.org/abs/2401.10262v1
- Date: Mon, 1 Jan 2024 03:32:28 GMT
- Title: Null Space Properties of Neural Networks with Applications to Image
Steganography
- Authors: Xiang Li, Kevin M. Short
- Abstract summary: The null space of a given neural network can tell us the part of the input data that makes no contribution to the final prediction.
One application described here leads to a method of image steganography.
- Score: 6.063583864878311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the null space properties of neural networks. We extend
the null space definition from linear to nonlinear maps and discuss the
presence of a null space in neural networks. The null space of a given neural
network can tell us the part of the input data that makes no contribution to
the final prediction, which means it can be used to trick the neural network. This
reveals an inherent weakness in neural networks that can be exploited. One
application described here leads to a method of image steganography. Through
experiments on image datasets such as MNIST, we show that we can use null space
components to force the neural network to choose a selected hidden image class,
even though the overall image can be made to look like a completely different
image. We conclude by comparing what a human viewer would see with the part of the image that the neural network actually uses to make its predictions, showing that what the neural network "sees" is completely different from what we would expect.
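To make the mechanism concrete, the following is a minimal numpy sketch, not the authors' implementation: the toy layer sizes, the random weights, and the restriction to the null space of the first linear layer are illustrative assumptions. Any perturbation lying in the null space of the first weight matrix is mapped to zero before it can influence later layers, so the input can be altered arbitrarily along those directions without changing the prediction.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 784 inputs -> 64 hidden units -> 10 classes
# (sizes chosen to echo MNIST, purely for illustration).
W1, b1 = rng.standard_normal((64, 784)), rng.standard_normal(64)
W2, b2 = rng.standard_normal((10, 64)), rng.standard_normal(10)

def predict(x):
    h = np.maximum(W1 @ x + b1, 0.0)    # ReLU hidden layer
    return int(np.argmax(W2 @ h + b2))  # predicted class

# Orthonormal basis of the null space of W1 from the SVD: the rows of Vt
# beyond rank(W1) are mapped (numerically) to zero by W1.
_, s, Vt = np.linalg.svd(W1)
null_basis = Vt[len(s):]                # shape (784 - 64, 784)

x = rng.standard_normal(784)            # stand-in for an input image
n = null_basis.T @ rng.standard_normal(null_basis.shape[0])

# The perturbation n changes the input but not the hidden activations,
# so the prediction is identical even though x + n can look very different.
print(np.allclose(W1 @ n, 0.0, atol=1e-9))   # True
print(predict(x), predict(x + n))            # same class twice

Roughly speaking, the steganography application exploits these ignored directions: they allow the overall image to be made to look like something completely different, while the components the network does respond to encode the hidden image class.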
Related papers
- Extreme Compression of Adaptive Neural Images [6.646501936980895]
Implicit Neural Representations (INRs) and Neural Fields are a novel paradigm for signal representation, from images and audio to 3D scenes and videos.
We present a novel analysis on compressing neural fields, with the focus on images.
We also introduce Adaptive Neural Images (ANI), an efficient neural representation that enables adaptation to different inference or transmission requirements.
arXiv Detail & Related papers (2024-05-27T03:54:09Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Closed-Form Interpretation of Neural Network Classifiers with Symbolic Gradients [0.7832189413179361]
I introduce a unified framework for finding a closed-form interpretation of any single neuron in an artificial neural network.
I demonstrate how to interpret neural network classifiers to reveal closed-form expressions of the concepts encoded in their decision boundaries.
arXiv Detail & Related papers (2024-01-10T07:47:42Z)
- Why do CNNs excel at feature extraction? A mathematical explanation [53.807657273043446]
We introduce a novel model for image classification, based on feature extraction, that can be used to generate images resembling real-world datasets.
In our proof, we construct piecewise linear functions that detect the presence of features, and show that they can be realized by a convolutional network.
arXiv Detail & Related papers (2023-07-03T10:41:34Z)
- An Analytical Approach to Compute the Exact Preimage of Feed-Forward Neural Networks [0.0]
This study gives a method to compute the exact preimage of any feed-forward neural network with linear or piecewise linear activation functions in its hidden layers.
In contrast to other methods, it does not return a single solution for a given output but instead returns the entire, exact preimage analytically.
arXiv Detail & Related papers (2022-02-28T18:12:04Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Neural Photofit: Gaze-based Mental Image Reconstruction [25.67771238116104]
We propose a novel method that leverages human fixations to visually decode the image a person has in mind into a photofit (facial composite).
Our method combines three neural networks: An encoder, a scoring network, and a decoder.
We show that our method significantly outperforms a mean baseline predictor and report on a human study that shows that we can decode photofits that are visually plausible and close to the observer's mental image.
arXiv Detail & Related papers (2021-08-17T09:11:32Z)
- The Representation Theory of Neural Networks [7.724617675868718]
We show that neural networks can be represented via the mathematical theory of quiver representations.
We show that network quivers gently adapt to common neural network concepts.
We also provide a quiver representation model to understand how a neural network creates representations from the data.
arXiv Detail & Related papers (2020-07-23T19:02:14Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.