A Synthetic Pseudo-Autoencoder Invites Examination of Tacit Assumptions in Neural Network Design
- URL: http://arxiv.org/abs/2506.12076v1
- Date: Fri, 06 Jun 2025 20:32:14 GMT
- Title: A Synthetic Pseudo-Autoencoder Invites Examination of Tacit Assumptions in Neural Network Design
- Authors: Assaf Marron
- Abstract summary: We present a handcrafted neural network that solves the problem of encoding an arbitrary set of integers into a single numerical variable. While using only standard neural network operations, we make design choices that challenge common notions in this area. This neural net is not intended for practical application.
- Score: 0.9627066153699632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a handcrafted neural network that, without training, solves the seemingly difficult problem of encoding an arbitrary set of integers into a single numerical variable, and then recovering the original elements. While using only standard neural network operations -- weighted sums with biases and identity activation -- we make design choices that challenge common notions in this area around representation, continuity of domains, computation, learnability and more. For example, our construction is designed, not learned; it represents multiple values using a single one by simply concatenating digits without compression, and it relies on hardware-level truncation of rightmost digits as a bit-manipulation mechanism. This neural net is not intended for practical application. Instead, we see its resemblance to -- and deviation from -- standard trained autoencoders as an invitation to examine assumptions that may unnecessarily constrain the development of systems and models based on autoencoding and machine learning. Motivated in part by our research on a theory of biological evolution centered around natural autoencoding of species characteristics, we conclude by refining the discussion with a biological perspective.
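As a rough illustration of the mechanism the abstract describes, the sketch below encodes a fixed-length tuple of bounded integers with a single weighted sum (identity activation, zero bias) and recovers them by shifting and truncating digits. The base, the fixed tuple length, and the explicit floor() standing in for hardware-level truncation are assumptions for illustration, not the authors' exact construction.
```python
# Minimal NumPy sketch of the digit-concatenation idea: one linear "layer"
# packs several integers into a single number, and decoding drops digits.
import numpy as np

B = 1000          # assumed base: each input integer must lie in [0, B)
N = 4             # assumed (fixed) number of integers to encode

def encode(xs):
    """One linear layer: weighted sum of inputs with weights B**i, bias 0."""
    xs = np.asarray(xs, dtype=np.float64)
    weights = B ** np.arange(N, dtype=np.float64)   # [1, B, B^2, B^3]
    return float(xs @ weights)                      # identity activation

def decode(z):
    """Recover each integer by shifting digits right and truncating."""
    out = []
    for i in range(N):
        shifted = z / (B ** i)         # weighted sum with weight B**-i
        truncated = np.floor(shifted)  # stand-in for hardware-level truncation
        out.append(int(truncated % B))
    return out

xs = [7, 42, 0, 999]
z = encode(xs)            # 999000042007.0 -> one numerical variable
assert decode(z) == xs
```
The point of the sketch is only that the "encoding" here is pure digit bookkeeping: nothing is compressed, nothing is discarded, and nothing is learned.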
Related papers
- Koopman Autoencoders Learn Neural Representation Dynamics [6.393645655578601]
We introduce Koopman autoencoders to capture how neural representations evolve through network layers. Our approach learns a surrogate model that predicts how neural representations transform from input to output. As a practical application, we show how our approach enables targeted class unlearning in the Yin-Yang and MNIST classification tasks.
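A generic sketch of the Koopman-style surrogate idea named in this summary, not the paper's trained architecture: with an identity encoder and decoder (a simplifying assumption), predicting how representations advance from one layer to the next reduces to fitting a single linear operator by least squares.
```python
# Fit a linear operator K that advances layer-wise representations H -> H_next.
import numpy as np

rng = np.random.default_rng(0)
n_samples, width = 256, 32

# Stand-in "neural representations" at consecutive layers of some network:
W = rng.normal(size=(width, width)) / np.sqrt(width)
H = rng.normal(size=(n_samples, width))          # representations at layer l
H_next = np.tanh(H @ W)                          # representations at layer l+1

# With an identity encoder, the Koopman surrogate is a least-squares fit of
# a linear map K with H @ K ~= H_next (a DMD-style approximation).
K, *_ = np.linalg.lstsq(H, H_next, rcond=None)

pred = H @ K
print("relative error:", np.linalg.norm(pred - H_next) / np.linalg.norm(H_next))
```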
arXiv Detail & Related papers (2025-05-19T07:35:43Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
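For context on the forward-forward family this work builds on, here is a minimal layer-local sketch: each layer raises a "goodness" score (sum of squared activations) for positive inputs and lowers it for negative ones, using only local quantities. The toy Gaussian data, the logistic goodness objective, and the plain gradient update are assumptions for illustration and do not reflect the paper's spiking, memristor-based CSDP formulation.
```python
# One forward-forward-style layer trained without backpropagation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_in, d_hid, theta, lr = 16, 32, 2.0, 0.05
W = rng.normal(size=(d_in, d_hid)) / np.sqrt(d_in)

def forward(x, W):
    h = np.maximum(x @ W, 0.0)            # ReLU activations of this layer
    return h, np.sum(h ** 2, axis=1)      # "goodness" per sample

for step in range(300):
    x_pos = rng.normal(loc=+0.5, size=(64, d_in))   # toy "positive" data
    x_neg = rng.normal(loc=-0.5, size=(64, d_in))   # toy "negative" data
    for x, label in ((x_pos, 1.0), (x_neg, 0.0)):
        h, g = forward(x, W)
        # Logistic objective: P(positive) = sigmoid(goodness - theta).
        # The gradient uses only this layer's input and activations.
        dg = sigmoid(g - theta) - label                 # dLoss/dGoodness
        grad_W = x.T @ (dg[:, None] * 2.0 * h) / x.shape[0]
        W -= lr * grad_W

_, g_pos = forward(rng.normal(loc=+0.5, size=(64, d_in)), W)
_, g_neg = forward(rng.normal(loc=-0.5, size=(64, d_in)), W)
print("mean goodness  positive:", round(g_pos.mean(), 2),
      " negative:", round(g_neg.mean(), 2))
```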
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
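To make the underlying recovery task concrete, the sketch below reconstructs a sparse vector from underdetermined linear measurements with classical ISTA. It is only a stand-in for the task, not the paper's verified networks or its binarized-measurement setting; the problem sizes and regularization weight are assumptions.
```python
# Recover a k-sparse vector x from y = A @ x with m < n measurements.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                      # signal dim, measurements, sparsity

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true                           # linear measurements

# ISTA: iterative soft-thresholding for the LASSO objective
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    z = x - step * A.T @ (A @ x - y)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```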
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
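A hypothetical sketch of the "network as a computational graph of parameters" encoding: neurons become nodes carrying their biases, and weights become edges carrying the weight values, so architectures of different shapes map to graphs of different sizes. The tiny MLP and the plain edge-list format are illustrative assumptions, not the paper's representation.
```python
# Convert a small MLP's parameters into node/edge features of a graph.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]                      # a tiny MLP: 3 -> 4 -> 2

weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.normal(size=b) for b in layer_sizes[1:]]

def mlp_to_graph(layer_sizes, weights, biases):
    """Return (node_features, edge_index, edge_features) for the MLP."""
    offsets = np.cumsum([0] + layer_sizes)   # one node per neuron
    node_feat = np.zeros(offsets[-1])        # input neurons keep bias 0
    for layer, b in enumerate(biases, start=1):
        node_feat[offsets[layer]:offsets[layer] + len(b)] = b
    edges, edge_feat = [], []
    for layer, W in enumerate(weights):
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                edges.append((offsets[layer] + i, offsets[layer + 1] + j))
                edge_feat.append(W[i, j])
    return node_feat, np.array(edges), np.array(edge_feat)

nodes, edge_index, edge_weights = mlp_to_graph(layer_sizes, weights, biases)
print(nodes.shape, edge_index.shape, edge_weights.shape)   # (9,) (20, 2) (20,)
```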
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Toward Neuromic Computing: Neurons as Autoencoders [0.0]
This paper presents the idea that neural backpropagation uses dendritic processing to enable individual neurons to perform autoencoding.
Using a very simple connection weight search and artificial neural network model, the effects of interleaving autoencoding for each neuron in a hidden layer of a feedforward network are explored.
arXiv Detail & Related papers (2024-03-04T18:58:09Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
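The symmetry in question can be checked directly: permuting a feedforward network's hidden neurons, together with the corresponding rows and columns of its weight matrices, leaves the network's input-output function unchanged, so a model that consumes weights should respect that permutation. The two-layer ReLU MLP below is an illustrative assumption.
```python
# Numerical check that hidden-neuron permutations preserve the function.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 5, 8, 3

W1, b1 = rng.normal(size=(d_in, d_hid)), rng.normal(size=d_hid)
W2, b2 = rng.normal(size=(d_hid, d_out)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

perm = rng.permutation(d_hid)                # reorder the hidden neurons
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

x = rng.normal(size=(10, d_in))
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
print("hidden-neuron permutation leaves the network function unchanged")
```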
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Robust Generalization of Quadratic Neural Networks via Function Identification [19.87036824512198]
Generalization bounds from learning theory often assume that the test distribution is close to the training distribution.
We show that for quadratic neural networks, we can identify the function represented by the model even though we cannot identify its parameters.
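A worked toy example of the function-versus-parameter distinction, under the simplifying assumption of a one-layer quadratic network f(x) = sum_i (w_i . x)^2 (not necessarily the paper's exact model class): rotating the weight vectors changes the parameters but not the function, which is determined by M = W^T W.
```python
# Same function, different parameters: only M = W^T W is identifiable.
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 6

W = rng.normal(size=(k, d))                 # k quadratic units
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))
W_rot = Q @ W                               # different parameters ...

def f(X, W):
    return np.sum((X @ W.T) ** 2, axis=1)   # sum_i (w_i . x)^2 = x^T W^T W x

X = rng.normal(size=(20, d))
assert not np.allclose(W, W_rot)            # parameters differ
assert np.allclose(f(X, W), f(X, W_rot))    # ... but the function is identical
print("M = W^T W identifies the function:", np.allclose(W.T @ W, W_rot.T @ W_rot))
```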
arXiv Detail & Related papers (2021-09-22T18:02:00Z)
- A Sparse Coding Interpretation of Neural Networks and Theoretical Implications [0.0]
Deep convolutional neural networks have achieved unprecedented performance in various computer vision tasks.
We propose a sparse coding interpretation of neural networks that have ReLU activation.
We derive a complete convolutional neural network without normalization and pooling.
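One standard link between ReLU layers and sparse coding, which interpretations of this kind build on, can be checked numerically: relu(D^T x - lam) equals a single step of non-negative ISTA from zero for a LASSO-style objective. The dictionary D, the value of lam, and the unit step size below are illustrative assumptions, not the paper's derivation.
```python
# Check: a ReLU layer's output equals one non-negative ISTA step from zero
# for the sparse-coding objective 0.5*||x - D z||^2 + lam * sum(z), z >= 0.
import numpy as np

rng = np.random.default_rng(0)
d, k, lam = 16, 32, 0.1

D = rng.normal(size=(d, k)) / np.sqrt(d)     # dictionary / layer weights
x = rng.normal(size=d)

# ReLU layer with weights D.T and bias lam:
relu_out = np.maximum(D.T @ x - lam, 0.0)

# One non-negative ISTA step from z = 0 with step size 1:
z0 = np.zeros(k)
grad = D.T @ (D @ z0 - x)                    # gradient of the quadratic term
ista_step = np.maximum(z0 - grad - lam, 0.0) # prox of lam*sum(z) with z >= 0

assert np.allclose(relu_out, ista_step)
print("ReLU layer output equals one non-negative ISTA step from zero")
```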
arXiv Detail & Related papers (2021-08-14T21:54:47Z)