A logical re-conception of neural networks: Hamiltonian bitwise part-whole architecture
- URL: http://arxiv.org/abs/2602.04911v1
- Date: Wed, 04 Feb 2026 01:16:37 GMT
- Title: A logical re-conception of neural networks: Hamiltonian bitwise part-whole architecture
- Authors: E Bowen, R Granger, A Rodriguez
- Abstract summary: We introduce a simple initial working system in which relations are directly represented via an architecture with operating and learning rules. A novel graph-Hamiltonian operator calculates energies among encodings, with ground states denoting simultaneous satisfaction of all relation constraints. The resulting unconventional architecture can process standard ANN examples, but also produces representations that exhibit characteristics of symbolic computation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a simple initial working system in which relations (such as part-whole) are directly represented via an architecture with operating and learning rules fundamentally distinct from standard artificial neural network methods. Arbitrary data are straightforwardly encoded as graphs whose edges correspond to codes from a small fixed primitive set of elemental pairwise relations, such that simple relational encoding is not an add-on, but occurs intrinsically within the most basic components of the system. A novel graph-Hamiltonian operator calculates energies among these encodings, with ground states denoting simultaneous satisfaction of all relation constraints among graph vertices. The method solely uses radically low-precision arithmetic; computational cost is correspondingly low, and scales linearly with the number of edges in the data. The resulting unconventional architecture can process standard ANN examples, but also produces representations that exhibit characteristics of symbolic computation. Specifically, the method identifies simple logical relational structures in these data (part-of; next-to), building hierarchical representations that enable abductive inferential steps generating relational position-based encodings, rather than solely statistical representations. Notably, an equivalent set of ANN operations is derived, identifying a special case of embedded vector encodings that may constitute a useful approach to current work in higher-level semantic representation. The very simple current state of the implemented system invites additional tools and improvements.
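As a concrete illustration of the kind of computation the abstract describes, here is a minimal, hypothetical sketch: vertex codes are low-precision bit vectors, edges carry one of a small fixed set of primitive relations, and the Hamiltonian is a per-edge sum of constraint violations that vanishes at a ground state. The relation definitions, code width, and penalty functions are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def rel_part_of(u, v):
    # "u part-of v": every bit set in u must also be set in v (containment);
    # each violated bit contributes one unit of energy.
    return int(np.sum(u & ~v))

def rel_next_to(u, v):
    # "u next-to v": codes must differ in exactly one bit (adjacency).
    return abs(int(np.sum(u ^ v)) - 1)

RELATIONS = {"part-of": rel_part_of, "next-to": rel_next_to}

def hamiltonian(codes, edges):
    """Total energy: one term per edge, so cost scales linearly with |edges|."""
    return sum(RELATIONS[rel](codes[i], codes[j]) for i, j, rel in edges)

# Toy graph: three vertices with 8-bit codes and relation-labelled edges.
codes = np.unpackbits(np.array([0b0011, 0b0111, 0b0001], dtype=np.uint8)[:, None], axis=1)
edges = [(0, 1, "part-of"), (2, 1, "part-of"), (0, 2, "next-to")]
print("energy:", hamiltonian(codes, edges))  # 0 => all constraints satisfied
```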
Related papers
- Generalized Simplicial Attention Neural Networks [22.171364354867723]
We introduce Generalized Simplicial Attention Neural Networks (GSANs), which process data living on simplicial complexes using masked self-attentional layers.
These schemes learn how to combine data associated with neighbor simplices of consecutive order in a task-oriented fashion.
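A minimal sketch of the masked self-attention ingredient, under assumed details: attention between edge-level features is restricted to simplices that are neighbours, with the mask built from a node-edge incidence matrix. The single-head, single-order form and all shapes are illustrative simplifications.

```python
import numpy as np

def masked_self_attention(X, mask, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = np.where(mask, scores, -1e9)            # attend to neighbours only
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    return (A / A.sum(axis=1, keepdims=True)) @ V

rng = np.random.default_rng(0)
B = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])      # node-edge incidence of a triangle
lower_adj = (B.T @ B) > 0                            # edges sharing a node
X = rng.standard_normal((3, 4))                      # one feature row per edge
Wq, Wk, Wv = [rng.standard_normal((4, 4)) for _ in range(3)]
print(masked_self_attention(X, lower_adj, Wq, Wk, Wv).shape)  # (3, 4)
```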
arXiv Detail & Related papers (2023-09-05T11:29:25Z)
- Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
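For context, a bare MLP-Mixer block of the kind such an adaptation starts from (an illustrative sketch only; layer norms and the paper's spatiotemporal contextualization are omitted): a token-mixing MLP across locations followed by a channel-mixing MLP per location.

```python
import numpy as np

def mlp(x, W1, W2):
    return np.maximum(x @ W1, 0) @ W2                # two-layer MLP with ReLU

def mixer_block(X, tok_W1, tok_W2, ch_W1, ch_W2):
    # X: (tokens, channels), e.g. (sensor locations, feature channels).
    X = X + mlp(X.T, tok_W1, tok_W2).T               # mix across tokens
    return X + mlp(X, ch_W1, ch_W2)                  # mix across channels

rng = np.random.default_rng(0)
T, C, H = 16, 8, 32                                  # tokens, channels, hidden width
out = mixer_block(rng.standard_normal((T, C)),
                  rng.standard_normal((T, H)), rng.standard_normal((H, T)),
                  rng.standard_normal((C, H)), rng.standard_normal((H, C)))
print(out.shape)                                     # (16, 8)
```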
arXiv Detail & Related papers (2023-07-04T05:19:19Z)
- Symbolic Synthesis of Neural Networks [0.0]
I present Graph-based Symbolically Synthesized Neural Networks (G-SSNNs), a form of neural network whose topology and parameters are informed by the output of a symbolic program.
I demonstrate that by developing symbolic abstractions at a population level, I can elicit reliable patterns of improved generalization with small quantities of data known to contain local and discrete features.
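A hypothetical toy version of the idea: a symbolic program emits a set of edges, and those edges fix which input-to-hidden connections exist, so the topology is synthesized rather than learned. The divisibility rule below is an invented stand-in for a real symbolic program.

```python
import numpy as np

def symbolic_program(n_inputs):
    # Invented symbolic rule: connect input i to hidden unit j iff j divides i+1.
    return [(i, j) for i in range(n_inputs) for j in range(1, 4) if (i + 1) % j == 0]

def build_mask(edges, n_inputs, n_hidden):
    mask = np.zeros((n_inputs, n_hidden))
    for i, j in edges:
        mask[i, j - 1] = 1.0                 # hidden units are 1-indexed above
    return mask

rng = np.random.default_rng(0)
n_in, n_hid = 6, 3
mask = build_mask(symbolic_program(n_in), n_in, n_hid)
W = rng.standard_normal((n_in, n_hid)) * mask        # weights only on chosen edges
x = rng.standard_normal(n_in)
print(np.tanh(x @ W))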
arXiv Detail & Related papers (2023-03-06T18:13:14Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
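A sketch of the flavour of that superstructure for the linear-system case (the weights and loop sizes are ad-hoc choices, not trained values): an inner loop builds Krylov-style residual directions r, Ar, ..., and an outer recurrence updates the iterate from a weighted combination of them.

```python
import numpy as np

def r2n2_like_step(A, b, x, weights):
    directions, r = [], b - A @ x
    for _ in weights:
        directions.append(r)
        r = A @ r                            # Krylov directions: r, Ar, A^2 r, ...
    return x + sum(w * d for w, d in zip(weights, directions))

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x = np.zeros(4)
for _ in range(20):                          # outer recurrence
    x = r2n2_like_step(A, b, x, weights=[1.0, -0.1])
print(np.linalg.norm(A @ x - b))             # residual shrinks toward zero
```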
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
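A minimal sketch of the deep-unrolling half of this recipe (the block structure, learned parameters, and neural-network prior are omitted; everything concrete here is an assumption): ISTA for sparse coding, unrolled so that each iteration plays the role of one interpretable layer.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(D, y, n_layers=500, lam=0.01):
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data term
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):                # each pass = one unrolled "layer"
        z = soft_threshold(z + D.T @ (y - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))            # overcomplete dictionary
z_true = np.zeros(40)
z_true[[3, 17]] = 2.0                        # sparse ground truth
y = D @ z_true
print(np.round(unrolled_ista(D, y)[[3, 17]], 2))  # approx. [2. 2.]
```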
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Simplicial Attention Networks [0.0]
We introduce a proper self-attention mechanism able to process data components at different layers.
We learn how to weight both upper and lower neighborhoods of the given topological domain in a totally task-oriented fashion.
The proposed approach compares favorably with other methods when applied to different (inductive and transductive) tasks.
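An illustrative sketch of the upper/lower weighting (an assumed form, with fixed scalars standing in for the learned, task-oriented weights): edge features are aggregated through the lower Laplacian (edges sharing a node) and the upper Laplacian (edges sharing a triangle), each with its own weight.

```python
import numpy as np

# Triangle 0-1-2 plus a dangling edge 2-3 (oriented incidence matrices).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]])            # nodes x edges
B2 = np.array([[1], [-1], [1], [0]])         # edges x triangles

L_low, L_up = B1.T @ B1, B2 @ B2.T           # lower/upper edge Laplacians

def simplicial_layer(X, w_low, w_up, W):
    return np.tanh((w_low * L_low + w_up * L_up) @ X @ W)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))              # one feature row per edge
print(simplicial_layer(X, 0.5, 1.5, rng.standard_normal((3, 3))).shape)  # (4, 3)
```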
arXiv Detail & Related papers (2022-03-14T20:47:31Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
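For background on the homological features involved, a small self-contained computation (standard algebraic topology, not the paper's model): the kernel of the k-th Hodge Laplacian has dimension equal to the k-th Betti number, here the single 1-dimensional hole of a hollow triangle.

```python
import numpy as np

B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])                # nodes x edges, hollow triangle
B2 = np.zeros((3, 0))                        # no filled triangles => open hole

L1 = B1.T @ B1 + B2 @ B2.T                   # Hodge 1-Laplacian
betti_1 = int(np.sum(np.linalg.eigvalsh(L1) < 1e-9))  # dim ker L1
print(betti_1)                               # 1: one 1-dimensional hole
```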
arXiv Detail & Related papers (2021-10-28T14:59:41Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival time probability assumptions can greatly improve model performance.
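A toy sketch of the interarrival-time modelling ingredient only (the graph-convolutional encoder and the paper's exact likelihood-ratio formulation are not reproduced; the exponential assumption below is illustrative): rates matched to the observed interarrival times score a lower negative log-likelihood than mismatched ones.

```python
import numpy as np

def exp_interarrival_nll(rates, taus):
    # -log prod_i rate_i * exp(-rate_i * tau_i), the exponential NLL.
    return float(np.sum(-np.log(rates) + rates * taus))

taus = np.array([0.5, 1.2, 0.3, 0.8])        # observed interarrival times
good = np.full(4, 1.0 / taus.mean())         # rates matched to the data
bad = np.full(4, 10.0)                       # rates far too high
print(exp_interarrival_nll(good, taus), "<", exp_interarrival_nll(bad, taus))
```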
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Principled Simplicial Neural Networks for Trajectory Prediction [17.016397531234393]
We consider the construction of neural network architectures for data on simplicial complexes, identifying three desirable properties: permutation equivariance, orientation equivariance, and simplicial awareness.
Based on these properties, we propose a simple convolutional architecture for the problem of trajectory prediction.
We show that it obeys all three of these properties when an odd, nonlinear activation function is used.
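A small numerical check of why the odd activation matters (the operator here is a random symmetric stand-in, not a real simplicial Laplacian): because tanh is odd, the layer commutes with any re-orientation of the edges, i.e. with conjugation by a diagonal +/-1 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
L1 = rng.standard_normal((4, 4))
L1 = L1 + L1.T                               # stand-in edge operator
W = rng.standard_normal((3, 3))
X = rng.standard_normal((4, 3))              # oriented edge features
D = np.diag([1.0, -1.0, -1.0, 1.0])          # flip orientation of edges 1 and 2

conv = lambda X, L: np.tanh(L @ X @ W)
lhs = conv(D @ X, D @ L1 @ D)                # re-orient data and operator
rhs = D @ conv(X, L1)                        # ... or re-orient the output
print(np.allclose(lhs, rhs))                 # True: orientation equivariant
```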
arXiv Detail & Related papers (2021-02-19T17:37:43Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
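A rough sketch of the first idea with the simplest possible aggregation (a plain sum; the paper instead parametrizes messages and updates with learned permutation-equivariant maps): each node's context starts as its one-hot identity, and each round adds in the neighbours' contexts, so after k rounds a node registers its k-hop neighbourhood.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # path graph 0-1-2-3

U = np.eye(4)                                # row i = node i's one-hot context
for _ in range(2):                           # two message-passing rounds
    U = U + A @ U                            # add in the neighbours' contexts

print(U[0])                                  # node 0 registers nodes 0, 1, 2 only
```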
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
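A toy sketch of that translation for one hidden linear layer, using the paper's fixed-prediction assumption (layer sizes, step size, and iteration count are arbitrary choices): the hidden value node relaxes on the prediction-error energy, and the equilibrium error reproduces the backprop gradient as a purely local update.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((3, 2)), rng.standard_normal((2, 3))
x0 = rng.standard_normal(2)                  # input (clamped)
y = rng.standard_normal(2)                   # target (output clamped)

mu1 = W1 @ x0                                # fixed feedforward prediction
e2 = y - W2 @ mu1                            # output-layer error (held fixed)
x1 = mu1.copy()
for _ in range(300):                         # inference: relax the hidden node
    e1 = x1 - mu1
    x1 = x1 - 0.1 * (e1 - W2.T @ e2)         # descend the energy w.r.t. x1

pc_update = np.outer(e1, x0)                 # local, Hebbian-style W1 update
bp_grad = np.outer(W2.T @ (W2 @ W1 @ x0 - y), x0)   # backprop dL/dW1
print(np.allclose(pc_update, -bp_grad))      # True: PC matches backprop
```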
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.