Asymptotically Stable Quaternion-valued Hopfield-structured Neural Network with Periodic Projection-based Supervised Learning Rules
- URL: http://arxiv.org/abs/2510.16607v1
- Date: Sat, 18 Oct 2025 18:10:07 GMT
- Title: Asymptotically Stable Quaternion-valued Hopfield-structured Neural Network with Periodic Projection-based Supervised Learning Rules
- Authors: Tianwei Wang, Xinhui Ma, Wei Pang
- Abstract summary: We propose a quaternion-valued supervised learning Hopfield-structured neural network (QSHNN) with a fully connected structure inspired by the classic Hopfield neural network (HNN). For the learning rules, we introduce a periodic projection strategy that modifies standard gradient descent by periodically projecting each 4×4 block of the weight matrix onto the closest quaternionic structure in the least-squares sense. Benefiting from this rigorous mathematical foundation, the experimental model implementation achieves high accuracy, fast convergence, and strong reliability across randomly generated target sets.
- Score: 3.869763264003111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the geometric advantages of quaternions in representing rotations and postures, we propose a quaternion-valued supervised learning Hopfield-structured neural network (QSHNN) with a fully connected structure inspired by the classic Hopfield neural network (HNN). Starting from a continuous-time dynamical model of HNNs, we extend the formulation to the quaternionic domain and establish the existence and uniqueness of fixed points with asymptotic stability. For the learning rules, we introduce a periodic projection strategy that modifies standard gradient descent by periodically projecting each 4×4 block of the weight matrix onto the closest quaternionic structure in the least-squares sense. This approach preserves both convergence and quaternionic consistency throughout training. Benefiting from this rigorous mathematical foundation, the experimental model implementation achieves high accuracy, fast convergence, and strong reliability across randomly generated target sets. Moreover, the evolution trajectories of the QSHNN exhibit well-bounded curvature, i.e., sufficient smoothness, which is crucial for applications such as control systems or path planning modules in robotic arms, where joint postures are parameterized by quaternion neurons. Beyond these application scenarios, the proposed model offers a practical implementation framework and a general mathematical methodology for designing neural networks under hypercomplex or non-commutative algebraic structures.
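The projection step described in the abstract has a compact closed form. The real 4×4 matrices representing left multiplication by a quaternion q = a + bi + cj + dk form a four-dimensional linear subspace, so the least-squares (Frobenius-norm) projection of an arbitrary block reduces to averaging its entries against four orthogonal basis matrices. The NumPy sketch below illustrates this under one common sign convention for the representation; the paper's exact convention may differ, and `grad_fn`, the learning rate, and the projection period are hypothetical placeholders rather than the authors' settings.

```python
import numpy as np

def project_block(M):
    """Least-squares (Frobenius) projection of a real 4x4 block onto the
    subspace of matrices representing left multiplication by a quaternion
    q = a + bi + cj + dk (one common sign convention; the paper's may differ)."""
    a = (M[0, 0] + M[1, 1] + M[2, 2] + M[3, 3]) / 4.0
    b = (M[1, 0] - M[0, 1] + M[3, 2] - M[2, 3]) / 4.0
    c = (M[2, 0] - M[0, 2] + M[1, 3] - M[3, 1]) / 4.0
    d = (M[3, 0] - M[0, 3] + M[2, 1] - M[1, 2]) / 4.0
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

def project_weights(W):
    """Project every 4x4 block of a (4n x 4n) real weight matrix."""
    n = W.shape[0] // 4
    P = W.copy()
    for i in range(n):
        for j in range(n):
            P[4*i:4*i+4, 4*j:4*j+4] = project_block(W[4*i:4*i+4, 4*j:4*j+4])
    return P

def train(W, grad_fn, lr=1e-2, steps=1000, period=10):
    """Plain gradient descent with a quaternionic projection applied every
    `period` steps, as the abstract describes. `grad_fn` is a hypothetical
    callable returning dLoss/dW for the current weights."""
    for t in range(1, steps + 1):
        W = W - lr * grad_fn(W)      # ordinary (unconstrained) gradient step
        if t % period == 0:          # periodic projection onto quaternionic blocks
            W = project_weights(W)
    return project_weights(W)        # finish on an exactly quaternionic point
```

Because the projection targets a linear subspace, it is exact and idempotent; projecting only every `period` steps keeps the gradient dynamics cheap between corrections while still returning quaternion-consistent weights, consistent with the abstract's claim that the strategy preserves both convergence and quaternionic structure.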
Related papers
- ManifoldFormer: Geometric Deep Learning for Neural Dynamics on Riemannian Manifolds [11.275535457399625]
Existing EEG foundation models mainly treat neural signals as generic time series in Euclidean space.
ManifoldFormer addresses this limitation through a novel geometric deep learning framework that explicitly learns neural manifold representations.
arXiv Detail & Related papers (2025-11-20T22:19:53Z)
- PointNSP: Autoregressive 3D Point Cloud Generation with Next-Scale Level-of-Detail Prediction [87.33016661440202]
Autoregressive point cloud generation has long lagged behind diffusion-based approaches in quality.
We propose PointNSP, a coarse-to-fine generative framework that preserves global shape structure at low resolutions.
Experiments on ShapeNet show that PointNSP establishes state-of-the-art (SOTA) generation quality for the first time within the autoregressive paradigm.
arXiv Detail & Related papers (2025-10-07T06:31:02Z)
- Quaternion Approximation Networks for Enhanced Image Classification and Oriented Object Detection [2.847742374860449]
Quaternion Approximate Networks (QUAN) is a novel deep learning framework that leverages quaternion algebra for rotation-equivariant image classification and object detection.
QUAN is evaluated on image classification (CIFAR-10/100, ImageNet), object detection (COCO, DOTA), and robotic perception tasks.
These results highlight its potential for deployment in resource-constrained robotic systems requiring rotation-aware perception, as well as applications in other domains.
arXiv Detail & Related papers (2025-09-05T21:41:40Z)
- Geometry-Aware Spiking Graph Neural Network [24.920334588995072]
We propose a Geometry-Aware Spiking Graph Neural Network (GSG) that unifies spike-based neural dynamics with adaptive representation learning.
Experiments on multiple benchmarks show that GSG achieves superior accuracy, robustness, and energy efficiency compared to both Euclidean SNNs and manifold-based GNNs.
arXiv Detail & Related papers (2025-08-09T02:52:38Z)
- Spatiotemporal Graph Learning with Direct Volumetric Information Passing and Feature Enhancement [62.91536661584656]
We propose a dual-module framework, the Cell-embedded and Feature-enhanced Graph Neural Network (CeFeGNN), for spatiotemporal graph learning.
We embed learnable cell attributions into the common node-edge message passing process, which better captures the spatial dependency of regional features.
Experiments on various PDE systems and one real-world dataset demonstrate that CeFeGNN achieves superior performance compared with other baselines.
arXiv Detail & Related papers (2024-09-26T16:22:08Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the computationally efficient MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
arXiv Detail & Related papers (2023-07-04T05:19:19Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- A Neural Network-enhanced Reproducing Kernel Particle Method for Modeling Strain Localization [0.0]
In this work, a neural network-enhanced reproducing kernel particle method (NN-RKPM) is proposed.
The location, orientation, and shape of the solution transition near a localization are automatically captured by the NN approximation.
The effectiveness of the proposed NN-RKPM is verified by a series of numerical examples.
arXiv Detail & Related papers (2022-04-28T23:59:38Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems [15.523425139375226]
We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets).
We conduct continuous, long-term predictions of a complex Hamiltonian dynamic system based on sparse, short-term observations.
We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamic systems, including the pendulum, the Lotka-Volterra, the Kepler, and the Hénon-Heiles systems.
arXiv Detail & Related papers (2020-05-11T10:32:29Z)