Neural Field Turing Machine: A Differentiable Spatial Computer
- URL: http://arxiv.org/abs/2509.03370v1
- Date: Wed, 27 Aug 2025 22:29:15 GMT
- Title: Neural Field Turing Machine: A Differentiable Spatial Computer
- Authors: Akash Malhotra, Nacéra Seghouani
- Abstract summary: We introduce the Neural Field Turing Machine (NFTM), a differentiable architecture that unifies symbolic computation, physical simulation, and perceptual inference within continuous spatial fields. NFTM combines a neural controller, a continuous memory field, and movable read/write heads that perform local updates.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We introduce the Neural Field Turing Machine (NFTM), a differentiable architecture that unifies symbolic computation, physical simulation, and perceptual inference within continuous spatial fields. NFTM combines a neural controller, continuous memory field, and movable read/write heads that perform local updates. At each timestep, the controller reads local patches, computes updates via learned rules, and writes them back while updating head positions. This design achieves linear O(N) scaling through fixed-radius neighborhoods while maintaining Turing completeness under bounded error. We demonstrate three example instantiations of NFTM: cellular automata simulation (Rule 110), physics-informed PDE solvers (2D heat equation), and iterative image refinement (CIFAR-10 inpainting). These instantiations learn local update rules that compose into global dynamics, exhibit stable long-horizon rollouts, and generalize beyond training horizons. NFTM provides a unified computational substrate bridging discrete algorithms and continuous field dynamics within a single differentiable framework.
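The per-timestep loop described in the abstract (read a fixed-radius patch, apply a local update rule, write the result back) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the learned controller is replaced by a hand-set 5-point heat-equation stencil (one of the paper's example instantiations), and the function name `nftm_step` is our own. Because each cell depends only on a fixed-radius neighborhood, one full sweep costs O(N) in the field size, matching the scaling claim in the abstract.

```python
import numpy as np

def nftm_step(field, radius=1, alpha=0.1):
    """One illustrative NFTM-style update: each cell is rewritten from its
    fixed-radius neighborhood, so a full sweep is O(N) in the field size.
    The 'learned rule' is a hand-set explicit heat-equation stencil here
    (valid for radius=1); a real NFTM would use a neural controller."""
    padded = np.pad(field, radius, mode="edge")
    new = np.empty_like(field)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Explicit 5-point heat update: u <- u + alpha * Laplacian(u)
            lap = (patch[0, 1] + patch[2, 1] + patch[1, 0] + patch[1, 2]
                   - 4.0 * patch[1, 1])
            new[i, j] = patch[1, 1] + alpha * lap
    return new

field = np.zeros((8, 8))
field[4, 4] = 1.0          # a single hot spot
rollout = field
for _ in range(10):        # iterating the local rule yields global diffusion
    rollout = nftm_step(rollout)
```

With alpha <= 0.25 the explicit scheme is stable and the hot spot diffuses outward over the rollout, illustrating how repeated local updates compose into global field dynamics.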
Related papers
- Continuous-Time Homeostatic Dynamics for Reentrant Inference Models [0.0]
We formulate the Fast-Weights Homeostatic Reentry Network (FHRN) as a continuous-time neural-ODE system. The dynamics admit bounded attractors governed by an energy functional, yielding a ring-like manifold. Unlike continuous-time recurrent neural networks or liquid neural networks, FHRN achieves stability through population-level gain modulation rather than fixed recurrence or neuron-local time adaptation.
arXiv Detail & Related papers (2025-12-04T07:33:13Z) - Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics [71.53370807809296]
Recent Graph Neural Simulators (GNSs) accelerate simulations by learning dynamics on graph-structured data. We propose Information-preserving Graph Neural Simulators (IGNS), a graph-based neural simulator built on the principles of Hamiltonian dynamics. IGNS consistently outperforms state-of-the-art GNSs, achieving higher accuracy and stability on challenging and complex dynamical systems.
arXiv Detail & Related papers (2025-11-11T12:53:56Z) - Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs): A Feynman-Based Architecture for Continuous Learning Over Streaming Data [0.0]
Real-time continuous learning over streaming data remains a central challenge in deep learning and AI systems. We introduce a novel architecture, Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs), which leverage the Feynman technique of differentiation under the integral sign to formulate neural updates as integrals over historical data.
arXiv Detail & Related papers (2025-06-13T11:00:31Z) - NoiseNCA: Noisy Seed Improves Spatio-Temporal Continuity of Neural Cellular Automata [23.73063532045145]
Neural Cellular Automata (NCA) are a class of cellular automata whose update rule is parameterized by a neural network.
We show that existing NCA models tend to overfit the training discretization.
We propose a solution that utilizes uniform noise as the initial condition.
arXiv Detail & Related papers (2024-04-09T13:02:33Z) - NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory [70.10550467873499]
We propose NeuralClothSim, a new quasistatic cloth simulator using thin shells.
Our memory-efficient solver operates on a new continuous coordinate-based surface representation called neural deformation fields.
arXiv Detail & Related papers (2023-08-24T17:59:54Z) - General Neural Gauge Fields [100.35916421218101]
We develop a learning framework to jointly optimize gauge transformations and neural fields.
We derive an information-invariant gauge transformation that inherently preserves scene information and yields superior performance.
arXiv Detail & Related papers (2023-05-05T12:08:57Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with neuromorphic hardware [105.54048699217668]
We show competitive performance in accuracy, with a clear advantage in computational complexity, for Event-Based Three-factor Local Plasticity (ETLP).
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experimental results show a mean azimuth error of 13 degrees, surpassing the accuracy of other biologically plausible neuromorphic approaches to sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z) - A provably stable neural network Turing Machine [13.615420026818038]
We introduce a neural stack architecture, including a differentiable parametrized stack operator that approximates stack push and pop operations.
Using the neural stack with a recurrent neural network, we introduce a neural network Pushdown Automaton (nnPDA) and prove that nnPDA with finite/bounded neurons and time can simulate any PDA.
We further prove that a differentiable neural network Turing Machine (nnTM) with bounded neurons can simulate a Turing Machine (TM) in real time.
arXiv Detail & Related papers (2020-06-05T19:45:49Z)
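The differentiable stack operator mentioned in the entry above can be illustrated with a generic "soft stack," in which the next stack state is a convex blend of the three hard outcomes (push, pop, no-op), so gradients flow through the action weights. This is a common construction in the differentiable-stack literature and only a sketch of the idea; the function name and exact formulation are ours, not the nnPDA paper's operator.

```python
import numpy as np

def soft_stack_update(stack, value, p_push, p_pop, p_noop):
    """Differentiable stack step: blend the hard push, pop, and no-op
    results with weights (p_push, p_pop, p_noop). With one-hot weights
    this reduces to the corresponding discrete stack operation."""
    pushed = np.concatenate(([value], stack[:-1]))   # shift down, new top
    popped = np.concatenate((stack[1:], [0.0]))      # shift up, pad bottom
    return p_push * pushed + p_pop * popped + p_noop * stack

stack = np.zeros(4)
stack = soft_stack_update(stack, 1.0, 1.0, 0.0, 0.0)  # hard push: top is 1.0
```

With fractional weights (e.g. p_push=0.5, p_noop=0.5) the stack holds a superposition of both outcomes, which is what makes end-to-end gradient training of the controller possible.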
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.