Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs): A Feynman-Based Architecture for Continuous Learning Over Streaming Data
- URL: http://arxiv.org/abs/2506.12111v1
- Date: Fri, 13 Jun 2025 11:00:31 GMT
- Title: Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs): A Feynman-Based Architecture for Continuous Learning Over Streaming Data
- Authors: Oscar Boullosa Dapena
- Abstract summary: Real-time continuous learning over streaming data remains a central challenge in deep learning and AI systems. We introduce a novel architecture, Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs). QIDINNs leverage the Feynman technique of differentiation under the integral sign to formulate neural updates as integrals over historical data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time continuous learning over streaming data remains a central challenge in deep learning and AI systems. Traditional gradient-based models such as backpropagation through time (BPTT) face computational and stability limitations when dealing with temporally unbounded data. In this paper, we introduce a novel architecture, Quantum-Inspired Differentiable Integral Neural Networks (QIDINNs), which leverages the Feynman technique of differentiation under the integral sign to formulate neural updates as integrals over historical data. This reformulation allows for smoother, more stable learning dynamics that are both physically interpretable and computationally tractable. Inspired by Feynman's path integral formalism and compatible with quantum gradient estimation frameworks, QIDINNs open a path toward hybrid classical-quantum neural computation. We demonstrate our model's effectiveness on synthetic and real-world streaming tasks, and we propose directions for quantum extensions and scalable implementations.
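The abstract describes the mechanism but includes no code, so here is a minimal numerical sketch of the core idea: by differentiation under the integral sign (Leibniz's rule), the gradient of a loss integrated over the data history equals the integral of the per-sample gradients, which a streaming learner can approximate by quadrature. Everything below (the linear model, squared loss, trapezoidal rule, time-averaged update, and the name `integral_gradient`) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def integral_gradient(theta, history):
    """Approximate d/dtheta of the loss integrated over the stream history.
    By Leibniz's rule the derivative moves inside the integral, so we
    integrate per-sample gradients of L = 0.5*(theta@x - y)**2 by
    trapezoidal quadrature over the recorded timestamps."""
    times = np.array([s for s, _, _ in history])
    grads = np.stack([(theta @ x - y) * x for _, x, y in history])
    return np.trapz(grads, x=times, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
history = []  # (timestamp, input, target) triples from the stream
for step in range(200):
    t = 0.01 * (step + 1)
    x = rng.normal(size=3)
    history.append((t, x, x @ true_w))
    if len(history) > 1:
        # Dividing by elapsed time turns the integral into a running
        # time-average, keeping the update scale stable as t grows.
        theta -= 0.5 * integral_gradient(theta, history) / t
print(theta)  # approaches true_w
```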
Related papers
- Quantum-Informed Machine Learning for Chaotic Systems [0.8110978727364399]
We introduce a quantum-informed machine learning framework for learning partial differential equations. A quantum circuit Born machine is employed to learn the invariant properties of chaotic dynamical systems. The framework is evaluated on three representative systems: the Kuramoto-Sivashinsky equation, two-dimensional Kolmogorov flow, and turbulent channel flow.
arXiv Detail & Related papers (2025-07-26T08:36:16Z)
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics (a toy discretization of such dynamics is sketched after this entry).
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
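As a toy illustration of the fractional-order membrane dynamics mentioned above (our own Grünwald-Letnikov discretization of D^alpha v = -v + I, not the fspikeDE architecture or its adjoint training; all constants are illustrative):

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed
    via the standard recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_lif(current, alpha=0.8, dt=0.1, v_th=1.0):
    """Leaky integrate-and-fire neuron whose membrane obeys the
    fractional-order ODE D^alpha v = -v + I: the discretized update
    depends on the entire voltage history, not a single time constant."""
    n = len(current)
    w = gl_weights(alpha, n + 1)
    v = np.zeros(n)
    spikes = np.zeros(n)
    for t in range(1, n):
        # GL scheme: dt^-alpha * sum_k w_k v[t-k] = -v[t-1] + I[t]
        memory = np.dot(w[1:t + 1], v[t - 1::-1])
        v[t] = dt**alpha * (-v[t - 1] + current[t]) - memory
        if v[t] >= v_th:
            spikes[t] = 1.0
            v[t] = 0.0  # reset after a spike
    return v, spikes

v, s = fractional_lif(np.full(300, 1.5))
print(int(s.sum()), "spikes")  # long-memory dynamics shape the spike timing
```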
- Physics Informed Constrained Learning of Dynamics from Static Data [8.346864633675414]
A physics-informed neural network (PINN) models the dynamics of a system by integrating the governing physical laws into the architecture of a neural network (a generic PINN training loop is sketched below). Existing PINN frameworks rely on fully observed time-course data, whose acquisition could be prohibitive for many systems. In this study, we developed a new PINN learning paradigm, namely Constrained Learning, that enables the approximation of first-order derivatives or motions using non-time-course or partially observed data.
arXiv Detail & Related papers (2025-04-17T06:06:53Z)
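Since the entry builds on the generic PINN recipe, a minimal sketch of physics-informed training may help: the governing law enters the loss as an autodiff residual alongside a data term. This fits du/dt = -u with u(0) = 1; it is not the paper's Constrained Learning paradigm for static data, and the network size and learning rate are arbitrary choices.

```python
import torch

# Small network u_theta(t) approximating the ODE solution.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for epoch in range(2000):
    t = torch.rand(64, 1, requires_grad=True)   # collocation points in [0, 1]
    u = net(t)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    physics = ((du_dt + u) ** 2).mean()         # residual of du/dt = -u
    u0 = net(torch.zeros(1, 1))
    data = ((u0 - 1.0) ** 2).mean()             # initial-condition term
    loss = physics + data
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())  # close to exp(-1) ~ 0.368
```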
- Exploring quantum localization with machine learning [39.58317527488534]
We introduce an efficient neural network (NN) architecture for classifying wave functions in terms of their localization.
Our approach integrates a versatile quantum phase space parametrization leading to a custom 'quantum' NN, with the pattern recognition capabilities of a modified convolutional model.
arXiv Detail & Related papers (2024-06-01T08:50:26Z)
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data (a minimal sketch of this derivative estimation follows this entry).
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z)
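The Fourier-based derivative estimation this framework relies on fits in a few lines: differentiation becomes multiplication by i*omega in frequency space, and truncating high frequencies suppresses observation noise. The signal and cutoff below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dt = t[1] - t[0]
x = np.sin(3 * t) + 0.01 * rng.normal(size=n)   # noisy observations

omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)     # angular frequencies
X = np.fft.fft(x)
X[np.abs(omega) > 5.0] = 0.0                    # low-pass: drop noisy modes
dx_dt = np.fft.ifft(1j * omega * X).real        # spectral derivative

# The estimated dx/dt can then serve as a regression target for the NODE's
# vector field, with no ODE solver inside the training loop.
print(np.max(np.abs(dx_dt - 3 * np.cos(3 * t))))  # small error, ~1e-2
```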
- Data-Driven Characterization of Latent Dynamics on Quantum Testbeds [0.23408308015481663]
We augment the dynamical equation of quantum systems described by the Lindblad master equation with a parameterized source term (a toy version is sketched below).
We consider a structure-preserving augmentation that learns and distinguishes unitary from dissipative latent dynamics parameterized by a basis of linear operators.
We demonstrate that our interpretable, structure-preserving, and nonlinear models are able to improve the prediction accuracy of the Lindblad master equation.
arXiv Detail & Related papers (2024-01-18T09:28:44Z)
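As a toy version of the augmentation described above (our own sketch; the paper's operator basis and structure-preserving parameterization are not reproduced): a single-qubit Lindblad right-hand side plus a parameterized source term that preserves Hermiticity and, for tr(rho) = 1, the trace.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator

def lindblad_rhs(rho, H, L, gamma):
    """d(rho)/dt = -i[H, rho] + gamma*(L rho L^+ - 0.5*{L^+ L, rho})."""
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return -1j * comm + gamma * diss

def augmented_rhs(rho, theta):
    """Nominal Lindblad dynamics plus a parameterized source term modeling
    latent dynamics the nominal model misses (illustrative form: Hermitian,
    and traceless whenever tr(rho) = 1)."""
    source = theta[0] * (sx @ rho + rho @ sx
                         - 2 * np.trace(sx @ rho) * rho)
    return lindblad_rhs(rho, H=0.5 * sx, L=sm, gamma=0.1) + source

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in the excited state
rho = rho + 0.01 * augmented_rhs(rho, theta=np.array([0.05]))  # Euler step
print(np.trace(rho).real)  # stays 1.0: both terms are trace-preserving here
```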
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the computationally efficient MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
arXiv Detail & Related papers (2023-07-04T05:19:19Z)
- Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach [47.19265172105025]
We propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN).
egoQGNN implements the GNN theoretical framework using the tensor product and unity matrix representation, which greatly reduces the number of model parameters required.
The architecture is based on a novel mapping from real-world data to Hilbert space.
arXiv Detail & Related papers (2022-01-13T16:35:45Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation (the underlying implicit-differentiation trick is sketched after this entry).
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
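The trick named in the title can be sketched without any spiking machinery (a toy 4-dimensional tanh feedback model stands in for the SNN; nothing below is the paper's code): solve for the equilibrium without tracking gradients, then differentiate through the equilibrium condition via the implicit function theorem.

```python
import torch

torch.manual_seed(0)
W = (0.2 * torch.randn(4, 4)).requires_grad_(True)
x = torch.randn(4)

# Forward: solve z* = tanh(W z* + x) by fixed-point iteration, untaped.
with torch.no_grad():
    z_star = torch.zeros(4)
    for _ in range(50):
        z_star = torch.tanh(W @ z_star + x)

# Backward: with J = df/dz at z*, map the upstream gradient v through
# (I - J)^{-T}, then take one ordinary backward step through a single
# application of f instead of backpropagating through all iterations.
J = torch.autograd.functional.jacobian(
    lambda z: torch.tanh(W @ z + x), z_star)
v = torch.ones(4)                                 # toy upstream gradient dL/dz*
u = torch.linalg.solve((torch.eye(4) - J).T, v)
f = torch.tanh(W @ z_star + x)                    # one differentiable step
f.backward(u)                                     # accumulates dL/dW in W.grad
print(W.grad.norm().item())
```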
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d) (a toy version of this nested flow is sketched below).
This nested system of two flows provides stability and effectiveness of training and provably solves the vanishing/exploding gradient problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
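A toy numerical version of the nested flow (our own assumptions: a fixed skew-symmetric generator and Cayley-transform steps, which keep the weights exactly orthogonal — the property behind the stability claim):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, steps = 8, 0.05, 100

B = rng.normal(size=(d, d))
A = B - B.T                  # skew-symmetric generator of the weight flow
W = np.eye(d)                # initial orthogonal weights
x = rng.normal(size=d)       # hidden state of the main flow

I = np.eye(d)
# Cayley transform of a skew-symmetric matrix is orthogonal, so each
# discrete step of the weight flow stays exactly on O(d).
cayley = np.linalg.solve(I + 0.5 * h * A, I - 0.5 * h * A)

for _ in range(steps):
    W = W @ cayley               # weight flow on O(d)
    x = x + h * np.tanh(W @ x)   # main flow dx/dt = tanh(W(t) x)

print(np.linalg.norm(W.T @ W - I))  # ~1e-14: W remains orthogonal
```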