Quantum Hamiltonian Learning using Time-Resolved Measurement Data and its Application to Gene Regulatory Network Inference
- URL: http://arxiv.org/abs/2602.19496v1
- Date: Mon, 23 Feb 2026 04:31:20 GMT
- Title: Quantum Hamiltonian Learning using Time-Resolved Measurement Data and its Application to Gene Regulatory Network Inference
- Authors: Mohammad Aamir Sohail, Ranga R. Sudharshan, S. Sandeep Pradhan, Arvind Rao
- Abstract summary: We present a new Hamiltonian-learning framework based on time-resolved measurement data from a fixed local IC-POVM. We introduce the quantum Hamiltonian-based gene-expression model (QHGM), in which gene interactions are encoded as a parameterized Hamiltonian that governs gene expression over pseudotime.
- Score: 8.614683524257392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new Hamiltonian-learning framework based on time-resolved measurement data from a fixed local IC-POVM and its application to inferring gene regulatory networks. We introduce the quantum Hamiltonian-based gene-expression model (QHGM), in which gene interactions are encoded as a parameterized Hamiltonian that governs gene expression evolution over pseudotime. We derive finite-sample recovery guarantees and establish upper bounds on the number of time and measurement samples required for accurate parameter estimation with high probability, scaling polynomially with system size. To recover the QHGM parameters, we develop a scalable variational learning algorithm based on empirical risk minimization. Our method recovers network structure efficiently on synthetic benchmarks and reveals novel, biologically plausible regulatory connections in Glioblastoma single-cell RNA sequencing data, highlighting its potential in cancer research. This framework opens new directions for applying quantum-like modeling to biological systems beyond the limits of classical inference.
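The abstract's pipeline, a parameterized Hamiltonian whose time evolution is fit to time-resolved measurement statistics by empirical risk minimization, can be illustrated with a deliberately tiny stand-in. The single-qubit Hamiltonian, the Z-basis observable, the noiseless data, and the Nelder-Mead optimizer below are all illustrative assumptions, not the paper's QHGM, its IC-POVM setup, or its variational algorithm:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli-Z

def z_expectations(theta, times):
    """<Z>(t) for |0> evolved under H(theta) = theta[0] X + theta[1] Z."""
    H = theta[0] * X + theta[1] * Z
    psi0 = np.array([1.0, 0.0], dtype=complex)
    vals = []
    for t in times:
        psi = expm(-1j * H * t) @ psi0
        vals.append(np.real(np.conj(psi) @ (Z @ psi)))
    return np.array(vals)

true_theta = np.array([0.7, 0.3])         # ground-truth couplings to recover
times = np.linspace(0.1, 3.0, 15)         # time-resolved probe points
data = z_expectations(true_theta, times)  # idealized, noiseless measurement record

def empirical_risk(theta):
    return np.mean((z_expectations(theta, times) - data) ** 2)

# Recover the couplings by minimizing the empirical risk from a nearby start.
res = minimize(empirical_risk, x0=[0.6, 0.4], method="Nelder-Mead")
```

Note the sign ambiguity: flipping the sign of either coupling leaves every Z expectation unchanged, so only the magnitudes are identifiable from this observable.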
Related papers
- Modeling Adoptive Cell Therapy in Bladder Cancer from Sparse Biological Data using PINNs [0.0]
Physics-informed neural networks (PINNs) are neural networks that embed the laws of dynamical systems into their loss function as constraints. In this work, we present a PINN framework applied to oncology.
arXiv Detail & Related papers (2025-10-15T11:28:18Z) - Quantum Network-Based Prediction of Cancer Driver Genes [0.0]
We introduce a supervised quantum framework that combines mutation scores with network topology via a novel state preparation scheme. QMME computes low-order statistical moments over the mutation scores of a node's immediate and second-order neighbors and encodes this information into quantum states. Simulations on an empirical PPI network demonstrate competitive performance, with a 12.6% recall gain over a classical baseline.
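The classical feature-extraction step this summary describes, low-order moments of mutation scores over first- and second-order graph neighborhoods, can be sketched on a toy graph. The graph, the scores, and the function name below are invented for illustration, and the subsequent quantum-state encoding is omitted:

```python
import numpy as np

# Toy protein-protein interaction graph and per-gene mutation scores (made up).
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
scores = {"A": 0.9, "B": 0.1, "C": 0.4, "D": 0.6}

def neighbor_moments(node, order=1):
    """Mean and variance of mutation scores over the exact k-hop neighborhood."""
    frontier, seen = {node}, {node}
    for _ in range(order):
        frontier = {m for n in frontier for m in graph[n]} - seen
        seen |= frontier
    vals = np.array([scores[n] for n in frontier])
    return vals.mean(), vals.var()

m1, v1 = neighbor_moments("A", order=1)  # immediate neighbors B, C
m2, v2 = neighbor_moments("A", order=2)  # second-order neighbor D only
```

Per node this yields a small fixed-length moment vector, which is the kind of object an amplitude- or angle-encoding circuit could then load into a quantum state.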
arXiv Detail & Related papers (2025-10-14T15:25:30Z) - Tensor Network based Gene Regulatory Network Inference for Single-Cell Transcriptomic Data [0.0]
This study introduces a quantum-inspired framework leveraging tensor networks (TNs) to optimally map expression data. We quantify gene dependencies and establish statistical significance via permutation testing. By merging quantum-physics-inspired techniques with computational biology, our method provides novel insights into gene regulation.
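The permutation-testing step mentioned here is standard and easy to sketch; the dependency score, the sample data, and the function below are illustrative placeholders (a plain Pearson correlation rather than the paper's tensor-network-derived measure):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(x, y, n_perm=2000):
    """Permutation p-value for |Pearson correlation| between two expression profiles."""
    obs = abs(np.corrcoef(x, y)[0, 1])
    null = np.array([abs(np.corrcoef(rng.permutation(x), y)[0, 1])
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= obs)) / (1 + n_perm)  # add-one to avoid p = 0

# Pseudo-expression profiles over 200 cells: one regulated pair, one unrelated gene.
t = rng.normal(size=200)
regulator = t
target = 0.8 * t + 0.2 * rng.normal(size=200)
unrelated = rng.normal(size=200)

p_dep = permutation_pvalue(regulator, target)    # should be small
p_ind = permutation_pvalue(regulator, unrelated) # should not be smaller
```

Shuffling one profile destroys any cell-wise pairing while preserving both marginal distributions, which is why the permuted scores form a valid null for the dependency statistic.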
arXiv Detail & Related papers (2025-09-08T17:11:12Z) - A Quantum Platform for Multiomics Data [0.0]
Quantum computing offers a new paradigm for addressing classically intractable problems. We introduce a hybrid quantum-classical machine learning platform designed to bridge this gap. We propose to demonstrate the platform's utility through quantum-enhanced classification of phenotypic states from molecular variables and prediction of temporal evolution in biological systems.
arXiv Detail & Related papers (2025-06-17T00:33:06Z) - Generating new coordination compounds via multireference simulations, genetic algorithms and machine learning: the case of Co(II) molecular magnets [41.94295877935867]
We propose a computational strategy able to accelerate the discovery of new coordination compounds with desired electronic and magnetic properties. Our approach is based on a combination of high-throughput ab initio methods, genetic algorithms, and machine learning. We showcase the power of this approach by automatically generating new Co(II) mononuclear coordination compounds with record magnetic properties in a fraction of the time required by either experiments or brute-force ab initio approaches.
arXiv Detail & Related papers (2025-04-18T15:33:48Z) - GENERator: A Long-Context Generative Genomic Foundation Model [66.46537421135996]
We present GENERator, a generative genomic foundation model featuring a context length of 98k base pairs (bp) and 1.2B parameters. Trained on an expansive dataset comprising 386B bp of DNA, the GENERator demonstrates state-of-the-art performance across both established and newly proposed benchmarks. It also shows significant promise in sequence optimization, particularly through the prompt-responsive generation of enhancer sequences with specific activity profiles.
arXiv Detail & Related papers (2025-02-11T05:39:49Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Neural network enhanced measurement efficiency for molecular
groundstates [63.36515347329037]
We adapt common neural network models to learn complex groundstate wavefunctions for several molecular qubit Hamiltonians.
We find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables.
arXiv Detail & Related papers (2022-06-30T17:45:05Z) - A simple normative network approximates local non-Hebbian learning in
the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z) - HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
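The HiPPO-LegS update described here has a compact form: coefficients of a scaled Legendre expansion are updated online as c_{k+1} = (I - A/k) c_k + (1/k) B f_k, with fixed matrices A and B from the HiPPO paper. The sketch below uses a simple forward-Euler discretization (the paper also analyzes more stable discretizations) and a small state size chosen for illustration:

```python
import numpy as np

def hippo_legs(N):
    """HiPPO-LegS transition matrices: A lower-triangular, B a vector."""
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt((2 * n + 1) * (2 * k + 1))
            elif n == k:
                A[n, k] = n + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)
    return A, B

def compress(signal, N=4):
    """Online memory update c_{k+1} = (I - A/k) c_k + (1/k) B f_k."""
    A, B = hippo_legs(N)
    c = np.zeros(N)
    for k, f in enumerate(signal, start=1):
        c = c + (B * f - A @ c) / k
    return c

c = compress(np.ones(2000))  # constant signal f(t) = 1
```

The "no timescale prior" property shows up in the 1/k factor: each step rescales the whole history uniformly. For the constant signal, the state settles on the first Legendre coefficient alone, i.e. c approaches (1, 0, 0, 0).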
arXiv Detail & Related papers (2020-08-17T23:39:33Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
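The min-max formulation above can be illustrated in the simplest possible setting: a linear SEM with one confounder and one instrument, where both "players" are scalars rather than neural networks. Everything below (the data-generating process, the quadratic penalty on the adversary, the step size) is an illustrative assumption, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Confounded linear SEM: e1 drives both x and y, so naive regression is biased;
# z is a valid instrument (independent of the confounder).
z = rng.normal(size=n)
e1 = rng.normal(size=n)
x = z + 0.5 * e1
y = 2.0 * x + e1 + 0.3 * rng.normal(size=n)  # true structural effect: 2

# Min-max game:  min_beta  max_w   w * E[z (y - beta x)] - w^2 / 2.
# The adversary w enforces the moment condition E[z (y - beta x)] = 0.
beta, w, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    m = np.mean(z * (y - beta * x))
    w += lr * (m - w)                 # gradient ascent on the adversary
    beta += lr * w * np.mean(z * x)   # gradient descent on the estimator

ols = np.sum(x * y) / np.sum(x * x)   # naive least squares, biased upward
```

The gradient descent-ascent loop spirals into the saddle point where the moment condition holds, recovering the structural coefficient that ordinary least squares misses because of the confounder.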
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.