Tree Tensor Networks Methods for Efficient Calculation of Molecular Vibrational Spectra
- URL: http://arxiv.org/abs/2512.15875v1
- Date: Wed, 17 Dec 2025 19:00:08 GMT
- Title: Tree Tensor Networks Methods for Efficient Calculation of Molecular Vibrational Spectra
- Authors: Shuo Sun, Richard M. Milbradt, Stefan Knecht, Chandan Kumar, Christian B. Mendl
- Abstract summary: We develop and employ general Tree Tensor Networks (TTNs) to compute the vibrational spectra for two model systems. We explore various tree architectures, ranging from the simple linear structure of Matrix Product States (MPS) to trees where only the leaf nodes carry a physical leg.
- Score: 10.741384146354093
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We develop and employ general Tree Tensor Networks (TTNs) to compute the vibrational spectra for two model systems: a set of 64-dimensional coupled oscillators and acetonitrile. We explore various tree architectures, ranging from the simple linear structure of Matrix Product States (MPS), to trees where only the leaf nodes carry a physical leg -- as seen in the underlying ansatz of the Multilayer Multiconfiguration Time-Dependent Hartree (ML-MCTDH) method -- and further to more general trees in which all nodes are allowed to possess a physical leg. In addition, we implement Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) methods and Inverse Iteration methods as eigensolvers. By means of comprehensive benchmarking of runtime and accuracy, we demonstrate that sub-wavenumber accuracy in vibrational spectra is achievable with all TTN structures. MPS and three-legged tree tensor network states (T3NS) have similar runtimes, whereas leaf-only trees require significantly more time. All numerical simulations were performed using PyTreeNet, a Python package designed for flexible tensor network computations.
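The abstract names LOBPCG as one of the eigensolvers used for the vibrational eigenvalue problem. As an illustration only (this is not the paper's PyTreeNet implementation, and all model parameters below are placeholders: three modes, unit frequency, a small bilinear coupling, ten basis functions per mode), the following sketch applies SciPy's `lobpcg` to a small chain of bilinearly coupled harmonic oscillators:

```python
# Minimal sketch: LOBPCG eigensolver on a toy coupled-oscillator Hamiltonian.
# Not the paper's setup -- frequencies, coupling, and basis size are placeholders.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import lobpcg

n_modes, n_basis = 3, 10  # number of modes, harmonic-oscillator basis size per mode

def ho_ops(n):
    """Number operator and dimensionless position operator in the HO basis."""
    num = diags(np.arange(n))
    a = diags(np.sqrt(np.arange(1, n)), 1)  # annihilation operator
    q = (a + a.T) / np.sqrt(2.0)            # q = (a + a^dagger) / sqrt(2)
    return num, q

num, q = ho_ops(n_basis)
eye = identity(n_basis)

def embed(op, site):
    """Lift a single-mode operator to the full tensor-product space."""
    out = op if site == 0 else eye
    for k in range(1, n_modes):
        out = kron(out, op if k == site else eye)
    return out

omega, g = 1.0, 0.05  # mode frequency and nearest-neighbor coupling (placeholders)
H = 0.0 * embed(eye, 0)  # zero operator on the full space
for k in range(n_modes):
    H = H + omega * embed(num + 0.5 * eye, k)       # uncoupled HO terms
for k in range(n_modes - 1):
    H = H + g * embed(q, k) @ embed(q, k + 1)       # bilinear q_k q_{k+1} coupling

rng = np.random.default_rng(0)
X = rng.standard_normal((H.shape[0], 4))            # block of 4 starting vectors
eigvals, eigvecs = lobpcg(H.tocsr(), X, largest=False, tol=1e-6, maxiter=500)
print(np.sort(eigvals))                              # lowest four vibrational levels
```

Transition energies (differences between these eigenvalues) would then give the vibrational spectrum; the paper's contribution is doing the same diagonalization in a compressed TTN representation rather than on the full product space.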
Related papers
- Efficient Application of Tensor Network Operators to Tensor Network States [8.515845526820852]
We introduce a new algorithm that efficiently applies tree tensor network operators to tree tensor network states. We show how to extend methods commonly used in this context to general tree structures.
arXiv Detail & Related papers (2026-01-27T14:26:37Z)
- Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations [48.46721044282335]
Tensor decomposition networks (TDNs) achieve competitive performance with a dramatic speedup in computations. We evaluate TDNs on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots.
arXiv Detail & Related papers (2025-07-01T18:46:27Z)
- Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices [88.33936714942996]
We present a unifying framework that enables searching among all linear operators expressible via an Einstein summation.
We show that differences in the compute-optimal scaling laws are mostly governed by a small number of variables.
We find that the searched models learn a Mixture-of-Experts (MoE) in every single linear layer of the model, including the projections in the attention blocks.
arXiv Detail & Related papers (2024-10-03T00:44:50Z)
- Optimal Tree Tensor Network Operators for Tensor Network Simulations: Applications to Open Quantum Systems [0.0]
Tree tensor network states (TTNS) decompose the system wavefunction to the product of low-rank tensors.
We present an algorithm that automatically constructs the optimal and exact tree tensor network operators (TTNO) for any sum-of-product symbolic quantum operator.
arXiv Detail & Related papers (2024-07-18T02:15:52Z)
- Terminating Differentiable Tree Experts [77.2443883991608]
We propose a neuro-symbolic Differentiable Tree Machine that learns tree operations using a combination of transformers and Representation Products.
We first remove a series of different transformer layers that are used in every step by introducing a mixture of experts.
We additionally propose a new termination algorithm to provide the model the power to choose how many steps to make automatically.
arXiv Detail & Related papers (2024-07-02T08:45:38Z)
- Differentiable Tree Operations Promote Compositional Generalization [106.59434079287661]
The Differentiable Tree Machine (DTM) architecture integrates an interpreter with external memory and an agent that learns to sequentially select tree operations.
DTM achieves 100% accuracy, while existing baselines such as Transformer, Tree Transformer, LSTM, and Tree2Tree LSTM achieve less than 30%.
arXiv Detail & Related papers (2023-06-01T14:46:34Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Dynamic Probabilistic Pruning: A general framework for hardware-constrained pruning at different granularities [80.06422693778141]
We propose a flexible new pruning mechanism that facilitates pruning at different granularities (weights, kernels, filters/feature maps).
We refer to this algorithm as Dynamic Probabilistic Pruning (DPP).
We show that DPP achieves competitive compression rates and classification accuracy when pruning common deep learning models trained on different benchmark datasets for image classification.
arXiv Detail & Related papers (2021-05-26T17:01:52Z)
- Spectral Top-Down Recovery of Latent Tree Models [13.681975313065477]
Spectral Top-Down Recovery (STDR) is a divide-and-conquer approach for inference of large latent tree models.
STDR's partitioning step is non-random. Instead, it is based on the Fiedler vector of a suitable Laplacian matrix related to the observed nodes.
We prove that STDR is statistically consistent, and bound the number of samples required to accurately recover the tree with high probability.
arXiv Detail & Related papers (2021-02-26T02:47:42Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- A Flexible Pipeline for the Optimization of CSG Trees [3.622365857213782]
CSG trees are an intuitive, yet powerful technique for the representation of geometry using a combination of Boolean set-operations and geometric primitives.
We present a systematic comparison of newly developed and existing tree optimization methods and propose a flexible processing pipeline with a focus on tree editability.
arXiv Detail & Related papers (2020-08-09T06:45:10Z)
- The Tree Ensemble Layer: Differentiability meets Conditional Computation [8.40843862024745]
We introduce a new layer for neural networks composed of an ensemble of differentiable decision trees (a.k.a. soft trees).
Differentiable trees demonstrate promising results in the literature, but are typically slow in training and inference as they do not support conditional computation.
We develop specialized forward and backward propagation algorithms that exploit sparsity.
arXiv Detail & Related papers (2020-02-18T18:05:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all of the above) and is not responsible for any consequences of its use.