Inference of maximum parsimony phylogenetic trees with model-based classical and quantum methods
- URL: http://arxiv.org/abs/2508.00468v1
- Date: Fri, 01 Aug 2025 09:43:12 GMT
- Title: Inference of maximum parsimony phylogenetic trees with model-based classical and quantum methods
- Authors: Jiawei Zhang, Yibo Chen, Yang Zhou, Jun-Han Huang
- Abstract summary: We design three optimization models compatible with both classical and quantum solvers. Our method directly searches the complete solution space of all possible tree topologies and ancestral states. Our quantum simulations successfully find the exact optimal solutions for small-scale instances.
- Score: 10.13082983732946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The maximum parsimony phylogenetic tree reconstruction problem is NP-hard, presenting a computational bottleneck for classical computing and motivating the exploration of emerging paradigms like quantum computing. To this end, we design three optimization models compatible with both classical and quantum solvers. Our method directly searches the complete solution space of all possible tree topologies and ancestral states, thereby avoiding the potential biases associated with pre-constructing candidate internal nodes. Among these models, the branch-based model drastically reduces the number of variables and explicit constraints through a specific variable definition, providing a novel modeling approach effective not only for phylogenetic tree building but also for other tree problems. The correctness of this model is validated with a classical solver, which obtains solutions that are generally better than those from heuristics on the GAPDH gene dataset. Moreover, our quantum simulations successfully find the exact optimal solutions for small-scale instances with rapid convergence, highlighting the potential of quantum computing to offer a new avenue for solving these intractable problems in evolutionary biology.
Related papers
- BEACONS: Bounded-Error, Algebraically-Composable Neural Solvers for Partial Differential Equations [0.0]
We show how it is possible to circumvent these limitations by constructing formally-verified neural network solvers for PDEs. We show how it is possible to construct rigorous extrapolatory bounds on the worst-case Linf errors of shallow neural network approximations. The resulting framework, called BEACONS, comprises both an automatic code-proving system for the neural solvers themselves and a bespoke automated theorem-generator for producing machine-checkable certificates of correctness.
arXiv Detail & Related papers (2026-02-16T15:49:19Z) - Enhanced Distributed Variational Quantum Eigensolver for Large-Scale MaxCut Problem [6.14406949430228]
MaxCut is a canonical NP-hard optimization problem in graph theory with broad applications ranging from physics to bioinformatics. Variational quantum algorithms offer promising new approaches that may eventually outperform classical schemes. We propose an enhanced distributed variational quantum eigensolver for large-scale MaxCut problems.
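For intuition, the MaxCut objective this related paper targets can be stated as a search over vertex bipartitions. The sketch below is exhaustive enumeration on a toy graph, not the distributed variational eigensolver the paper proposes:

```python
from itertools import product

# Brute-force MaxCut: enumerate all 2^n bipartitions of the vertices and
# count the edges crossing the cut. Feasible only for tiny graphs, which is
# why variational quantum approaches are of interest for large instances.

def max_cut(n, edges):
    """Return (best_cut_value, assignment) over all 2^n bipartitions."""
    best_val, best_asg = -1, None
    for asg in product((0, 1), repeat=n):
        val = sum(1 for u, v in edges if asg[u] != asg[v])
        if val > best_val:
            best_val, best_asg = val, asg
    return best_val, best_asg

# 4-cycle: alternating sides cuts all four edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(max_cut(4, edges)[0])  # 4
```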
arXiv Detail & Related papers (2025-12-26T15:20:20Z) - Tree-Preconditioned Differentiable Optimization and Axioms as Layers [0.0]
The "Axioms-as-Layers" paradigm embeds the axiomatic structure of Random Utility Models directly into deep neural networks, eliminating the structural overfitting inherent in penalty-based methods.
arXiv Detail & Related papers (2025-12-03T04:47:37Z) - Segmentation-Based Regression for Quantum Neural Networks [0.0]
Recent advances in quantum hardware motivate the development of algorithmic frameworks that integrate quantum sampling with classical inference. This work introduces a segmentation-based regression method tailored to quantum neural networks (QNNs). By casting the regression task as a constrained problem over a structured digit lattice, the method replaces continuous inference with interpretable and tractable updates.
arXiv Detail & Related papers (2025-06-27T20:11:43Z) - Quantum-Classical Hybrid Quantized Neural Network [7.759760132559044]
We present a novel Quadratic Constrained Binary Optimization (QCBO) model for quantized neural network training, enabling the use of arbitrary activation and loss functions. We employ the Quantum Conditional Gradient Descent (QCGD) algorithm, which leverages quantum computing to directly solve the QCBO problem. Experimental results using a coherent Ising machine (CIM) demonstrate 94.95% accuracy on the Fashion-MNIST classification task with only 1.1-bit precision.
arXiv Detail & Related papers (2025-06-23T02:12:36Z) - PhyloGen: Language Model-Enhanced Phylogenetic Inference via Graph Structure Generation [50.80441546742053]
Phylogenetic trees elucidate evolutionary relationships among species. Traditional Markov Chain Monte Carlo methods face slow convergence and computational burdens. We propose PhyloGen, a novel method leveraging a pre-trained genomic language model.
arXiv Detail & Related papers (2024-12-25T08:33:05Z) - Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation [5.8407437499182935]
This article proposes an evolutionary computation method aimed at discovering the PINNs model with higher approximation accuracy and faster convergence rate.
In experiments, the performance of different models searched through Bayesian optimization, random search, and evolution is compared in solving the Klein-Gordon, Burgers, and Lamé equations.
arXiv Detail & Related papers (2024-05-18T07:32:02Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are used in a wide range of machine learning applications.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the nonconvex training objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - ARTree: A Deep Autoregressive Model for Phylogenetic Inference [6.935130578959931]
We propose a deep autoregressive model for phylogenetic inference based on graph neural networks (GNNs).
We demonstrate the effectiveness and efficiency of our method on a benchmark of challenging real data tree topology density estimation and variational phylogenetic inference problems.
arXiv Detail & Related papers (2023-10-14T10:26:03Z) - A Deep Unrolling Model with Hybrid Optimization Structure for Hyperspectral Image Deconvolution [50.13564338607482]
We propose a novel optimization framework for the hyperspectral deconvolution problem, called DeepMix. It consists of three distinct modules: a data consistency module, a module that enforces the effect of the handcrafted regularizers, and a denoising module. This work proposes a context-aware denoising module designed to sustain the advancements achieved by the cooperative efforts of the other modules.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - A Variational Inference Approach to Inverse Problems with Gamma
Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Devolutionary genetic algorithms with application to the minimum
labeling Steiner tree problem [0.0]
This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performances in solving the minimum labeling Steiner tree (MLST) problem.
We define devolutionary algorithms as the process of reaching a feasible solution by devolving a population of super-optimal unfeasible solutions over time.
We show how classical evolutionary concepts, such as crossover, mutation, and fitness, can be adapted to reach an optimal or close-to-optimal solution.
arXiv Detail & Related papers (2020-04-18T13:27:28Z)
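The "devolution" idea summarized above, reaching feasibility from a super-optimal but infeasible starting point, can be sketched for the minimum labeling problem as follows. This is a simplified greedy illustration, not the paper's population-based genetic algorithm; the helper names and the greedy rule are assumptions made for the sketch:

```python
# Sketch of "devolving" toward feasibility for a minimum-labeling problem:
# start from the super-optimal but infeasible empty label set and add labels
# until the labeled edges connect the graph. A greedy stand-in for the
# paper's population-based genetic algorithm; names and rule are illustrative.

def connected(n, edges):
    """Check connectivity of vertices 0..n-1 under the given edge list."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def devolve_labels(n, labeled_edges):
    """labeled_edges: dict label -> list of (u, v) edges with that label."""
    chosen = set()
    while not connected(n, [e for l in chosen for e in labeled_edges[l]]):
        # Greedy proxy: add the unchosen label contributing the most edges.
        best = max(labeled_edges.keys() - chosen,
                   key=lambda l: len(labeled_edges[l]))
        chosen.add(best)
    return chosen

labels = {"a": [(0, 1), (1, 2)], "b": [(0, 2)]}
print(devolve_labels(3, labels))  # {'a'}
```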
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.