Optimal spanning tree reconstruction in symbolic regression
- URL: http://arxiv.org/abs/2406.18612v1
- Date: Tue, 25 Jun 2024 13:22:13 GMT
- Title: Optimal spanning tree reconstruction in symbolic regression
- Authors: Radoslav G. Neychev, Innokentiy A. Shibaev, Vadim V. Strijov
- Abstract summary: A model is a superposition of primitive functions.
The proposed algorithm reconstructs the minimum spanning tree from the weighted colored graph.
This paper presents a novel solution based on the prize-collecting Steiner tree algorithm.
- Score: 2.553456266022125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the problem of regression model generation. A model is a superposition of primitive functions. The model structure is described by a weighted colored graph. Each graph vertex corresponds to some primitive function. An edge assigns a superposition of two functions. The weight of an edge equals the probability of superposition. To generate an optimal model one has to reconstruct its structure from its graph adjacency matrix. The proposed algorithm reconstructs the minimum spanning tree from the weighted colored graph. This paper presents a novel solution based on the prize-collecting Steiner tree algorithm. This algorithm is compared with its alternatives.
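As a rough illustration of the reconstruction step described in the abstract, the sketch below encodes a few hypothetical primitive functions as vertices of a weighted graph, turns superposition probabilities into edge costs via a negative logarithm (an assumption made here for illustration, not a detail taken from the paper), and recovers a spanning structure with NetworkX. The paper's prize-collecting Steiner tree formulation is only approximated: NetworkX ships a plain Steiner tree heuristic, so required primitives are modelled as terminal nodes rather than vertex prizes.

```python
# Minimal sketch, not the paper's implementation: vertices are hypothetical
# primitive functions, edge weights come from made-up superposition
# probabilities, and the model structure is recovered as a spanning tree.
import math

import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical superposition probabilities between primitive functions.
superposition_prob = {
    ("plus", "times"): 0.9,
    ("plus", "sin"): 0.6,
    ("times", "exp"): 0.7,
    ("sin", "exp"): 0.2,
    ("times", "x"): 0.8,
    ("sin", "x"): 0.5,
}

G = nx.Graph()
for (u, v), p in superposition_prob.items():
    # Higher superposition probability -> lower edge cost, so the most
    # probable superpositions survive in the minimum spanning tree.
    G.add_edge(u, v, weight=-math.log(p))

# Minimum spanning tree over the weighted graph (Kruskal by default).
mst = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(mst.edges()))

# Stand-in for the prize-collecting Steiner tree step: primitives that must
# appear in the model are treated as terminals of a plain Steiner tree.
terminals = ["plus", "exp", "x"]
st = steiner_tree(G, terminals, weight="weight")
print(sorted(st.edges()))
```

The negative-log mapping is a common trick: it turns the product of superposition probabilities along a tree into a sum of edge costs, so minimizing the total weight corresponds to maximizing the joint probability of the chosen superpositions.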
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - More on greedy construction heuristics for the MAX-CUT problem [8.148355685823521]
We show that this picture helps to classify the main greedy heuristics for the maximum cut problem.
All versions of the Sahni-Gonzalez (SG) algorithms can be classified into the Prim class.
Various Edge-Contraction (EC) algorithms belong to the Kruskal class.
arXiv Detail & Related papers (2023-12-18T02:52:04Z) - Hierarchical cycle-tree packing model for $K$-core attack problem [0.0]
A hierarchical cycle-tree packing model is introduced here for this challenging optimization problem.
We analyze this model through the replica-symmetric cavity method of statistical physics.
The associated hierarchical cycle-tree guided attack (hCTGA) is able to construct nearly optimal attack solutions for regular random graphs.
arXiv Detail & Related papers (2023-03-02T06:47:33Z) - Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Learning to Learn Graph Topologies [27.782971146122218]
We learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O).
The model is trained in an end-to-end fashion with pairs of node data and graph samples.
Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
arXiv Detail & Related papers (2021-10-19T08:42:38Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transform corresponding to its upper part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Pyramidal Reservoir Graph Neural Network [18.632681846787246]
We propose a deep Graph Neural Network (GNN) model that alternates two types of layers: reservoir computing (RC) layers and graph pooling layers.
We show how graph pooling can reduce the computational complexity of the model.
Our proposed approach to the design of RC-based GNNs offers an advantageous and principled trade-off between accuracy and complexity.
arXiv Detail & Related papers (2021-04-10T08:34:09Z) - SGA: A Robust Algorithm for Partial Recovery of Tree-Structured Graphical Models with Noisy Samples [75.32013242448151]
We consider learning Ising tree models when the observations from the nodes are corrupted by independent but non-identically distributed noise.
Katiyar et al. (2020) showed that although the exact tree structure cannot be recovered, one can recover a partial tree structure.
We propose Symmetrized Geometric Averaging (SGA), a more statistically robust algorithm for partial tree recovery.
arXiv Detail & Related papers (2021-01-22T01:57:35Z) - Non-Parametric Graph Learning for Bayesian Graph Neural Networks [35.88239188555398]
We propose a novel non-parametric graph model for constructing the posterior distribution of graph adjacency matrices.
We demonstrate the advantages of this model in three different problem settings: node classification, link prediction and recommendation.
arXiv Detail & Related papers (2020-06-23T21:10:55Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.