Bounds of the sum of edge lengths in linear arrangements of trees
- URL: http://arxiv.org/abs/2006.14069v3
- Date: Sun, 14 Feb 2021 17:39:41 GMT
- Title: Bounds of the sum of edge lengths in linear arrangements of trees
- Authors: Ramon Ferrer-i-Cancho, Carlos Gómez-Rodríguez and Juan Luis
Esteban
- Abstract summary: In particular, we investigate various problems on the sum of edge lengths in trees of a fixed size.
We establish some foundations for research on optimality scores for spatial networks in one dimension.
- Score: 0.90238471756546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental problem in network science is the normalization of the
topological or physical distance between vertices, which requires understanding
the range of variation of the unnormalized distances. Here we investigate the
limits of the variation of the physical distance in linear arrangements of the
vertices of trees. In particular, we investigate various problems on the sum of
edge lengths in trees of a fixed size: the minimum and the maximum value of the
sum for specific trees, the minimum and the maximum in classes of trees (bistar
trees and caterpillar trees) and finally the minimum and the maximum for any
tree. We establish some foundations for research on optimality scores for
spatial networks in one dimension.
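The central quantity above, the sum of edge lengths of a tree under a linear arrangement, can be computed directly, and for small trees its minimum and maximum can be found by brute force over all arrangements. The sketch below is a hypothetical illustration (the paper derives bounds analytically rather than by enumeration):

```python
from itertools import permutations

def arrangement_cost(edges, arrangement):
    """Sum of edge lengths |pi(u) - pi(v)| under a linear arrangement,
    where pi(v) is the position of vertex v on the integer line."""
    pos = {v: i for i, v in enumerate(arrangement)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def min_max_costs(n, edges):
    """Brute-force minimum and maximum of the sum of edge lengths
    over all n! linear arrangements of vertices 0..n-1."""
    costs = [arrangement_cost(edges, p) for p in permutations(range(n))]
    return min(costs), max(costs)

# Star tree on 4 vertices: center 0 joined to leaves 1, 2, 3.
star = [(0, 1), (0, 2), (0, 3)]
print(min_max_costs(4, star))  # → (4, 6)
```

For the star, the minimum is attained when the center occupies a middle position and the maximum when it sits at an end, matching the brute-force result.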
Related papers
- The Central Spanning Tree Problem [20.14154858576556]
Spanning trees are an important primitive in many data analysis tasks, when a data set needs to be summarized in terms of its "skeleton".
We show empirically that the (branched) central spanning tree is more robust to noise in the data, and as such is better suited to summarize a data set in terms of its skeleton.
arXiv Detail & Related papers (2024-04-09T16:49:42Z) - Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z) - Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
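The indicator-approximation step can be illustrated in one dimension: four ReLU units form two ramps that make a trapezoid approaching the indicator of an interval as the ramp width shrinks. This is a hypothetical one-dimensional sketch, not the paper's construction, which handles indicator functions over compact sets in higher dimensions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def soft_indicator(x, a, b, eps=0.1):
    """ReLU approximation of the indicator of [a, b]: a trapezoid
    rising on [a - eps, a] and falling on [b, b + eps]."""
    ramp_up = relu(x - (a - eps)) - relu(x - a)    # 0 -> eps across [a-eps, a]
    ramp_down = relu(x - b) - relu(x - (b + eps))  # 0 -> eps across [b, b+eps]
    return (ramp_up - ramp_down) / eps

xs = np.array([-1.0, 0.5, 2.0])
print(soft_indicator(xs, 0.0, 1.0))  # → [0. 1. 0.]
```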
arXiv Detail & Related papers (2023-05-25T14:17:15Z) - Learning Ultrametric Trees for Optimal Transport Regression [10.524752369156337]
We find an optimal tree structure for a given discrete metric space.
One of our key ideas is to cast the problem in ultrametric spaces.
arXiv Detail & Related papers (2022-10-21T22:54:42Z) - The Maximum Linear Arrangement Problem for trees under projectivity and
planarity [0.90238471756546]
A linear arrangement is a mapping $\pi$ from the $n$ vertices of a graph $G$ to $n$ distinct consecutive integers.
Here we study two variants of the Maximum Linear Arrangement problem (MaxLA), which consists of finding an arrangement that maximizes the cost.
In this paper we present algorithms that are linear in time and space to solve planar and projective MaxLA for trees.
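The planarity constraint can be made concrete: an arrangement is planar when no two edges cross as arcs drawn above the vertex line. The brute-force sketch below (with hypothetical helper names) finds planar MaxLA for small trees; the paper's algorithms are linear-time, not enumerative:

```python
from itertools import permutations

def cost(edges, arrangement):
    """Sum of edge lengths |pi(u) - pi(v)|: the cost that MaxLA maximizes."""
    pos = {v: i for i, v in enumerate(arrangement)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def crosses(s, t):
    # Two edges cross iff their position spans strictly interleave.
    (a, b), (c, d) = s, t
    return a < c < b < d or c < a < d < b

def planar_maxla(n, edges):
    """Brute-force planar MaxLA: best cost over crossing-free arrangements."""
    best = -1
    for p in permutations(range(n)):
        pos = {v: i for i, v in enumerate(p)}
        spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
        if all(not crosses(spans[i], spans[j])
               for i in range(len(spans)) for j in range(i + 1, len(spans))):
            best = max(best, cost(edges, p))
    return best

# Star on 4 vertices: all edges share the center, so no arrangement has a
# crossing and planar MaxLA equals unrestricted MaxLA.
print(planar_maxla(4, [(0, 1), (0, 2), (0, 3)]))  # → 6
```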
arXiv Detail & Related papers (2022-06-14T15:43:44Z) - Lassoed Tree Boosting [53.56229983630983]
We prove that a gradient boosted tree algorithm with early stopping achieves faster than $n^{-1/4}$ $L^2$ convergence in the large nonparametric space of càdlàg functions of bounded sectional variation.
Our convergence proofs are based on a novel, general theorem on early stopping with empirical loss minimizers of nested Donsker classes.
arXiv Detail & Related papers (2022-05-22T00:34:41Z) - Spectral Top-Down Recovery of Latent Tree Models [13.681975313065477]
Spectral Top-Down Recovery (STDR) is a divide-and-conquer approach for inference of large latent tree models.
STDR's partitioning step is non-random. Instead, it is based on the Fiedler vector of a suitable Laplacian matrix related to the observed nodes.
We prove that STDR is statistically consistent, and bound the number of samples required to accurately recover the tree with high probability.
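Fiedler-vector partitioning itself is simple to sketch: split the nodes by the sign of the eigenvector of the graph Laplacian with the second-smallest eigenvalue. This is a generic illustration on a hypothetical graph, not STDR's specific Laplacian construction:

```python
import numpy as np

def fiedler_partition(adj):
    """Split nodes by the sign of the Fiedler vector: the eigenvector of
    the graph Laplacian L = D - A with the second-smallest eigenvalue."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return eigvecs[:, 1] >= 0         # Fiedler vector is column 1

# Two triangles (nodes 0-2 and 3-5) joined by the single bridge edge (2, 3).
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
print(fiedler_partition(adj))  # separates the two triangles
```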
arXiv Detail & Related papers (2021-02-26T02:47:42Z) - Growing Deep Forests Efficiently with Soft Routing and Learned
Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1],[3].
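Soft routing can be sketched for a depth-2 tree: each internal node sends an input left with probability sigmoid(w·x + b), and each leaf receives the product of the routing probabilities along its path, so the leaf distribution always sums to one. The weights below are hypothetical, not the paper's trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_route(x, weights, biases):
    """Leaf distribution of a depth-2 probabilistic tree under soft routing."""
    p_root = sigmoid(x @ weights[0] + biases[0])   # P(go left at root)
    p_left = sigmoid(x @ weights[1] + biases[1])   # P(go left at left child)
    p_right = sigmoid(x @ weights[2] + biases[2])  # P(go left at right child)
    return np.array([p_root * p_left,
                     p_root * (1 - p_left),
                     (1 - p_root) * p_right,
                     (1 - p_root) * (1 - p_right)])

x = np.array([0.5, -1.0])
w = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
b = [0.0, 0.5, -0.5]
print(soft_route(x, w, b).sum())  # sums to 1 up to floating point
```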
arXiv Detail & Related papers (2020-12-29T18:05:05Z) - Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees by an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
arXiv Detail & Related papers (2020-10-21T19:38:57Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.