Measure Inducing Classification and Regression Trees for Functional Data
- URL: http://arxiv.org/abs/2011.00046v1
- Date: Fri, 30 Oct 2020 18:49:53 GMT
- Title: Measure Inducing Classification and Regression Trees for Functional Data
- Authors: Edoardo Belli, Simone Vantini
- Abstract summary: We propose a tree-based algorithm for classification and regression problems in the context of functional data analysis.
This is achieved by learning a weighted functional $L^{2}$ space by means of constrained convex optimization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a tree-based algorithm for classification and regression problems
in the context of functional data analysis, which allows us to leverage
representation learning and multiple splitting rules at the node level,
reducing generalization error while retaining the interpretability of a tree.
This is achieved by learning a weighted functional $L^{2}$ space by means of
constrained convex optimization, which is then used to extract multiple
weighted integral features from the input functions, in order to determine the
binary split for each internal node of the tree. The approach is designed to
manage multiple functional inputs and/or outputs, by defining suitable
splitting rules and loss functions that can depend on the specific problem and
can also be combined with scalar and categorical data, as the tree is grown
with the original greedy CART algorithm. We focus on the case of scalar-valued
functional inputs defined on unidimensional domains and illustrate the
effectiveness of our method in both classification and regression tasks,
through a simulation study and four real-world applications.
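The core idea in the abstract, extracting scalar weighted integral features $s_i = \int w(t)\,x_i(t)\,dt$ from functional inputs and using them to drive a greedy CART-style binary split, can be sketched as follows. This is not the authors' implementation: it assumes curves already discretized on a common grid, takes the weight function as given (in the paper it is learned via constrained convex optimization), and uses a plain Gini-impurity threshold search; all function names are illustrative.

```python
import numpy as np

def weighted_integral_feature(curves, weights, grid):
    """Compute s_i = integral of w(t) * x_i(t) dt for each curve,
    using trapezoidal quadrature on the shared sampling grid.

    curves:  (n, T) array, each row one function sampled on `grid`
    weights: (T,) nonnegative weight function sampled on `grid`
    grid:    (T,) increasing sampling points of the domain
    """
    vals = curves * weights  # integrand w(t) x_i(t) at the grid points
    return 0.5 * ((vals[:, :-1] + vals[:, 1:]) * np.diff(grid)).sum(axis=1)

def best_threshold_split(scores, labels):
    """Greedy CART-style binary split on a scalar feature: scan the
    midpoints between consecutive sorted scores and return the threshold
    minimizing the size-weighted Gini impurity of the two children."""
    order = np.argsort(scores)
    s, y = scores[order], labels[order]
    classes = np.unique(y)

    def gini(group):
        return 1.0 - sum(np.mean(group == c) ** 2 for c in classes)

    best_cost, best_thr = np.inf, None
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            continue  # no valid threshold between equal scores
        thr = 0.5 * (s[i] + s[i - 1])
        left, right = y[:i], y[i:]
        cost = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if cost < best_cost:
            best_cost, best_thr = cost, thr
    return best_thr
```

For example, two groups of curves that differ only in amplitude yield well-separated integral scores, and the returned threshold splits them cleanly; in the actual method this split is applied recursively at each internal node of the tree.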
Related papers
- Statistical Advantages of Oblique Randomized Decision Trees and Forests [0.0]
Generalization error and convergence rates are obtained for the flexible dimension reduction model class of ridge functions.
A lower bound on the risk of axis-aligned Mondrian trees is obtained proving that these estimators are suboptimal for these linear dimension reduction models.
arXiv Detail & Related papers (2024-07-02T17:35:22Z) - Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z) - Nonlinear Feature Aggregation: Two Algorithms driven by Theory [45.3190496371625]
Real-world machine learning applications are characterized by a huge number of features, leading to computational and memory issues.
We propose a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear transformations of features with a generic aggregation function.
We also test the algorithms on synthetic and real-world datasets, performing regression and classification tasks, showing competitive performances.
arXiv Detail & Related papers (2023-06-19T19:57:33Z) - On Single-Objective Sub-Graph-Based Mutation for Solving the Bi-Objective Minimum Spanning Tree Problem [0.0]
We contribute to the efficient approximation of the $\mathcal{NP}$-hard multi-objective minimum spanning tree problem (moMST) adopting evolutionary computation.
We design several highly biased sub-graph-based mutation operators founded on the gained insights.
Our results confirm that the sub-graph based operators beat baseline algorithms from the literature.
arXiv Detail & Related papers (2023-05-31T22:35:17Z) - Unboxing Tree Ensembles for interpretability: a hierarchical visualization tool and a multivariate optimal re-built tree [0.34530027457862006]
We develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior.
The proposed model is effective in yielding a shallow interpretable tree approximating the tree-ensemble decision function.
arXiv Detail & Related papers (2023-02-15T10:43:31Z) - A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z) - Generalization Properties of Decision Trees on Real-valued and Categorical Features [2.370481325034443]
We revisit binary decision trees from the perspective of partitions of the data.
We consider three types of features: real-valued, categorical ordinal, and categorical nominal, with different split rules for each.
arXiv Detail & Related papers (2022-10-18T21:50:24Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present a reinforcement learning solver for GM, i.e. RGM, that seeks the node correspondence between pairwise graphs.
Our method differs from previous deep graph matching models in the sense that they focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z) - Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.