Classification of Time-Series Data Using Boosted Decision Trees
- URL: http://arxiv.org/abs/2110.00581v1
- Date: Fri, 1 Oct 2021 15:28:26 GMT
- Title: Classification of Time-Series Data Using Boosted Decision Trees
- Authors: Erfan Aasi, Cristian Ioan Vasile, Mahroo Bahreinian, Calin Belta
- Abstract summary: Time-series data classification is central to the analysis and control of autonomous systems, such as robots and self-driving cars.
Current frameworks are either inaccurate for real-world applications, such as autonomous driving, or they generate long and complicated formulae that lack interpretability.
We introduce a novel learning method, called Boosted Concise Decision Trees (BCDTs), to generate binary classifiers that are represented as Signal Temporal Logic (STL) formulae.
- Score: 3.4606842570088094
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time-series data classification is central to the analysis and control of
autonomous systems, such as robots and self-driving cars. Temporal logic-based
learning algorithms have been proposed recently as classifiers of such data.
However, current frameworks are either inaccurate for real-world applications,
such as autonomous driving, or they generate long and complicated formulae that
lack interpretability. To address these limitations, we introduce a novel
learning method, called Boosted Concise Decision Trees (BCDTs), to generate
binary classifiers that are represented as Signal Temporal Logic (STL)
formulae. Our algorithm leverages an ensemble of Concise Decision Trees (CDTs)
to improve the classification performance, where each CDT is a decision tree
that is empowered by a set of techniques to generate simpler formulae and
improve interpretability. The effectiveness and classification performance of
our algorithm are evaluated on naval surveillance and urban-driving case
studies.
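As a rough illustration of the boosting idea described in the abstract (not the paper's actual BCDT construction, which learns STL formulae such as "eventually within the first 5 time units, the speed exceeds 2 m/s"), the sketch below trains an AdaBoost-style ensemble of shallow decision trees on simple hand-crafted temporal features. All function names, the feature choices, and the window length are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch, assuming an AdaBoost-style ensemble of shallow trees over
# crude temporal features; this is NOT the authors' BCDT/STL algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def temporal_features(signals, window=10):
    """Map each signal (T x d array) to 'eventually'/'always'-like statistics:
    per-dimension max and min over an initial window (window is an assumption)."""
    feats = []
    for x in signals:
        w = x[:window]
        feats.append(np.concatenate([w.max(axis=0), w.min(axis=0)]))
    return np.array(feats)

def fit_boosted_trees(signals, labels, n_rounds=20, max_depth=2):
    """Train an AdaBoost ensemble of depth-limited trees; shallow trees keep
    each weak learner short, in the spirit of 'concise' decision trees."""
    X = temporal_features(signals)
    y = np.where(np.asarray(labels) > 0, 1, -1)   # binary labels in {-1, +1}
    w = np.full(len(y), 1.0 / len(y))             # uniform sample weights
    trees, alphas = [], []
    for _ in range(n_rounds):
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(X, y, sample_weight=w)
        pred = tree.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak learner
        w = w * np.exp(-alpha * y * pred)         # up-weight misclassified samples
        w /= w.sum()
        trees.append(tree)
        alphas.append(alpha)
    return trees, alphas

def predict(trees, alphas, signals):
    """Classify signals by the sign of the weighted vote of all trees."""
    X = temporal_features(signals)
    score = sum(a * t.predict(X) for t, a in zip(trees, alphas))
    return np.sign(score)
```

In this sketch the shallow trees play the role of the weak, computationally inexpensive learners: each stays small enough to be read off as a short rule, and boosting combines many such rules into a stronger binary classifier.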
Related papers
- Learning Optimal Signal Temporal Logic Decision Trees for Classification: A Max-Flow MILP Formulation [5.924780594614676]
This paper presents a novel framework for inferring timed temporal logic properties from data.
We formulate the inference process as a mixed integer linear programming optimization problem.
Applying a max-flow algorithm on the resultant tree transforms the problem into a global optimization challenge.
We conduct three case studies involving two-class, multi-class, and complex formula classification scenarios.
arXiv Detail & Related papers (2024-07-30T16:56:21Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method reduces inference cost while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block and therefore does not require generating additional negative samples.
In our framework, each block can be trained independently, so it can easily be deployed on parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Learning Signal Temporal Logic through Neural Network for Interpretable Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z)
- Optimal randomized classification trees [0.0]
Classification and Regression Trees (CARTs) are off-the-shelf techniques in modern Statistics and Machine Learning.
CARTs are built by means of a greedy procedure, sequentially deciding the splitting predictor variable(s) and the associated threshold.
This greedy approach trains trees very fast, but, by its nature, their classification accuracy may not be competitive against other state-of-the-art procedures.
arXiv Detail & Related papers (2021-10-19T11:41:12Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), the quality of an algorithm typically refers to its runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Inferring Temporal Logic Properties from Data using Boosted Decision Trees [3.4606842570088094]
This paper is a first step towards interpretable learning-based robot control.
We introduce a novel learning problem, called incremental formula and predictor learning.
We propose a boosted decision-tree algorithm that leverages weak, but computationally inexpensive, learners to increase prediction performance.
arXiv Detail & Related papers (2021-05-24T19:29:02Z)
- An Extensive Experimental Evaluation of Automated Machine Learning Methods for Recommending Classification Algorithms (Extended Version) [4.400989370979334]
Three of the four evaluated AutoML methods are based on Evolutionary Algorithms (EAs), and the other is Auto-WEKA, a well-known AutoML method.
We performed controlled experiments in which all four AutoML methods were given the same runtime limit, repeated for several values of that limit.
In general, the difference in predictive accuracy of the three best AutoML methods was not statistically significant.
arXiv Detail & Related papers (2020-09-16T02:36:43Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as the basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)