Early Classification of Time Series. Cost-based Optimization Criterion
and Algorithms
- URL: http://arxiv.org/abs/2005.09945v2
- Date: Wed, 24 Mar 2021 13:29:35 GMT
- Title: Early Classification of Time Series. Cost-based Optimization Criterion
and Algorithms
- Authors: Youssef Achenchabe, Alexis Bondu, Antoine Cornuéjols and Asma Dachraoui
- Abstract summary: In this paper, we put forward a new optimization criterion which takes into account both the cost of misclassification and the cost of delaying the decision.
We derived a family of non-myopic algorithms which try to anticipate the expected future gain in information in balance with the cost of waiting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An increasing number of applications require recognizing the class of an
incoming time series as quickly as possible without unduly compromising the
accuracy of the prediction. In this paper, we put forward a new optimization
criterion which takes into account both the cost of misclassification and the
cost of delaying the decision. Based on this optimization criterion, we derived
a family of non-myopic algorithms which try to anticipate the expected future
gain in information in balance with the cost of waiting. In one class of
algorithms, unsupervised-based, the expectations rely on a clustering of the time
series, while in a second class, supervised-based, time series are grouped
according to the confidence level of the classifier used to label them.
Extensive experiments carried out on real data sets using a large range of
delay cost functions show that the presented algorithms are able to
satisfactorily solve the earliness vs. accuracy trade-off, with the
supervised-based approaches faring better than the unsupervised-based ones. In
addition, all these methods perform better in a wide variety of conditions than
a state-of-the-art method based on a myopic strategy that is recognized as
very competitive.
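To make the criterion concrete, here is a minimal Python sketch (not the authors' implementation) of a cost-based, non-myopic stopping rule: at each time step it anticipates the cost of deciding at every future step, where cost is an estimated misclassification risk plus a delay penalty, and it triggers the decision as soon as waiting is no longer expected to pay off. The delay cost `lambda_delay` and the anticipated error estimates are illustrative placeholders; in the paper they come from the delay cost function and from groups of training series (clusters or confidence-based groups).
```python
import numpy as np

def expected_cost(p_error: float, t: int, lambda_delay: float = 0.01) -> float:
    """Cost of deciding at time t: misclassification risk plus a linear delay cost.

    p_error is an estimate of P(error | decide with the first t measurements);
    lambda_delay is the (assumed) cost of waiting one more time step.
    """
    return p_error + lambda_delay * t

def decide_when(p_error_by_t: np.ndarray, lambda_delay: float = 0.01) -> int:
    """Non-myopic rule (sketch): at each step, anticipate the expected cost of
    every future decision time and stop as soon as no future time looks cheaper.

    p_error_by_t[t] plays the role of the *anticipated* error probability when
    deciding after t+1 measurements.
    """
    horizon = len(p_error_by_t)
    for t in range(horizon):
        costs = [expected_cost(p_error_by_t[u], u, lambda_delay) for u in range(t, horizon)]
        if int(np.argmin(costs)) == 0:      # deciding now is the cheapest anticipated option
            return t
    return horizon - 1                       # otherwise wait until the full series is seen

# Toy usage: error estimates that improve as more of the series is observed.
anticipated_error = np.array([0.40, 0.25, 0.15, 0.12, 0.11, 0.11])
print(decide_when(anticipated_error, lambda_delay=0.02))
```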
Related papers
- Utilizing Data Fingerprints for Privacy-Preserving Algorithm Selection in Time Series Classification: Performance and Uncertainty Estimation on Unseen Datasets [4.2193475197905705]
We introduce a novel data fingerprint that describes any time series classification dataset in a privacy-preserving manner.
The multi-target regression problem is decomposed so that only our data fingerprints are needed to estimate algorithm performance and uncertainty.
Our approach is evaluated on the 112 University of California, Riverside (UCR) benchmark datasets.
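As a rough illustration of the fingerprint idea, and under the simplifying assumption that a fingerprint is just a vector of aggregate statistics and that performance estimation reduces to a single-target least-squares fit (the paper handles a decomposed multi-target regression with uncertainty estimates), a minimal sketch could look like this; all names and statistics are hypothetical:
```python
import numpy as np

def fingerprint(X: np.ndarray) -> np.ndarray:
    """Privacy-preserving descriptor (sketch): only aggregate statistics of the
    dataset leave the owner's side, never the raw series themselves."""
    return np.array([
        X.shape[0],                            # number of series
        X.shape[1],                            # series length
        X.mean(), X.std(),                     # global location / spread
        np.mean(np.abs(np.diff(X, axis=1))),   # average roughness
    ])

# Meta-learning step (illustrative): map fingerprints of known datasets to the
# observed accuracy of one candidate classifier, then predict for a new dataset.
rng = np.random.default_rng(0)
known_sets = [rng.normal(size=(50, 100)) for _ in range(20)]
F = np.stack([fingerprint(X) for X in known_sets])
acc = rng.uniform(0.6, 0.9, size=20)           # stand-in for measured accuracies

w, *_ = np.linalg.lstsq(np.c_[F, np.ones(len(F))], acc, rcond=None)
new_fp = fingerprint(rng.normal(size=(80, 100)))
print("predicted accuracy:", float(np.r_[new_fp, 1.0] @ w))
```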
arXiv Detail & Related papers (2024-09-13T08:43:42Z)
- Loss Shaping Constraints for Long-Term Time Series Forecasting [79.3533114027664]
We present a Constrained Learning approach for long-term time series forecasting that respects a user-defined upper bound on the loss at each time-step.
We propose a practical Primal-Dual algorithm to tackle it and demonstrate that it exhibits competitive average performance on time series benchmarks while shaping the errors across the predicted window.
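A minimal sketch of the constrained-learning idea, assuming a linear forecaster, squared error, and hand-picked step sizes (none of which come from the paper): dual variables accumulate on time steps whose loss exceeds the user-defined bound, reweighting those steps in the training objective.
```python
import numpy as np

def primal_dual_forecast(X, Y, bound=0.5, lr=0.05, dual_lr=0.1, epochs=200):
    """Constrained-learning sketch: keep the average squared error of *each*
    forecast step below `bound`.  Dual variables lam[t] grow whenever step t
    violates the bound, which reweights that step in the primal loss.
    X: (n, p) inputs, Y: (n, H) targets over an H-step window."""
    n, p = X.shape
    H = Y.shape[1]
    W = np.zeros((p, H))
    lam = np.zeros(H)
    for _ in range(epochs):
        err = X @ W - Y                      # (n, H) residuals
        step_mse = (err ** 2).mean(axis=0)   # per-step average loss
        weights = 1.0 + lam                  # Lagrangian weight per step
        grad = X.T @ (err * weights) * (2.0 / n)
        W -= lr * grad                       # primal descent
        lam = np.maximum(0.0, lam + dual_lr * (step_mse - bound))  # dual ascent
    return W, lam

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Y = X @ rng.normal(size=(5, 10)) + 0.3 * rng.normal(size=(200, 10))
W, lam = primal_dual_forecast(X, Y)
print("per-step MSE:", np.round(((X @ W - Y) ** 2).mean(axis=0), 3))
```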
arXiv Detail & Related papers (2024-02-14T18:20:44Z)
- Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
A second strategy leverages a novel Re-Ranking technique, which has a lower upper-bound time complexity and reduces the memory complexity from O(n^2) to O(kn) with k << n.
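The memory claim can be illustrated on its own, independently of the re-identification pipeline: instead of materializing the full n x n distance matrix, only the k nearest neighbours of each sample are stored, which is the O(kn) footprint mentioned above. The chunked computation below is a generic sketch, not the paper's re-ranking method.
```python
import numpy as np

def topk_neighbors(feats: np.ndarray, k: int = 10, chunk: int = 256):
    """Store only the k nearest neighbours of each row: O(k*n) memory instead
    of the full O(n^2) pairwise matrix (distances are computed chunk by chunk)."""
    n = feats.shape[0]
    idx = np.empty((n, k), dtype=np.int64)
    dist = np.empty((n, k), dtype=np.float64)
    sq = (feats ** 2).sum(axis=1)
    for start in range(0, n, chunk):
        block = feats[start:start + chunk]
        # squared Euclidean distances of this chunk against all samples
        d = sq[start:start + chunk, None] + sq[None, :] - 2.0 * block @ feats.T
        nearest = np.argsort(d, axis=1)[:, :k]
        idx[start:start + chunk] = nearest
        dist[start:start + chunk] = np.take_along_axis(d, nearest, axis=1)
    return idx, dist

feats = np.random.default_rng(2).normal(size=(1000, 64))
idx, dist = topk_neighbors(feats, k=10)
print(idx.shape, dist.shape)   # (1000, 10) each -> only k*n entries kept
```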
arXiv Detail & Related papers (2023-07-26T16:19:19Z)
- Budgeted Classification with Rejection: An Evolutionary Method with Multiple Objectives [0.0]
Budgeted, sequential classifiers (BSCs) process inputs through a sequence of partial feature acquisition and evaluation steps.
This allows for an efficient evaluation of inputs that prevents unneeded feature acquisition.
We propose a problem-specific genetic algorithm to build budgeted, sequential classifiers with confidence-based reject options.
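A small sketch of what a budgeted, sequential classifier with a confidence-based reject option can look like; the fixed thresholds, stage costs, and toy per-stage classifiers below are illustrative stand-ins for the quantities that the paper's genetic algorithm would tune.
```python
import numpy as np

def budgeted_sequential_classify(x, stages, accept=0.9, reject=0.6):
    """Sequential classification sketch: `stages` is a list of
    (feature_indices, cost, prob_fn) triples; features are acquired stage by
    stage and evaluation stops early once the classifier is confident enough.
    Returns (predicted_class_or_None, budget_spent)."""
    acquired, spent = {}, 0.0
    for feat_idx, cost, prob_fn in stages:
        for i in feat_idx:                       # pay only for the features of this stage
            acquired[i] = x[i]
        spent += cost
        probs = prob_fn(acquired)                # class posterior from this stage
        if probs.max() >= accept:
            return int(np.argmax(probs)), spent  # confident: stop early
    if probs.max() < reject:
        return None, spent                       # still indecisive at the end: reject
    return int(np.argmax(probs)), spent

# Toy stages: each "classifier" just squashes the mean of the acquired features.
def make_stage_prob(sharpness):
    def prob_fn(acquired):
        m = np.mean(list(acquired.values()))
        p1 = 1.0 / (1.0 + np.exp(-sharpness * m))
        return np.array([1.0 - p1, p1])
    return prob_fn

x = np.array([0.2, 1.5, -0.3, 2.0])
stages = [([0, 1], 1.0, make_stage_prob(2.0)), ([2, 3], 3.0, make_stage_prob(5.0))]
print(budgeted_sequential_classify(x, stages))
```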
arXiv Detail & Related papers (2022-05-01T22:05:16Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
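As a toy illustration of scheduling with untrusted predictions (a deliberate simplification of the paper's algorithms, which combine prediction-based orderings with robust strategies such as round-robin), the sketch below runs jobs in order of predicted size and compares the resulting total completion time against the clairvoyant optimum:
```python
import numpy as np

def total_completion_time(true_sizes, order):
    """Sum of completion times when jobs are run to completion in `order`."""
    t, total = 0.0, 0.0
    for j in order:
        t += true_sizes[j]
        total += t
    return total

rng = np.random.default_rng(3)
true_sizes = rng.exponential(scale=5.0, size=50)
predictions = true_sizes * rng.lognormal(mean=0.0, sigma=0.4, size=50)  # noisy guesses

spjf = np.argsort(predictions)        # shortest-predicted-job-first (trusts predictions)
opt = np.argsort(true_sizes)          # clairvoyant optimum, for comparison only
print("SPJF / OPT ratio:",
      total_completion_time(true_sizes, spjf) / total_completion_time(true_sizes, opt))
```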
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
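A deliberately tiny sketch of the three-stage idea, assuming k-means clustering, an AR(1) forecaster on each cluster mean, and a per-series level offset; the framework itself allows any clustering and forecasting method, so every modeling choice below is an assumption:
```python
import numpy as np

def cluster_and_conquer_forecast(Y, n_clusters=3, iters=10):
    """(1) cluster the series, (2) forecast each cluster's mean series with a
    simple AR(1) model, (3) forecast every series as its cluster forecast plus
    the series' own average offset.  Y has shape (n_series, n_timesteps)."""
    n, T = Y.shape
    rng = np.random.default_rng(0)
    centers = Y[rng.choice(n, n_clusters, replace=False)]
    for _ in range(iters):                                  # plain k-means
        labels = np.argmin(((Y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = Y[labels == c].mean(axis=0)
    forecasts = np.empty(n)
    for c in range(n_clusters):
        m = centers[c]
        phi = (m[:-1] @ m[1:]) / (m[:-1] @ m[:-1] + 1e-12)  # AR(1) coefficient
        cluster_next = phi * m[-1]
        members = labels == c
        offset = (Y[members] - m).mean(axis=1)              # per-series level shift
        forecasts[members] = cluster_next + offset
    return forecasts

Y = np.cumsum(np.random.default_rng(4).normal(size=(30, 100)), axis=1)
print(cluster_and_conquer_forecast(Y)[:5])
```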
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Evolutionary Optimization of High-Coverage Budgeted Classifiers [1.7767466724342065]
Budgeted multi-feature classifiers (MSC) process inputs through a sequence of partial feature acquisition and evaluation steps.
This paper proposes a problem-specific MSC that incorporates a terminal reject option for indecisive predictions.
The algorithm's design emphasizes efficiency while respecting a notion of aggregated performance via a uniqueization.
arXiv Detail & Related papers (2021-10-25T16:03:07Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
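The following sketch shows one simple way a bandit with censored, runtime-oriented feedback can be set up: runtimes beyond a timeout are only observed as censored values and the loss is capped accordingly, while a UCB-style rule picks the next algorithm. The specific rule and constants are illustrative, not the algorithms analyzed in the paper.
```python
import numpy as np

def censored_ucb_select(run_algorithm, n_algos, n_instances, timeout=10.0):
    """Online algorithm selection sketch with censored runtime feedback:
    the observed loss is the runtime capped at `timeout`."""
    counts = np.zeros(n_algos)
    mean_loss = np.zeros(n_algos)
    for t in range(1, n_instances + 1):
        if t <= n_algos:                       # try every candidate once
            a = t - 1
        else:                                  # UCB on negated (capped) runtime
            ucb = -mean_loss + np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(ucb))
        runtime = run_algorithm(a)
        loss = min(runtime, timeout)           # censored observation
        counts[a] += 1
        mean_loss[a] += (loss - mean_loss[a]) / counts[a]
    return mean_loss, counts

# Toy solvers with different (unknown) runtime distributions.
rng = np.random.default_rng(5)
true_scales = [2.0, 6.0, 15.0]
mean_loss, counts = censored_ucb_select(
    lambda a: rng.exponential(true_scales[a]), n_algos=3, n_instances=300)
print("pulls per algorithm:", counts)
```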
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Metalearning Linear Bandits by Prior Update [7.519872646378836]
Fully Bayesian approaches assume that problem parameters are generated from a known prior, while in practice, such information is often lacking.
This problem is exacerbated in decision-making setups with partial information, where using a misspecified prior may lead to poor exploration and inferior performance.
In this work we prove, in the context of linear bandits and Gaussian priors, that as long as the prior estimate is sufficiently close to the true prior, the performance of an algorithm that uses the misspecified prior is close to that of an algorithm that uses the true prior.
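A compact sketch of the setting: Thompson sampling for a linear bandit whose Gaussian prior mean is only an estimate (e.g., aggregated from earlier tasks). Comparing a close prior estimate with an uninformative one illustrates the kind of robustness statement made above; all dimensions, priors, and noise levels are made up for the example.
```python
import numpy as np

def linear_thompson(arms, theta_true, prior_mean, prior_cov, noise=0.1, horizon=200, rng=None):
    """Thompson sampling for a linear bandit with a Gaussian prior (sketch).
    `prior_mean`/`prior_cov` may be a *misspecified estimate* of the true prior."""
    rng = rng or np.random.default_rng(0)
    prec = np.linalg.inv(prior_cov)            # posterior precision starts at the prior
    b = prec @ prior_mean
    regret = 0.0
    for _ in range(horizon):
        theta_s = rng.multivariate_normal(np.linalg.solve(prec, b), np.linalg.inv(prec))
        a = int(np.argmax(arms @ theta_s))     # act greedily on the sampled model
        reward = arms[a] @ theta_true + noise * rng.normal()
        prec += np.outer(arms[a], arms[a]) / noise**2   # Bayesian linear-regression update
        b += arms[a] * reward / noise**2
        regret += np.max(arms @ theta_true) - arms[a] @ theta_true
    return regret

rng = np.random.default_rng(6)
arms = rng.normal(size=(20, 5))
theta_true = rng.normal(size=5)
close_prior = theta_true + 0.1 * rng.normal(size=5)   # e.g. estimated from earlier tasks
far_prior = np.zeros(5)
for name, mu in [("close prior", close_prior), ("uninformative prior", far_prior)]:
    print(name, linear_thompson(arms, theta_true, mu, np.eye(5), rng=np.random.default_rng(7)))
```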
arXiv Detail & Related papers (2021-07-12T11:17:01Z)
- Integrated Optimization of Predictive and Prescriptive Tasks [0.0]
We propose a new framework that directly integrates the predictive task into the prescriptive task.
We train the parameters of the predictive algorithm within a prescriptive problem via bilevel optimization techniques.
arXiv Detail & Related papers (2021-01-02T02:43:10Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
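As a minimal stand-in for the decision-theoretic idea (the paper fits much richer survival models), the sketch below estimates an exponential runtime model per algorithm from right-censored runs and selects the algorithm with the smallest expected PAR10-style cost:
```python
import numpy as np

def exp_mle_censored(times, censored):
    """MLE of an exponential runtime rate from right-censored data: timed-out
    runs contribute their observed time but no completion event."""
    events = np.sum(~censored)
    return events / np.sum(times)              # rate lambda

def expected_par10(rate, cutoff):
    """Expected PAR10-style cost under an exponential runtime model: a run is
    charged its runtime if it finishes before `cutoff`, else 10 * cutoff."""
    p_timeout = np.exp(-rate * cutoff)
    mean_if_done = (1.0 / rate) - cutoff * p_timeout / (1.0 - p_timeout)
    return (1.0 - p_timeout) * mean_if_done + p_timeout * 10.0 * cutoff

rng = np.random.default_rng(8)
cutoff = 20.0
scores = {}
for algo, scale in [("solver_a", 5.0), ("solver_b", 12.0)]:
    runs = rng.exponential(scale, size=200)
    censored = runs > cutoff                   # for timeouts only "> cutoff" is known
    observed = np.minimum(runs, cutoff)
    rate = exp_mle_censored(observed, censored)
    scores[algo] = expected_par10(rate, cutoff)
print("expected PAR10 per algorithm:", scores)
print("selected:", min(scores, key=scores.get))
```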
arXiv Detail & Related papers (2020-07-06T15:20:17Z)