Benchmarking optimality of time series classification methods in
distinguishing diffusions
- URL: http://arxiv.org/abs/2301.13112v3
- Date: Wed, 12 Apr 2023 03:49:48 GMT
- Title: Benchmarking optimality of time series classification methods in
distinguishing diffusions
- Authors: Zehong Zhang, Fei Lu, Esther Xu Fei, Terry Lyons, Yannis Kevrekidis,
and Tom Woolf
- Abstract summary: This study proposes to benchmark the optimality of TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT).
The LRT benchmarks are computationally efficient because the LRT does not require training, and the diffusion processes can be simulated efficiently and tailored to reflect the specific features of real-world applications.
- Score: 1.0775419935941009
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical optimality benchmarking is crucial for analyzing and designing
time series classification (TSC) algorithms. This study proposes to benchmark
the optimality of TSC algorithms in distinguishing diffusion processes by the
likelihood ratio test (LRT). The LRT is an optimal classifier by the
Neyman-Pearson lemma. The LRT benchmarks are computationally efficient because
the LRT does not require training, and the diffusion processes can be simulated
efficiently and tailored to reflect the specific features of real-world
applications. We demonstrate the benchmarking with three widely used TSC
algorithms: random forest, ResNet, and ROCKET. These algorithms can achieve
LRT optimality for univariate time series and multivariate Gaussian processes.
However, these model-agnostic algorithms are suboptimal in classifying
high-dimensional nonlinear multivariate time series. Additionally, the LRT
benchmark provides tools to analyze the dependence of classification accuracy
on the time length, dimension, temporal sampling frequency, and randomness of
the time series.
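To make the benchmark concrete, below is a minimal sketch (not the authors' code) of the LRT idea on a toy example: two Ornstein-Uhlenbeck diffusions dX_t = -theta*X_t dt + sigma*dW_t that differ only in the drift parameter theta. Because the OU transition density is Gaussian and known in closed form, the path log-likelihood under each model is exact, and the LRT assigns class 1 whenever log L_1 - log L_0 > 0 (Neyman-Pearson optimal under equal priors and 0-1 loss). A random forest on the raw trajectories serves as the model-agnostic comparison. The OU model, parameter values, and data sizes are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the LRT benchmark on two Ornstein-Uhlenbeck diffusions.
# All parameters below are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
dt, n_steps, sigma = 0.1, 100, 1.0
theta0, theta1 = 0.5, 1.0  # drift parameters of the two classes

def ou_params(theta):
    """One-step mean factor and variance of the exact OU transition density."""
    mean_fac = np.exp(-theta * dt)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta)
    return mean_fac, var

def simulate_ou(theta, n_paths):
    """Sample OU trajectories exactly, starting from X_0 = 0."""
    mean_fac, var = ou_params(theta)
    x = np.zeros((n_paths, n_steps + 1))
    for k in range(n_steps):
        x[:, k + 1] = mean_fac * x[:, k] + np.sqrt(var) * rng.standard_normal(n_paths)
    return x

def log_likelihood(x, theta):
    """Exact path log-likelihood: sum of Gaussian transition log-densities."""
    mean_fac, var = ou_params(theta)
    return norm.logpdf(x[:, 1:], loc=mean_fac * x[:, :-1],
                       scale=np.sqrt(var)).sum(axis=1)

# Balanced two-class sample of trajectories.
n = 2000
X = np.vstack([simulate_ou(theta0, n), simulate_ou(theta1, n)])
y = np.repeat([0, 1], n)

# LRT benchmark: no training needed; classify as class 1 when the
# log-likelihood ratio is positive (optimal by the Neyman-Pearson lemma).
lrt_pred = (log_likelihood(X, theta1) - log_likelihood(X, theta0) > 0).astype(int)
print("LRT accuracy:          ", (lrt_pred == y).mean())

# Model-agnostic comparison: random forest trained on the raw time series.
idx = rng.permutation(2 * n)
train, test = idx[:3000], idx[3000:]
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X[train], y[train])
print("Random forest accuracy:", rf.score(X[test], y[test]))
```

Rerunning the sketch with different n_steps, dt, or sigma shows how the gap to the LRT baseline varies with time length, sampling frequency, and noise level, in the spirit of the dependence analysis mentioned above.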
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Best-Subset Selection in Generalized Linear Models: A Fast and
Consistent Algorithm via Splicing Technique [0.6338047104436422]
Best subset selection has been widely regarded as the Holy Grail of problems of this type.
We propose and illustrate an algorithm for best-subset recovery under mild conditions.
Our implementation achieves approximately a fourfold speedup compared to popular variable selection toolkits.
arXiv Detail & Related papers (2023-08-01T03:11:31Z) - Generative modeling of time-dependent densities via optimal transport
and projection pursuit [3.069335774032178]
We propose a cheap alternative to popular deep learning algorithms for temporal modeling.
Our method is highly competitive compared with state-of-the-art solvers.
arXiv Detail & Related papers (2023-04-19T13:50:13Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Training Robust Deep Models for Time-Series Domain: Novel Algorithms and
Theoretical Analysis [32.45387153404849]
We propose a novel framework, referred to as RObust Training for Time-Series (RO-TS), to create robust DNNs for time-series classification tasks.
We show the generality and advantages of our formulation using the summation structure over time-series alignments.
Our experiments on real-world benchmarks demonstrate that RO-TS creates more robust DNNs when compared to adversarial training.
arXiv Detail & Related papers (2022-07-09T17:21:03Z) - Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wallclock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z) - STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame
Optical Flow Estimation [64.99259320624148]
We present a new lightweight CNN-based algorithm for multi-frame optical flow estimation.
The resulting STaRFlow algorithm gives state-of-the-art performance on MPI Sintel and KITTI 2015.
arXiv Detail & Related papers (2020-07-10T17:01:34Z) - Stochastic batch size for adaptive regularization in deep network
optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z) - Analysis of the Performance of Algorithm Configurators for Search
Heuristics with Global Mutation Operators [0.0]
ParamRLS can efficiently identify the optimal neighbourhood size to be used by local search.
We show that the simple ParamRLS-F can identify the optimal mutation rates even when using cutoff times that are considerably smaller than the expected optimisation time of the best parameter value for both problem classes.
arXiv Detail & Related papers (2020-04-09T12:42:30Z) - Robust Learning Rate Selection for Stochastic Optimization via Splitting
Diagnostic [5.395127324484869]
SplitSGD is a new dynamic learning rate schedule for stochastic optimization.
The method decreases the learning rate for better adaptation to the local geometry of the objective function.
It incurs essentially no additional computational cost compared to standard SGD.
arXiv Detail & Related papers (2019-10-18T19:38:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.