HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing
- URL: http://arxiv.org/abs/2202.08055v1
- Date: Wed, 16 Feb 2022 13:33:13 GMT
- Authors: Kenny Schlegel, Peer Neubert, Peter Protzel
- Abstract summary: MiniROCKET is one of the best existing methods for time series classification.
We extend this approach to provide better global temporal encodings using hyperdimensional computing (HDC) mechanisms.
The extension with HDC can achieve considerably better results on datasets with high temporal dependence without increasing the computational effort for inference.
- Score: 14.82489178857542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification of time series data is an important task for many application
domains. One of the best existing methods for this task, in terms of accuracy
and computation time, is MiniROCKET. In this work, we extend this approach to
provide better global temporal encodings using hyperdimensional computing (HDC)
mechanisms. HDC (also known as Vector Symbolic Architectures, VSA) is a general
method to explicitly represent and process information in high-dimensional
vectors. It has previously been used successfully in combination with deep
neural networks and other signal processing algorithms. We argue that the
internal high-dimensional representation of MiniROCKET is well suited to be
complemented by the algebra of HDC. This leads to a more general formulation,
HDC-MiniROCKET, where the original algorithm is only a special case. We will
discuss and demonstrate that HDC-MiniROCKET can systematically overcome
catastrophic failures of MiniROCKET on simple synthetic datasets. These results
are confirmed by experiments on the 128 datasets from the UCR time series
classification benchmark. The extension with HDC can achieve considerably
better results on datasets with high temporal dependence without increasing the
computational effort for inference.
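The core HDC mechanism behind this extension is binding each timestep's feature representation with a time hypervector before bundling, so temporal position survives the global pooling that plain summation destroys. A minimal sketch of the idea, using illustrative random symbol and time hypervectors rather than the paper's actual MiniROCKET features or its graded, similarity-preserving time encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (illustrative choice)

def rand_hv():
    # Random bipolar hypervector; independent draws are quasi-orthogonal.
    return rng.choice([-1.0, 1.0], size=D)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Symbol hypervectors standing in for per-timestep features.
A, B = rand_hv(), rand_hv()
seq1 = [A, B, A, A]
seq2 = [A, A, A, B]  # same multiset of features, different order

# One random time hypervector per position (HDC-MiniROCKET instead uses
# graded time encodings so nearby timestamps stay similar; random ones
# suffice to illustrate the effect of binding).
times = [rand_hv() for _ in range(4)]

# Plain bundling (sum) discards order: both sequences encode identically.
plain1, plain2 = sum(seq1), sum(seq2)

# Bind (elementwise multiply) each feature with its time vector, then bundle.
timed1 = sum(f * t for f, t in zip(seq1, times))
timed2 = sum(f * t for f, t in zip(seq2, times))

print(cos(plain1, plain2))  # ~1.0: order is invisible without time encoding
print(cos(timed1, timed2))  # clearly below 1: order now matters
```

The elementwise product used here is the MAP-style binding operator; its key property is that bound pairs are nearly orthogonal to their constituents, so features at different timesteps no longer collapse onto each other in the bundle.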
Related papers
- Classification of Raw MEG/EEG Data with Detach-Rocket Ensemble: An Improved ROCKET Algorithm for Multivariate Time Series Analysis [0.0]
We present a novel ROCKET-based algorithm, named Detach-Rocket Ensemble, specifically designed to deal with high-dimensional data such as EEG and MEG.
Our algorithm leverages pruning to provide an integrated estimation of channel importance, and ensembles to achieve better accuracy and provide a label probability.
We show that Detach-Rocket Ensemble is able to provide both interpretable channel relevance and competitive classification accuracy, even when applied directly to the raw brain data.
arXiv Detail & Related papers (2024-08-05T18:24:09Z)
- Holographic Global Convolutional Networks for Long-Range Prediction Tasks in Malware Detection [50.7263393517558]
We introduce Holographic Global Convolutional Networks (HGConv), which utilize the properties of Holographic Reduced Representations (HRR).
Unlike other global convolutional methods, our method does not require any intricate kernel computation or crafted kernel design.
The proposed method has achieved new SOTA results on Microsoft Malware Classification Challenge, Drebin, and EMBER malware benchmarks.
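HRR, the representation underlying HGConv, binds vectors by circular convolution, which can be computed in O(D log D) via the FFT; a minimal sketch of HRR binding and approximate unbinding (this illustrates the HRR algebra only, not HGConv itself):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1024  # HRR dimensionality (illustrative choice)

def hrr():
    # HRR vectors: i.i.d. Gaussian entries with variance 1/D.
    return rng.normal(0.0, 1.0 / np.sqrt(D), size=D)

def bind(a, b):
    # Binding in HRR is circular convolution, done in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate unbinding: convolve with the involution a*, a*[i] = a[-i mod D],
    # which implements circular correlation with the key.
    return bind(c, np.roll(a[::-1], 1))

key, value = hrr(), hrr()
trace = bind(key, value)
recovered = unbind(trace, key)
sim = recovered @ value / (np.linalg.norm(recovered) * np.linalg.norm(value))
print(sim)  # well above chance, though noisy: HRR recovery is approximate
```

The recovered vector is a noisy copy of the bound value, so HRR systems typically pass it through a cleanup memory of known vectors to snap it back to an exact item.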
arXiv Detail & Related papers (2024-03-23T15:49:13Z)
- CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and for scaling to larger numbers of machines.
We propose Common randOm REconstruction (CORE), which can be used to compress the information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z)
- Streaming Encoding Algorithms for Scalable Hyperdimensional Computing [12.829102171258882]
Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.
In this work, we explore a family of streaming encoding techniques based on hashing.
We show formally that these methods enjoy comparable guarantees on performance for learning applications while being substantially more efficient than existing alternatives.
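One way a hashing-based encoder can suit streaming is to derive each item's hypervector deterministically from a hash instead of storing an explicit codebook, so items are folded into a running sum in constant memory as they arrive. An illustrative sketch of that general idea (an assumed construction, not the specific encoders or guarantees of the paper):

```python
import hashlib
import numpy as np

D = 1024  # hypervector dimensionality (illustrative choice)

def token_hv(token: str) -> np.ndarray:
    # Derive a deterministic seed from the token via a hash, then expand it
    # into a bipolar hypervector, so no codebook of random vectors is stored.
    seed = int.from_bytes(
        hashlib.blake2b(token.encode(), digest_size=8).digest(), "big")
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=D)

class StreamingEncoder:
    def __init__(self):
        self.acc = np.zeros(D)
    def update(self, token: str):
        self.acc += token_hv(token)  # constant memory per arriving item
    def vector(self) -> np.ndarray:
        return self.acc

enc1, enc2 = StreamingEncoder(), StreamingEncoder()
for t in "the quick brown fox".split():
    enc1.update(t)
for t in "the quick brown cat".split():
    enc2.update(t)

s = enc1.vector() @ enc2.vector() / (
    np.linalg.norm(enc1.vector()) * np.linalg.norm(enc2.vector()))
print(s)  # streams sharing most tokens yield similar encodings
```

Because the hypervector is a pure function of the token, two independent machines encoding the same stream produce identical vectors without coordinating a codebook.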
arXiv Detail & Related papers (2022-09-20T17:25:14Z)
- HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration [4.783565770657063]
We introduce HDTorch, an open-source, PyTorch-based HDC library with extensions for hypervector operations.
We analyze four HDC benchmark datasets in terms of accuracy, runtime, and memory consumption.
We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database.
arXiv Detail & Related papers (2022-06-09T19:46:08Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
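Circular data (angles, phases, times of day) needs an encoding whose similarity wraps around the circle. One simple construction in this spirit uses random integer frequencies so the encoding is 2π-periodic; this is an illustrative stand-in, not the paper's basis-hypervector sets:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4096  # hypervector dimensionality (illustrative choice)

# Fractional-power-style phasor encoding with random *integer* frequencies:
# integer frequencies make every component 2*pi-periodic, so angles that are
# close on the circle (including across the 0 / 2*pi wrap-around) receive
# similar hypervectors.
freqs = rng.integers(-3, 4, size=D)          # random integer frequencies
phases = rng.uniform(0, 2 * np.pi, size=D)   # random base phases

def encode(theta: float) -> np.ndarray:
    return np.exp(1j * (phases + freqs * theta))

def sim(u, v) -> float:
    return float(np.real(np.vdot(u, v)) / D)

a = encode(0.1)
b = encode(2 * np.pi - 0.1)  # close to a, but across the wrap-around
c = encode(np.pi)            # far from a on the circle

print(sim(a, b), sim(a, c))  # high similarity across the wrap, low at pi
```

A non-periodic level encoding would treat 0.1 and 2π − 0.1 as maximally distant; the integer-frequency constraint is what makes the similarity kernel a function of circular distance.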
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require that all of the data be stored in main memory.
We propose StreaMRAK, a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Providing Meaningful Data Summarizations Using Examplar-based Clustering in Industry 4.0 [67.80123919697971]
We show that our GPU implementation provides speedups of up to 72x using single-precision and up to 452x using half-precision compared to conventional CPU algorithms.
We apply our algorithm to real-world data from injection molding manufacturing processes and discuss how found summaries help with steering this specific process to cut costs and reduce the manufacturing of bad parts.
arXiv Detail & Related papers (2021-05-25T15:55:14Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification [5.519586522442065]
ROCKET achieves state-of-the-art accuracy with a fraction of the computational expense of most existing methods.
We reformulate ROCKET into a new method, MINIROCKET, making it up to 75 times faster on larger datasets.
It is possible to train and test a classifier on all 109 datasets from the UCR archive to state-of-the-art accuracy in less than 10 minutes.
arXiv Detail & Related papers (2020-12-16T08:24:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.