HIT-ROCKET: Hadamard-vector Inner-product Transformer for ROCKET
- URL: http://arxiv.org/abs/2511.01572v1
- Date: Mon, 03 Nov 2025 13:39:40 GMT
- Title: HIT-ROCKET: Hadamard-vector Inner-product Transformer for ROCKET
- Authors: Wang Hao, Kuang Zhang, Hou Chengyu, Yuan Zhonghao, Tan Chenxing, Fu Weifeng, Zhu Yangying
- Abstract summary: Time series classification holds broad application value in communications, information countermeasures, finance, and medicine. State-of-the-art (SOTA) methods exhibit high computational complexity, coupled with lengthy parameter tuning and training cycles. We propose a feature extraction approach based on the Hadamard convolutional transform, utilizing column or row vectors of Hadamard matrices as convolution kernels with extended lengths of varying sizes.
- Score: 0.039089069256361735
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time series classification holds broad application value in communications, information countermeasures, finance, and medicine. However, state-of-the-art (SOTA) methods, including HIVE-COTE, Proximity Forest, and TS-CHIEF, exhibit high computational complexity, coupled with lengthy parameter tuning and training cycles. In contrast, lightweight solutions like ROCKET (Random Convolutional Kernel Transform) offer greater efficiency but leave substantial room for improvement in kernel selection and computational overhead. To address these challenges, we propose a feature extraction approach based on the Hadamard convolutional transform, utilizing column or row vectors of Hadamard matrices as convolution kernels with extended lengths of varying sizes. This enhancement maintains full compatibility with existing methods (e.g., ROCKET) while leveraging kernel orthogonality to boost computational efficiency, robustness, and adaptability. Comprehensive experiments on multi-domain datasets, focusing on the UCR time series archive, demonstrate SOTA performance: the F1-score improves by at least 5% over ROCKET, with 50% shorter training time than miniROCKET (the fastest ROCKET variant) under identical hyperparameters, enabling deployment on ultra-low-power embedded devices. All code is available on GitHub.
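As a concrete illustration of the core idea, the NumPy sketch below uses rows of a Hadamard matrix as ROCKET-style convolution kernels, with PPV and max pooling per (kernel, dilation) pair. The kernel order, dilation set, and pooling choices here are illustrative assumptions, not necessarily HIT-ROCKET's exact configuration:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_kernels(order=8):
    """Rows of an (order x order) Hadamard matrix, entries in {+1, -1}."""
    return hadamard(order).astype(np.float64)  # each row is one orthogonal kernel

def transform(x, kernels, dilations=(1, 2, 4)):
    """ROCKET-style features: PPV and max pooling per (kernel, dilation)."""
    feats = []
    for d in dilations:
        for k in kernels:
            # dilate the kernel by inserting zeros between taps
            kd = np.zeros((len(k) - 1) * d + 1)
            kd[::d] = k
            y = np.convolve(x, kd, mode="valid")
            feats.append((y > 0).mean())  # PPV: proportion of positive values
            feats.append(y.max())         # max pooling
    return np.array(feats)

x = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
features = transform(x, hadamard_kernels(8))
print(features.shape)  # (48,) = 8 kernels x 3 dilations x 2 poolings
```

Because Hadamard rows are mutually orthogonal and take values only in {+1, -1}, the convolutions can in principle be computed with additions and subtractions alone, which is consistent with the paper's efficiency and embedded-deployment claims.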
Related papers
- CUROCKET: Optimizing ROCKET for GPU [0.0]
ROCKET (RandOm Convolutional KErnel Transform) is a feature extraction algorithm created for Time Series Classification (TSC). We propose an algorithm that efficiently performs ROCKET on GPU and achieves up to 11 times higher computational efficiency per watt than ROCKET on CPU.
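CUROCKET's implementation details are not given in this summary; the following PyTorch sketch only illustrates how ROCKET-style random convolutions can be batched on a GPU (the kernel count and length are arbitrary choices):

```python
import torch
import torch.nn.functional as F

def rocket_gpu(x, num_kernels=1000, klen=9, device="cuda"):
    """Batched random convolutions + PPV pooling on GPU.
    x: (batch, length) float tensor."""
    device = device if torch.cuda.is_available() else "cpu"
    x = x.to(device).unsqueeze(1)                      # (B, 1, L)
    w = torch.randn(num_kernels, 1, klen, device=device)
    w -= w.mean(dim=-1, keepdim=True)                  # zero-mean weights, as in ROCKET
    y = F.conv1d(x, w)                                 # (B, K, L - klen + 1)
    return (y > 0).float().mean(dim=-1)                # one PPV feature per kernel

feats = rocket_gpu(torch.randn(32, 500))
print(feats.shape)  # torch.Size([32, 1000])
```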
arXiv Detail & Related papers (2026-01-23T11:21:52Z)
- SPROCKET: Extending ROCKET to Distance-Based Time-Series Transformations With Prototypes [0.0]
SPROCKET implements a new feature engineering strategy based on prototypes. On a majority of the UCR and UEA Time Series Classification archives, SPROCKET achieves performance comparable to existing convolutional algorithms.
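The summary does not specify how the prototypes are used; one plausible form of a distance-based transformation (a hypothetical simplification, not necessarily SPROCKET's design) is the distance from each series to a set of prototype series:

```python
import numpy as np

def prototype_features(X, prototypes):
    """Euclidean distance from each series to each prototype.
    X: (n, L), prototypes: (p, L) -> (n, p) feature matrix."""
    return np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=-1)

X = np.random.randn(100, 150)
protos = X[np.random.choice(100, 5, replace=False)]  # e.g. sampled prototypes
print(prototype_features(X, protos).shape)  # (100, 5)
```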
arXiv Detail & Related papers (2025-12-09T05:00:01Z)
- Singular Value Decomposition on Kronecker Adaptation for Large Language Model [0.8747606955991707]
Large pre-trained Transformer models achieve state-of-the-art results across diverse language and reasoning tasks. However, full fine-tuning incurs substantial storage, memory, and computational overhead. We propose SoKA, a novel PEFT strategy that combines Kronecker-product tensor factorization with SVD-driven initialization and dynamic rank selection.
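SoKA's exact procedure is not detailed here, but the SVD-driven Kronecker factorization it builds on can be illustrated with the classic nearest-Kronecker-product construction (Van Loan's rearrangement); the shapes below are arbitrary:

```python
import numpy as np

def nearest_kronecker(W, m1, n1, m2, n2):
    """Best A (m1 x n1), B (m2 x n2) with W ~= kron(A, B), via rank-1 SVD
    of the Van Loan rearrangement of W."""
    blocks = [W[i*m2:(i+1)*m2, j*n2:(j+1)*n2].ravel()
              for i in range(m1) for j in range(n1)]
    R = np.stack(blocks)                        # (m1*n1, m2*n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B

W = np.random.randn(8, 12)                      # pretend weight update
A, B = nearest_kronecker(W, 2, 3, 4, 4)
print(np.linalg.norm(W - np.kron(A, B)))        # approximation error
```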
arXiv Detail & Related papers (2025-06-18T08:28:53Z)
- GMR-Conv: An Efficient Rotation and Reflection Equivariant Convolution Kernel Using Gaussian Mixture Rings [4.365790707793824]
We introduce an efficient convolution kernel that smooths radial symmetry using a mixture of Gaussian-weighted rings. Experiments on eight classification datasets and one segmentation dataset demonstrate that GMR-Conv not only matches conventional CNNs' performance but can also surpass it in applications with orientation-less data.
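A rough sketch of the ring idea: spread each weight over a circle of fixed radius with a Gaussian radial profile, which makes the kernel rotation symmetric by construction. The radii, weights, and sigma below are illustrative, not GMR-Conv's learned values:

```python
import numpy as np

def gaussian_ring_kernel(size=9, radii=(1.0, 2.5), weights=(1.0, -0.5), sigma=0.7):
    """Rotation-symmetric 2D kernel: each weight is spread over a ring of
    radius r with a Gaussian radial profile."""
    c = (size - 1) / 2
    yy, xx = np.mgrid[0:size, 0:size]
    rho = np.hypot(yy - c, xx - c)              # distance from kernel center
    k = sum(w * np.exp(-((rho - r) ** 2) / (2 * sigma ** 2))
            for r, w in zip(radii, weights))
    return k / np.abs(k).sum()                  # normalize

print(gaussian_ring_kernel().round(2))
```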
arXiv Detail & Related papers (2025-04-03T17:58:18Z)
- Sliding Window Attention Training for Efficient Large Language Models [55.56483740523027]
We introduce SWAT, which enables efficient long-context handling via Sliding Window Attention Training. This paper first attributes the inefficiency of Transformers to the attention-sink phenomenon. We then replace softmax with the sigmoid function and utilize balanced ALiBi and Rotary Position Embedding for efficient information compression and retention.
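The summary names three ingredients: sliding windows, sigmoid scores, and ALiBi. A toy PyTorch sketch combining them follows; the normalization, slope value, and omission of RoPE are assumptions, not SWAT's exact formulation:

```python
import torch

def sigmoid_window_attention(q, k, v, window=64, alibi_slope=0.05):
    """Toy sliding-window attention with sigmoid scores and an ALiBi-style
    linear distance penalty. q, k, v: (seq, dim)."""
    T, d = q.shape
    out = torch.zeros_like(v)
    for t in range(T):
        lo = max(0, t - window + 1)
        scores = q[t] @ k[lo:t + 1].T / d ** 0.5
        dist = torch.arange(t - lo, -1, -1, dtype=q.dtype)  # distances to position t
        w = torch.sigmoid(scores - alibi_slope * dist)      # sigmoid, not softmax
        out[t] = (w / (w.sum() + 1e-6)) @ v[lo:t + 1]
    return out

q = k = v = torch.randn(128, 32)
print(sigmoid_window_attention(q, k, v).shape)  # torch.Size([128, 32])
```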
arXiv Detail & Related papers (2025-02-26T05:31:44Z)
- Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach [58.57026686186709]
We introduce the Convolutional Transformer layer (ConvFormer) and propose a ConvFormer-based Super-Resolution network (CFSR).
CFSR inherits the advantages of both convolution-based and transformer-based approaches.
Experiments demonstrate that CFSR strikes an optimal balance between computational cost and performance.
arXiv Detail & Related papers (2024-01-11T03:08:00Z)
- Detach-ROCKET: Sequential feature selection for time series classification with random convolutional kernels [0.7499722271664144]
We introduce Sequential Feature Detachment (SFD) to identify and prune non-essential features in ROCKET-based models.
SFD can produce models with better test accuracy using only 10% of the original features.
We also present an end-to-end procedure for determining an optimal balance between the number of features and model accuracy.
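A simplified reading of SFD: repeatedly fit a linear classifier on the active features and detach those with the smallest coefficient magnitudes. The classifier choice and detachment schedule below are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def sequential_feature_detachment(X, y, keep_frac=0.1, step=0.2):
    """Iteratively retrain a linear classifier and detach the features with
    the smallest absolute coefficients (simplified view of SFD)."""
    active = np.arange(X.shape[1])
    target = int(keep_frac * X.shape[1])
    while len(active) > target:
        clf = RidgeClassifier().fit(X[:, active], y)
        importance = np.abs(clf.coef_).max(axis=0)   # per-feature score
        n_drop = max(1, int(step * len(active)))
        keep = np.argsort(importance)[n_drop:]       # drop the weakest features
        active = active[np.sort(keep)]
    return active

X = np.random.randn(200, 500); y = np.random.randint(0, 2, 200)
print(len(sequential_feature_detachment(X, y)))  # roughly 10% of 500 retained
```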
arXiv Detail & Related papers (2023-09-25T20:24:36Z)
- POCKET: Pruning Random Convolution Kernels for Time Series Classification from a Feature Selection Perspective [8.359327841946852]
A time series classification model, POCKET, is designed to efficiently prune redundant kernels.
Experimental results on diverse time series datasets show that POCKET prunes up to 60% of kernels without a significant reduction in accuracy and runs 11 times faster than its counterparts.
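One plausible sketch of feature-selection-based kernel pruning: score each kernel by the weight norm of its feature group under a sparse linear model and drop the weakest groups. The L1-regularized model and group size of two are assumptions for illustration, not POCKET's exact algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def prune_kernels(X, y, feats_per_kernel=2, prune_frac=0.6):
    """Score each kernel by the weight mass of its feature group in a sparse
    linear model, then drop the weakest groups (simplified view)."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    w = np.abs(clf.coef_).sum(axis=0)                      # (n_features,)
    groups = w.reshape(-1, feats_per_kernel).sum(axis=1)   # score per kernel
    n_keep = int(len(groups) * (1 - prune_frac))
    return np.argsort(groups)[::-1][:n_keep]               # kept kernel indices

X = np.random.randn(300, 400); y = np.random.randint(0, 2, 300)
print(len(prune_kernels(X, y)))  # 80 kernels kept out of 200
```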
arXiv Detail & Related papers (2023-09-15T16:03:23Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
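The random-feature idea can be illustrated with ReLU random features, whose inner products approximate the first-order arc-cosine kernel (the NNGP kernel of a one-hidden-layer ReLU network); RFAD's actual construction may differ:

```python
import numpy as np

def random_relu_features(X, n_features=2048, seed=0):
    """Monte Carlo features phi(x) = relu(Wx) / sqrt(m): their inner products
    approximate E_w[relu(w.x) relu(w.x')], the ReLU arc-cosine kernel."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, X.shape[1]))
    return np.maximum(W @ X.T, 0).T / np.sqrt(n_features)

X = np.random.randn(10, 5)
Phi = random_relu_features(X)
K_approx = Phi @ Phi.T            # approximates the NNGP kernel matrix
print(K_approx.shape)             # (10, 10)
```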
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope-smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
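For reference, the partial AUC metric itself is straightforward to evaluate, even though optimizing it at scale is the hard part the paper addresses. A sketch of the metric over a chosen FPR range (the range and grid size are arbitrary):

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_lo=0.0, fpr_hi=0.3):
    """Area under the ROC curve restricted to FPR in [fpr_lo, fpr_hi],
    normalized by the width of the range (evaluation only)."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(fpr_lo, fpr_hi, 200)
    tpr_i = np.interp(grid, fpr, tpr)          # ROC curve sampled on the FPR grid
    area = np.sum((tpr_i[1:] + tpr_i[:-1]) * np.diff(grid)) / 2  # trapezoid rule
    return area / (fpr_hi - fpr_lo)

y = np.random.randint(0, 2, 500)
s = y + 0.8 * np.random.randn(500)             # noisy scores
print(round(partial_auc(y, s), 3))
```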
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding [63.539333383965726]
We propose a novel way to accelerate attention calculation for Transformers with relative positional encoding (RPE). Based upon the observation that relative positional encoding forms a Toeplitz matrix, we mathematically show that kernelized attention with RPE can be calculated efficiently using the Fast Fourier Transform (FFT).
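The Toeplitz-to-FFT idea rests on the classic circulant-embedding matvec, sketched below: embed the n-by-n Toeplitz matrix in a 2n circulant and multiply via FFT in O(n log n):

```python
import numpy as np
from scipy.linalg import toeplitz  # used only to check the result

def toeplitz_matvec_fft(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r
    (r[0] == c[0]) by x in O(n log n), via embedding in a 2n circulant."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant
    xp = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

c, r = np.random.randn(6), np.random.randn(6)
r[0] = c[0]
x = np.random.randn(6)
print(np.allclose(toeplitz(c, r) @ x, toeplitz_matvec_fft(c, r, x)))  # True
```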
arXiv Detail & Related papers (2021-06-23T17:51:26Z)
- VSAC: Efficient and Accurate Estimator for H and F [68.65610177368617]
VSAC is a RANSAC-type robust estimator with a number of novelties.
It is significantly faster than all its predecessors, running on average in 1-2 ms on a CPU. It is two orders of magnitude faster than, and yet as precise as, MAGSAC++, currently the most accurate estimator of two-view geometry.
arXiv Detail & Related papers (2021-06-18T17:04:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.