Random Convolution Kernels with Multi-Scale Decomposition for Preterm
EEG Inter-burst Detection
- URL: http://arxiv.org/abs/2108.02039v1
- Date: Wed, 4 Aug 2021 13:07:41 GMT
- Title: Random Convolution Kernels with Multi-Scale Decomposition for Preterm
EEG Inter-burst Detection
- Authors: Christopher Lundy (1 and 2) and John M. O'Toole (1 and 2) ((1) Irish
Centre for Maternal and Child Health Research (INFANT), University College
Cork, Ireland, (2) Department of Paediatrics and Child Health, University
College Cork, Ireland)
- Abstract summary: Linear classifiers with random convolution kernels are computationally efficient methods that need no design or domain knowledge.
A recently proposed method, RandOm Convolutional KErnel Transforms, has shown high accuracy across a range of time-series data sets.
We propose a multi-scale version of this method, using both high- and low-frequency components.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear classifiers with random convolution kernels are computationally
efficient methods that need no design or domain knowledge. Unlike deep neural
networks, there is no need to hand-craft a network architecture; the kernels
are randomly generated and only the linear classifier needs training. A
recently proposed method, RandOm Convolutional KErnel Transforms (ROCKETs), has
shown high accuracy across a range of time-series data sets. Here we propose a
multi-scale version of this method, using both high- and low-frequency
components. We apply our methods to inter-burst detection in a cohort of
preterm EEG recorded from 36 neonates <30 weeks gestational age. Two features
from the convolution of 10,000 random kernels are combined using ridge
regression. The proposed multi-scale ROCKET method out-performs the method
without scale: median (interquartile range, IQR) Matthews correlation
coefficient (MCC) of 0.859 (0.815 to 0.874) for multi-scale versus 0.841 (0.807
to 0.865) without scale, p<0.001. The proposed method lags behind an existing
feature-based machine learning method developed with deep domain knowledge, but
is fast to train and can quickly set an initial baseline threshold of
performance for generic and biomedical time-series classification.
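To make the pipeline concrete, here is a minimal sketch in the spirit of the method described above: random dilated kernels, two features per convolution (proportion of positive values and maximum, as in ROCKET), a simple moving-average low-pass branch standing in for the paper's multi-scale decomposition, and a ridge classifier. The kernel parameters, the low-pass filter, and the use of `RidgeClassifierCV` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(0)

def random_kernels(n_kernels, input_length):
    """Random dilated kernels, roughly following the ROCKET recipe."""
    kernels = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(size=length)
        weights -= weights.mean()                      # zero-mean weights
        bias = rng.uniform(-1.0, 1.0)
        max_exp = np.log2((input_length - 1) / (length - 1))
        dilation = int(2 ** rng.uniform(0, max_exp))   # exponential sampling
        kernels.append((weights, bias, dilation))
    return kernels

def transform(x, kernels):
    """Two features per kernel: proportion of positive values (PPV) and max."""
    feats = []
    for weights, bias, dilation in kernels:
        w = np.zeros((len(weights) - 1) * dilation + 1)
        w[::dilation] = weights                        # embed dilation as zeros
        conv = np.convolve(x, w, mode="valid") + bias
        feats.extend([np.mean(conv > 0), conv.max()])
    return np.array(feats)

def multiscale_features(x, kernels, window=8):
    """Concatenate features from the raw signal and a low-pass version;
    the moving-average filter is an assumed stand-in for the paper's
    multi-scale decomposition."""
    low = np.convolve(x, np.ones(window) / window, mode="same")
    return np.concatenate([transform(x, kernels), transform(low, kernels)])

# Toy usage: random one-channel epochs with binary burst/inter-burst labels.
X = rng.normal(size=(20, 256))
y = rng.integers(0, 2, size=20)
kernels = random_kernels(100, X.shape[1])              # the paper uses 10,000
F = np.stack([multiscale_features(x, kernels) for x in X])
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(F, y)
print("training accuracy:", clf.score(F, y))
```

Only the linear classifier is trained; the kernels stay fixed after random generation, which is what makes the method fast to train.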
Related papers
- Convolutional Deep Kernel Machines [25.958907308877148]
Recent work modified the Neural Network Gaussian Process (NNGP) limit of Bayesian neural networks so that representation learning is retained.
Applying this modified limit to a deep Gaussian process gives a practical learning algorithm which they dubbed the deep kernel machine (DKM).
arXiv Detail & Related papers (2023-09-18T14:36:17Z) - POCKET: Pruning Random Convolution Kernels for Time Series Classification from a Feature Selection Perspective [8.359327841946852]
A time series classification model, POCKET, is designed to efficiently prune redundant kernels.
Experimental results on diverse time series datasets show that POCKET prunes up to 60% of kernels without a significant reduction in accuracy and runs 11× faster than its counterparts (a hypothetical pruning sketch appears after this list).
arXiv Detail & Related papers (2023-09-15T16:03:23Z) - Scaling Up 3D Kernels with Bayesian Frequency Re-parameterization for
Medical Image Segmentation [25.62587471067468]
RepUX-Net is a pure CNN architecture with a simple large kernel block design.
Inspired by the spatial frequency in the human visual system, we vary the kernel convergence in an element-wise setting.
arXiv Detail & Related papers (2023-03-10T08:38:34Z) - A Simple Algorithm For Scaling Up Kernel Methods [0.0]
We introduce a novel random feature regression algorithm that allows us to scale to virtually infinite numbers of random features.
We illustrate the performance of our method on the CIFAR-10 dataset.
arXiv Detail & Related papers (2023-01-26T20:59:28Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Efficient Approximate Kernel Based Spike Sequence Classification [56.2938724367661]
Machine learning models, such as SVM, require a definition of distance/similarity between pairs of sequences.
Exact methods yield better classification performance, but they pose high computational costs.
We propose a series of improvements to the approximate kernel to enhance its predictive performance.
arXiv Detail & Related papers (2022-09-11T22:44:19Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires substantially fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of the method and support the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Accurate Tumor Tissue Region Detection with Accelerated Deep
Convolutional Neural Networks [12.7414209590152]
Manual annotation of pathology slides for cancer diagnosis is laborious and repetitive.
Our approach, FLASH, is based on a Deep Convolutional Neural Network (DCNN) architecture.
It reduces computational costs and is faster than typical deep learning approaches by two orders of magnitude.
arXiv Detail & Related papers (2020-04-18T08:24:27Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
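As a companion to the multi-scale sketch above, here is a hypothetical illustration of pruning random kernels from a feature-selection perspective, in the spirit of POCKET's stated goal; it is not the POCKET algorithm itself. Each kernel/scale pair contributes two consecutive feature columns, so a group's importance can be scored by the norm of the fitted ridge weights on its columns, and the weakest groups dropped.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def prune_kernel_features(F, y, keep_fraction=0.4, feats_per_group=2):
    """Score each group of consecutive feature columns (one group per
    kernel/scale pair) by the L2 norm of its fitted ridge weights and
    keep only the strongest fraction of groups."""
    clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(F, y)
    w = np.atleast_2d(clf.coef_)                    # (n_outputs, n_features)
    col_norms = np.linalg.norm(w, axis=0)
    group_scores = col_norms.reshape(-1, feats_per_group).sum(axis=1)
    n_keep = max(1, int(keep_fraction * group_scores.size))
    keep = np.argsort(group_scores)[::-1][:n_keep]  # strongest groups first
    cols = (keep[:, None] * feats_per_group
            + np.arange(feats_per_group)).ravel()
    return np.sort(cols)                            # surviving feature columns

# Usage with F, y from the sketch above: retrain on surviving columns.
# cols = prune_kernel_features(F, y, keep_fraction=0.4)
# clf_small = RidgeClassifierCV().fit(F[:, cols], y)
```

Weight-magnitude scoring is only one of many feature-selection criteria; it is used here because it reuses the ridge classifier already fitted in the main pipeline.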
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.