Semi-Periodic Activation for Time Series Classification
- URL: http://arxiv.org/abs/2412.09889v1
- Date: Fri, 13 Dec 2024 06:06:49 GMT
- Title: Semi-Periodic Activation for Time Series Classification
- Authors: José Gilberto Barbosa de Medeiros Júnior, Andre Guarnier de Mitri, Diego Furtado Silva
- Abstract summary: The study comprehensively analyzes properties such as boundedness, monotonicity, nonlinearity, and periodicity for activations in time series neural networks.
We propose a new activation that maximizes the coverage of these properties, called LeakySineLU.
- Score: 1.6631602844999722
- License:
- Abstract: This paper investigates the lack of research on activation functions for neural network models in time series tasks. It highlights the need to identify essential properties of these activations to improve their effectiveness in specific domains. To this end, the study comprehensively analyzes properties such as boundedness, monotonicity, nonlinearity, and periodicity for activations in time series neural networks. We propose a new activation that maximizes the coverage of these properties, called LeakySineLU. We empirically evaluate LeakySineLU against commonly used activations in the literature on 112 benchmark datasets for time series classification, obtaining the best average ranking in all comparative scenarios.
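The listing does not reproduce the exact definition of LeakySineLU, but a semi-periodic activation in its spirit can be sketched as a leaky linear term plus a squared-sine periodic term. The 0.1 negative slope and the sin^2 component below are illustrative assumptions, not the authors' published formula:

```python
import torch
import torch.nn as nn

class SemiPeriodicActivation(nn.Module):
    """Illustrative semi-periodic activation (not the paper's exact LeakySineLU).

    A leaky linear term keeps the function unbounded and roughly monotonic,
    while the squared-sine term adds the periodic, nonlinear component the
    abstract argues is useful for time series models.
    """

    def __init__(self, negative_slope: float = 0.1):
        super().__init__()
        self.negative_slope = negative_slope  # assumed value, for illustration only

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        leaky = torch.where(x > 0, x, self.negative_slope * x)  # leaky linear part
        return leaky + torch.sin(x) ** 2                        # periodic part
```

Such a module could be dropped in wherever `nn.ReLU()` would appear in a time series classifier.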
Related papers
- Not All Diffusion Model Activations Have Been Evaluated as Discriminative Features [115.33889811527533]
Diffusion models are initially designed for image generation.
Recent research shows that the internal signals within their backbones, named activations, can also serve as dense features for various discriminative tasks.
arXiv Detail & Related papers (2024-10-04T16:05:14Z)
- Frequency and Generalisation of Periodic Activation Functions in Reinforcement Learning [9.6812227037557]
We show that periodic activations learn low-frequency representations and, as a result, avoid overfitting to bootstrapped targets.
We also show that weight decay regularization is able to partially offset the overfitting of periodic activation functions.
arXiv Detail & Related papers (2024-07-09T11:07:41Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Improving Classification Neural Networks by using Absolute activation function (MNIST/LeNET-5 example) [0.0]
It is shown that in deep networks the Absolute activation does not cause vanishing or exploding gradients, and it can therefore be used in both shallow and deep neural networks.
It is also shown that solving MNIST with LeNet-like architectures based on the Absolute activation significantly reduces the number of trained parameters while improving prediction accuracy.
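For reference, the Absolute activation is simply f(x) = |x|; a minimal sketch (the module name is ours):

```python
import torch
import torch.nn as nn

class Absolute(nn.Module):
    """Absolute-value activation f(x) = |x|. Its derivative is +/-1 everywhere
    except at zero, which is why deep stacks of it neither vanish nor explode
    gradients the way saturating activations can."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.abs(x)
```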
arXiv Detail & Related papers (2023-04-23T22:17:58Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Stochastic Adaptive Activation Function [1.9199289015460212]
This study proposes a simple yet effective activation function that facilitates different thresholds and adaptive activations according to the positions of units and the contexts of inputs.
Experimental analysis demonstrates that our activation function can provide the benefits of more accurate prediction and earlier convergence in many deep learning applications.
arXiv Detail & Related papers (2022-10-21T01:57:25Z)
- Meta-Learning for Koopman Spectral Analysis with Short Time-series [49.41640137945938]
Existing methods require long time-series for training neural networks.
We propose a meta-learning method for estimating embedding functions from unseen short time-series.
We experimentally demonstrate that the proposed method achieves better performance in terms of eigenvalue estimation and future prediction.
arXiv Detail & Related papers (2021-02-09T07:19:19Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete with and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
- Learn to cycle: Time-consistent feature discovery for action recognition [83.43682368129072]
Generalizing over temporal variations is a prerequisite for effective action recognition in videos.
We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors temporal activations with potential variations.
We show consistent improvements when using SRTG blocks, with only a minimal increase in the number of GFLOs.
arXiv Detail & Related papers (2020-06-15T09:36:28Z)
- Neural Networks Fail to Learn Periodic Functions and How to Fix It [6.230751621285322]
We prove and demonstrate experimentally that standard activation functions, such as ReLU, tanh, and sigmoid, fail to learn to extrapolate simple periodic functions.
We propose a new activation, $x + \sin^2(x)$, which achieves the desired periodic inductive bias for learning periodic functions.
Experimentally, we apply the proposed method to temperature and financial data prediction.
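This $x + \sin^2(x)$ activation (the fixed-frequency case of what the authors call Snake) is a one-liner; a minimal sketch:

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake activation f(x) = x + sin(x)^2, with the frequency fixed at 1.
    The identity term preserves gradient flow; the squared-sine term supplies
    the periodic inductive bias needed for extrapolation."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.sin(x) ** 2
```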
arXiv Detail & Related papers (2020-06-15T07:49:33Z)
- A survey on modern trainable activation functions [0.0]
We propose a taxonomy of trainable activation functions and highlight common and distinctive properties of recent and past models.
We show that many of the proposed approaches are equivalent to adding neuron layers which use fixed (non-trainable) activation functions.
arXiv Detail & Related papers (2020-05-02T12:38:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.