RTFN: A Robust Temporal Feature Network for Time Series Classification
- URL: http://arxiv.org/abs/2011.11829v2
- Date: Tue, 29 Dec 2020 02:23:58 GMT
- Authors: Zhiwen Xiao, Xin Xu, Huanlai Xing, Shouxi Luo, Penglin Dai, Dawei Zhan
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series data usually contain both local and global patterns. Most existing feature networks focus on local features rather than the relationships among them, which are equally important yet more difficult to explore. Obtaining sufficient representations with a feature network therefore remains challenging. To this end, we propose a novel robust temporal feature
network (RTFN) for feature extraction in time series classification, containing
a temporal feature network (TFN) and an LSTM-based attention network (LSTMaN).
TFN is a residual structure with multiple convolutional layers. It functions as
a local-feature extraction network to mine sufficient local features from data.
LSTMaN is composed of two identical layers, where attention and long short-term
memory (LSTM) networks are hybridized. This network acts as a relation
extraction network to discover the intrinsic relationships among the extracted
features at different positions in sequential data. In experiments, we embed
RTFN into a supervised structure as a feature extractor and into an
unsupervised structure as an encoder, respectively. The results show that the
RTFN-based structures achieve excellent supervised and unsupervised performance
on a large number of UCR2018 and UEA2018 datasets.
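The division of labor described in the abstract — convolutional extraction of local features, followed by attention that relates features at different positions — can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the kernel size, filter count, and the use of plain self-attention in place of the LSTM-attention hybrid (LSTMaN) are all assumptions.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (T, C_in), w (k, C_in, C_out), b (C_out)."""
    k = w.shape[0]
    t_out = x.shape[0] - k + 1
    out = np.empty((t_out, w.shape[2]))
    for t in range(t_out):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def self_attention(h):
    """Scaled dot-product self-attention over time steps: h (T, d) -> (T, d)."""
    scores = h @ h.T / np.sqrt(h.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ h

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 1))            # univariate series, 128 time steps
w = rng.standard_normal((5, 1, 8)) * 0.1     # kernel size 5, 8 filters (assumed)
b = np.zeros(8)

local = conv1d(x, w, b)          # local features, one vector per position
related = self_attention(local)  # pairwise relations among those positions
```

The key point the sketch captures is that the relation-extraction stage operates on the already-extracted local features, weighting every position against every other, rather than on the raw series.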
Related papers
- Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance [7.142235510048155]
We prune recurrent networks such as RNNs and LSTMs, maintaining a large spectral gap of the underlying graphs.
We also study the time unfolded recurrent network graphs in terms of the properties of their bipartite layers.
arXiv Detail & Related papers (2024-03-17T06:08:08Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) it learns hierarchical multi-scale representations from time series data, (2) it inherits the strengths of both transformers and convolutional networks, and (3) it tackles the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- A^2-FPN: Attention Aggregation based Feature Pyramid Network for Instance Segmentation [68.10621089649486]
We propose Attention Aggregation based Feature Pyramid Network (A2-FPN) to improve multi-scale feature learning.
A2-FPN achieves an improvement of 2.0% and 1.4% mask AP when integrated into the strong baselines such as Cascade Mask R-CNN and Hybrid Task Cascade.
arXiv Detail & Related papers (2021-05-07T11:51:08Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Hybrid Backpropagation Parallel Reservoir Networks [8.944918753413827]
We propose a novel hybrid network, which combines the effectiveness of learning random temporal features of reservoirs with the readout power of a deep neural network with batch normalization.
We demonstrate that our new network outperforms LSTMs and GRUs, including multi-layer "deep" versions of these networks.
We also show that a novel meta-ring structure, which we call HBP-ESN M-Ring, achieves performance similar to one large reservoir while decreasing the memory required by an order of magnitude.
arXiv Detail & Related papers (2020-10-27T21:03:35Z)
- RTFN: Robust Temporal Feature Network [10.304629792265223]
We propose a novel robust temporal feature network (RTFN) that contains temporal feature networks and attentional LSTM networks.
The basic temporal networks are built to capture complicated shapelets and relationships to enrich data.
In experiments, we embed RTFN into a supervised structure as a feature network and into unsupervised clustering as an encoder.
arXiv Detail & Related papers (2020-08-18T02:43:30Z)
- Instance Explainable Temporal Network For Multivariate Timeseries [0.0]
We propose a novel network (IETNet) that identifies the important channels in the classification decision for each instance of inference.
IETNet is an end-to-end network that combines temporal feature extraction, variable selection, and joint variable interaction into a single learning framework.
arXiv Detail & Related papers (2020-05-26T20:55:24Z)
- Temporal Pyramid Network for Action Recognition [129.12076009042622]
We propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks.
TPN shows consistent improvements over other challenging baselines on several action recognition datasets.
arXiv Detail & Related papers (2020-04-07T17:17:23Z)
- Temporally Distributed Networks for Fast Video Semantic Segmentation [64.5330491940425]
TDNet is a temporally distributed network designed for fast and accurate video semantic segmentation.
We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks.
Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency.
arXiv Detail & Related papers (2020-04-03T22:43:32Z)
- Depth Enables Long-Term Memory for Recurrent Neural Networks [0.0]
We introduce a measure of the network's ability to support information flow across time, referred to as the Start-End separation rank.
We prove that deep recurrent networks support Start-End separation ranks which are higher than those supported by their shallow counterparts.
arXiv Detail & Related papers (2020-03-23T10:29:14Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by exploiting hierarchical features fully from all the convolution layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.