Temporal Convolutional Attention-based Network For Sequence Modeling
- URL: http://arxiv.org/abs/2002.12530v3
- Date: Sat, 14 Oct 2023 03:48:04 GMT
- Title: Temporal Convolutional Attention-based Network For Sequence Modeling
- Authors: Hongyan Hao, Yan Wang, Siqiao Xue, Yudi Xia, Jian Zhao, Furao Shen
- Abstract summary: We propose an exploratory architecture, referred to as the Temporal Convolutional Attention-based Network (TCAN).
TCAN combines a temporal convolutional network with an attention mechanism.
We improve the state-of-the-art results to a perplexity of 30.28 on word-level PTB, 1.092 bpc on character-level PTB, and 9.20 on WikiText-2.
- Score: 13.972755301732656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of feed-forward models, the default choice for
sequence modeling has gradually shifted away from recurrent networks. Many
powerful feed-forward models based on convolutional networks and attention
mechanisms have been proposed and show great potential for sequence modeling
tasks. We ask whether there is an architecture that can not only serve as an
approximate substitute for recurrent networks but also absorb the advantages of
feed-forward models. We therefore propose an exploratory architecture, referred
to as the Temporal Convolutional Attention-based Network (TCAN), which combines
a temporal convolutional network with an attention mechanism. TCAN consists of
two parts: Temporal Attention (TA), which captures relevant features inside the
sequence, and Enhanced Residual (ER), which extracts important information from
shallow layers and transfers it to deep layers. We improve the state-of-the-art
results to a perplexity of 30.28 on word-level PTB, 1.092 bpc on
character-level PTB, and 9.20 on WikiText-2.
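The two ideas above can be sketched in NumPy: causal Temporal Attention, where step t attends only to steps up to t, plus an Enhanced-Residual-style skip that re-injects shallow features into deeper layers. The single-head, projection-free attention and the identity skip are simplifying assumptions, not the paper's exact parameterization.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(x):
    """Causal self-attention over a (T, d) sequence: step t attends
    only to steps <= t, so no future information leaks backward."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                    # (T, T) similarities
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                           # hide future positions
    return softmax(scores, axis=-1) @ x

def tcan_block(x):
    """One layer: TA output plus a shallow-to-deep skip connection
    (Enhanced Residual modeled here as a plain identity skip)."""
    return temporal_attention(x) + x
```

A quick sanity check of causality: perturbing the last timestep must leave the outputs at all earlier timesteps unchanged.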
Related papers
- Towards Efficient Deep Spiking Neural Networks Construction with Spiking Activity based Pruning [17.454100169491497]
We propose a structured pruning approach based on the activity levels of convolutional kernels named Spiking Channel Activity-based (SCA) network pruning framework.
Inspired by synaptic plasticity mechanisms, our method dynamically adjusts the network's structure by pruning and regenerating convolutional kernels during training, enhancing the model's adaptation to the current target task.
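The activity-ranking step of such pruning can be sketched as follows. The binary-spike input format is an assumption, and the kernel-regeneration part of the SCA framework is omitted:

```python
import numpy as np

def prune_by_activity(activations, keep_ratio=0.5):
    """Rank channels by mean spiking activity and keep the most active.

    activations: (channels, timesteps) array of spike events.
    Returns the sorted indices of the channels to retain.
    """
    activity = activations.mean(axis=1)       # per-channel firing rate
    k = max(1, int(len(activity) * keep_ratio))
    keep = np.argsort(activity)[::-1][:k]     # most active channels first
    return np.sort(keep)
```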
arXiv Detail & Related papers (2024-06-03T07:44:37Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
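The "TC" stream's CWT step can be sketched as stacking wavelet responses at several scales into a 2D (scale x time) tensor suitable for a 2D network. The Ricker wavelet and its normalization here are illustrative assumptions; the summary does not specify TCCT-Net's exact wavelet choice.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled at `points` locations, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_tensor(signal, widths):
    """Convolve the signal with wavelets at several widths and stack the
    responses into a (scales, time) tensor."""
    return np.stack([
        np.convolve(signal, ricker(min(10 * w, len(signal)), w), mode='same')
        for w in widths
    ])
```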
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling [4.190836962132713]
This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms.
At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on the input sequence.
We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality.
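The core idea, a sequence-length convolution whose kernel depends on the input and is applied in O(L log L) time via the FFT, can be sketched as below. The simple smoothing used to condition the kernel is a placeholder assumption, not Orchid's actual conditioning network:

```python
import numpy as np

def data_dependent_global_conv(x):
    """Circular global convolution with an input-conditioned kernel.

    The kernel is derived from the input itself (here, a 3-tap moving
    average of x as a stand-in for a learned conditioning network), then
    the length-L convolution is computed as an elementwise product in
    the frequency domain.
    """
    kernel = np.convolve(x, np.ones(3) / 3, mode='same')  # input-conditioned
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)))
```

The FFT route matters because a naive data-dependent global convolution costs O(L^2), the same bottleneck as full attention.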
arXiv Detail & Related papers (2024-02-28T17:36:45Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- SpatioTemporal Focus for Skeleton-based Action Recognition [66.8571926307011]
Graph convolutional networks (GCNs) are widely adopted in skeleton-based action recognition.
We argue that the performance of recently proposed skeleton-based action recognition methods is limited by several factors.
Inspired by the recent attention mechanism, we propose a multi-grain contextual focus module, termed MCF, to capture the action associated relation information.
arXiv Detail & Related papers (2022-03-31T02:45:24Z)
- TMS: A Temporal Multi-scale Backbone Design for Speaker Embedding [60.292702363839716]
Current SOTA backbone networks for speaker embedding are designed to aggregate multi-scale features from an utterance with multi-branch network architectures for speaker representation.
We propose an effective temporal multi-scale (TMS) model in which multi-scale branches can be efficiently added to a speaker embedding network with almost no increase in computational cost.
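One common way to get multi-scale temporal features at near-constant cost, in the spirit of the TMS design, is to split the channels across parallel branches with different receptive fields and concatenate the results. The moving-average "convolutions" and the equal channel split below are simplifying assumptions:

```python
import numpy as np

def tms_branch_split(x, kernel_sizes=(1, 3, 5)):
    """Route disjoint channel groups of a (C, T) feature map through
    temporal branches with different kernel sizes, then re-concatenate.

    Because each channel passes through exactly one branch, total compute
    stays close to that of a single-branch network.
    """
    chunks = np.array_split(x, len(kernel_sizes), axis=0)
    outs = []
    for chunk, k in zip(chunks, kernel_sizes):
        kern = np.ones(k) / k                       # k-tap temporal filter
        outs.append(np.stack([np.convolve(row, kern, mode='same')
                              for row in chunk]))
    return np.concatenate(outs, axis=0)
```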
arXiv Detail & Related papers (2022-03-17T05:49:35Z)
- Multi-Scale Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition [140.18376685167857]
A simple yet effective multi-scale semantics-guided neural network is proposed for skeleton-based action recognition.
MS-SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets.
arXiv Detail & Related papers (2021-11-07T03:50:50Z)
- Deep Structural Point Process for Learning Temporal Interaction Networks [7.288706620982159]
A temporal interaction network consists of a series of chronological interactions between users and items.
Previous methods fail to consider the structural information of temporal interaction networks and inevitably lead to sub-optimal results.
We propose a novel Deep Structural Point Process termed as DSPP for learning temporal interaction networks.
arXiv Detail & Related papers (2021-07-08T03:07:34Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- TSAM: Temporal Link Prediction in Directed Networks based on Self-Attention Mechanism [2.5144068869465994]
We propose a deep learning model based on graph convolutional networks (GCN) and a self-attention mechanism, namely TSAM.
We run comparative experiments on four realistic networks to validate the effectiveness of TSAM.
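The GCN propagation that models like TSAM build on can be sketched as symmetric-normalized neighborhood averaging. The learnable weight matrix and the self-attention layer are omitted here as simplifications:

```python
import numpy as np

def gcn_layer(A, H):
    """One propagation step D^{-1/2} (A + I) D^{-1/2} H of a GCN.

    A: (N, N) adjacency matrix; H: (N, F) node features.
    Self-loops are added so each node retains its own features, and the
    symmetric degree normalization keeps the spectrum well-behaved.
    """
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(axis=1)                      # augmented degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H
```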
arXiv Detail & Related papers (2020-08-23T11:56:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.