A journey in ESN and LSTM visualisations on a language task
- URL: http://arxiv.org/abs/2012.01748v2
- Date: Sun, 13 Dec 2020 16:06:12 GMT
- Title: A journey in ESN and LSTM visualisations on a language task
- Authors: Alexandre Variengien and Xavier Hinaut
- Abstract summary: We trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task.
The results are of three kinds: performance comparison, internal dynamics analyses and visualization of latent space.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Echo State Networks (ESN) and Long Short-Term Memory networks (LSTM) are two popular Recurrent Neural Network (RNN) architectures for solving machine learning tasks involving sequential data. However, little has been done to compare their performance and internal mechanisms on a common task. In this work, we trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task. This task aims at modelling how infants learn language: they create associations between words and visual stimuli in order to extract meaning from
words and sentences. The results are of three kinds: performance comparison,
internal dynamics analyses and visualization of latent space. (1) We found that
both models were able to successfully learn the task: the LSTM reached the
lowest error for the basic corpus, but the ESN was quicker to train.
Furthermore, the ESN was able to outperform LSTMs on more challenging datasets without any further tuning. (2) We also conducted an analysis of the internal unit activations of LSTMs and ESNs. Despite the deep differences between the two models (trained vs. fixed internal weights), we were able to uncover similar inner mechanisms: both put emphasis on the units encoding aspects of the sentence structure. (3) Moreover, we present Recurrent States Space Visualisations (RSSviz), a method to visualize the structure of the latent state space of RNNs, based on dimensionality reduction (using UMAP). This technique
enables us to observe a fractal embedding of sequences in the LSTM. RSSviz is
also useful for the analysis of ESNs (i) to spot difficult examples and (ii) to
generate animated plots showing the evolution of activations across learning
stages. Finally, we explore qualitatively how RSSviz could provide an intuitive visualisation for understanding the influence of hyperparameters on the reservoir dynamics prior to ESN training.
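To make the RSSviz idea concrete, below is a minimal sketch of the pipeline the abstract describes: run a recurrent network over input sequences, collect its hidden states, and project them to 2D with UMAP. It assumes the standard leaky Echo State Network update (fixed random recurrent weights; only a linear readout would be trained) and the umap-learn package; the random one-hot corpus, network sizes and hyperparameters are illustrative placeholders, not the paper's actual CSL setup.

```python
# RSSviz-style sketch: collect ESN states, project them with UMAP.
import numpy as np
import umap  # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_inputs, n_reservoir, leak, rho = 50, 300, 0.3, 0.9

# Fixed (untrained) reservoir weights, rescaled to spectral radius rho.
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= rho / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, size=(n_reservoir, n_inputs))

def reservoir_states(inputs):
    """Leaky ESN update x <- (1-a)x + a*tanh(W x + W_in u); return every state."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Placeholder corpus: 200 "sentences" of 10 random one-hot "words" each.
sequences = [np.eye(n_inputs)[rng.integers(0, n_inputs, size=10)]
             for _ in range(200)]
all_states = np.vstack([reservoir_states(seq) for seq in sequences])

# Project the recurrent state space to 2D and colour each point by the
# position of the corresponding word within its sentence.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(all_states)
position = np.tile(np.arange(10), len(sequences))
plt.scatter(embedding[:, 0], embedding[:, 1], c=position, s=3, cmap="viridis")
plt.colorbar(label="word position in sentence")
plt.title("RSSviz-style UMAP projection of reservoir states")
plt.show()
```

With states saved at several training stages of the readout (or taken from an LSTM's hidden vectors instead), the same projection step would yield the animated, across-stage views the abstract mentions.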
Related papers
- Context Gating in Spiking Neural Networks: Achieving Lifelong Learning through Integration of Local and Global Plasticity [20.589970453110208]
Humans learn multiple tasks in succession with minimal mutual interference, through the context gating mechanism in the prefrontal cortex (PFC).
We propose an SNN with context gating trained by a local plasticity rule (CG-SNN) for lifelong learning.
Experiments show that the proposed model is effective in maintaining the past learning experience and has better task-selectivity than other methods during lifelong learning.
arXiv Detail & Related papers (2024-06-04T01:35:35Z)
- Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting [52.47493322446537]
We develop an adaptive, interpretable and scalable forecasting framework that seeks to individually model each component of the spatial-temporal patterns.
SCNN works with a pre-defined generative process of multivariate time series (MTS), which arithmetically characterizes the latent structure of the spatial-temporal patterns.
Extensive experiments are conducted to demonstrate that SCNN can achieve superior performance over state-of-the-art models on three real-world datasets.
arXiv Detail & Related papers (2023-05-22T13:39:44Z)
- Towards Energy-Efficient, Low-Latency and Accurate Spiking LSTMs [1.7969777786551424]
Spiking Neural Networks (SNNs) have emerged as an attractive spatio-temporal computing paradigm for complex vision tasks.
We propose an optimized spiking long short-term memory (LSTM) training framework that involves a novel ANN-to-SNN conversion framework, followed by SNN training.
We evaluate our framework on sequential learning tasks including temporal MNIST, Google Speech Commands (GSC), and UCI Smartphone datasets on different LSTM architectures.
arXiv Detail & Related papers (2022-10-23T04:10:27Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble [71.97020373520922]
Sign language is commonly used by deaf or mute people to communicate.
We propose a novel Multi-modal Framework with a Global Ensemble Model (GEM) for isolated Sign Language Recognition (SLR).
Our proposed SAM-SLR-v2 framework is exceedingly effective and achieves state-of-the-art performance with significant margins.
arXiv Detail & Related papers (2021-10-12T16:57:18Z)
- Estimating Reproducible Functional Networks Associated with Task Dynamics using Unsupervised LSTMs [4.697267141773321]
We propose a method for estimating more reproducible functional networks associated with task activity by using recurrent neural networks with long short-term memory (LSTM).
The LSTM model is trained in an unsupervised manner to generate the functional magnetic resonance imaging (fMRI) time-series data in regions of interest.
We demonstrate that the functional networks learned by the LSTM model are more strongly associated with the task activity and dynamics compared to other approaches.
arXiv Detail & Related papers (2021-05-06T17:53:22Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Evaluating the Effectiveness of Efficient Neural Architecture Search for Sentence-Pair Tasks [14.963150544536203]
Neural Architecture Search (NAS) methods have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks.
In this work, we explore the applicability of a SOTA NAS algorithm, Efficient Neural Architecture Search (ENAS), to two sentence-pair tasks.
arXiv Detail & Related papers (2020-10-08T20:26:34Z)
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful deep learning alternative.
DenseLSTMs outperform residual and regular LSTMs, and offer higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.