exploRNN: Understanding Recurrent Neural Networks through Visual Exploration
- URL: http://arxiv.org/abs/2012.06326v1
- Date: Wed, 9 Dec 2020 15:06:01 GMT
- Title: exploRNN: Understanding Recurrent Neural Networks through Visual Exploration
- Authors: Alex Bäuerle, Raphael Störk, and Timo Ropinski
- Abstract summary: Recurrent neural networks (RNNs) are capable of processing sequential data.
We propose exploRNN, the first interactively explorable educational visualization for RNNs.
We provide an overview of the training process of RNNs at a coarse level, while also allowing detailed inspection of the data-flow within LSTM cells.
- Score: 6.006493809079212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the success of deep learning and its growing job market, students and
researchers from many areas are getting interested in learning about deep
learning technologies. Visualization has proven to be of great help during this
learning process, while most current educational visualizations are targeted
towards one specific architecture or use case. Unfortunately, recurrent neural
networks (RNNs), which are capable of processing sequential data, are not
covered yet, despite the fact that tasks on sequential data, such as text and
function analysis, are at the forefront of deep learning research. Therefore,
we propose exploRNN, the first interactively explorable, educational
visualization for RNNs. exploRNN allows for interactive experimentation with
RNNs, and provides in-depth information on their functionality and behavior
during training. By defining educational objectives targeted towards
understanding RNNs, and using these as guidelines throughout the visual design
process, we have designed exploRNN to communicate the most important concepts
of RNNs directly within a web browser. By means of exploRNN, we provide an
overview of the training process of RNNs at a coarse level, while also allowing
detailed inspection of the data-flow within LSTM cells. Within this paper, we
motivate our design of exploRNN, detail its realization, and discuss the
results of a user study investigating the benefits of exploRNN.
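As a rough textual counterpart to the cell-level view exploRNN renders, the following NumPy sketch steps through the standard LSTM data flow (gates, cell state, hidden state) for one time step. It shows the textbook formulation only; the names, dimensions, and code are illustrative and do not come from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (standard formulation).

    x: input (d,), h_prev/c_prev: previous hidden/cell state (n,),
    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,) biases.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # all four gate pre-activations at once
    i = sigmoid(z[0*n:1*n])          # input gate
    f = sigmoid(z[1*n:2*n])          # forget gate
    g = np.tanh(z[2*n:3*n])          # candidate cell update
    o = sigmoid(z[3*n:4*n])          # output gate
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state
    return h, c

# Tiny usage example with random parameters and a length-5 input sequence.
rng = np.random.default_rng(0)
d, n = 3, 4
h, c = np.zeros(n), np.zeros(n)
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
for t in range(5):
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
```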
Related papers
- Investigating Sparsity in Recurrent Neural Networks [0.0]
This thesis investigates how pruning, and sparse recurrent neural networks more generally, affect the performance of RNNs.
We first describe the pruning of RNNs, its impact on performance, and the number of training epochs required to regain accuracy after pruning.
Next, we create and train sparse recurrent neural networks and relate their performance to the graph properties of their underlying arbitrary structures.
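The thesis's exact pruning procedure is not reproduced in this summary; as a generic illustration of the idea, the sketch below applies magnitude pruning, zeroing the smallest-magnitude entries of a recurrent weight matrix (the 90% sparsity level and all names are illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of entries with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0  # ties may prune slightly more than k
    return pruned

# Prune 90% of a recurrent weight matrix; training would then continue
# for additional epochs to recover the accuracy lost to pruning.
W_hh = np.random.default_rng(1).normal(size=(128, 128))
W_sparse = magnitude_prune(W_hh, 0.9)
print(np.mean(W_sparse == 0.0))  # ~0.9
```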
arXiv Detail & Related papers (2024-07-30T07:24:58Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCNs) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture of GCNs, and we show that their performance transfers across datasets whose covariance matrices converge to a limit object.
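In this reading, a VNN layer amounts to a polynomial filter in the sample covariance matrix followed by a pointwise nonlinearity. A minimal NumPy sketch of that idea (not the authors' code; the filter taps and dimensions are arbitrary illustrative values):

```python
import numpy as np

def covariance_filter(x, C, taps):
    """Apply a polynomial filter H(C) x = sum_k taps[k] * C^k x."""
    out = np.zeros_like(x)
    z = x.copy()
    for h_k in taps:
        out += h_k * z   # accumulate h_k * C^k x
        z = C @ z        # advance to the next power of C
    return out

def vnn_layer(x, C, taps):
    """One coVariance-network layer: covariance filter + ReLU nonlinearity."""
    return np.maximum(covariance_filter(x, C, taps), 0.0)

# Estimate a sample covariance "graph" from data, then filter a feature vector.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))   # 200 samples, 10 features
C = np.cov(X, rowvar=False)      # 10x10 sample covariance matrix
y = vnn_layer(X[0], C, taps=[0.5, 0.3, 0.2])
```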
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient [11.0542573074431]
Spiking Neural Networks (SNNs) are recognized as a candidate for next-generation neural networks due to their bio-plausibility and energy efficiency.
Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training.
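Surrogate-gradient training sidesteps the non-differentiable spike: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth surrogate derivative. A minimal PyTorch sketch, assuming a fast-sigmoid surrogate (one common choice; this summary does not specify the paper's exact surrogate):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # spike if potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid, used in place of the Heaviside's
        # zero-almost-everywhere true derivative.
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

v = torch.randn(8, requires_grad=True)  # membrane potentials minus threshold
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()                 # gradients flow through the surrogate
print(v.grad)
```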
arXiv Detail & Related papers (2023-04-25T19:08:29Z)
- Making a Spiking Net Work: Robust brain-like unsupervised machine learning [0.0]
Spiking Neural Networks (SNNs) are an alternative to Artificial Neural Networks (ANNs), but they struggle with dynamical stability and cannot match the accuracy of ANNs.
We show how an SNN can overcome many of the shortcomings that have been identified in the literature.
arXiv Detail & Related papers (2022-08-02T02:10:00Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement, together with a full data learning method that makes full use of the training data.
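For context, a TDNN layer is commonly implemented as a dilated 1-D convolution over the time axis. The sketch below shows only that building block, with made-up layer sizes; it does not reproduce the paper's architecture or its full data learning scheme:

```python
import torch
import torch.nn as nn

class TDNNLayer(nn.Module):
    """Time-delay layer: a dilated 1-D convolution over the time axis."""

    def __init__(self, in_dim, out_dim, context=3, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=context,
                              dilation=dilation,
                              padding=dilation * (context // 2))
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, features, time)
        return self.act(self.conv(x))

# Stack layers with growing dilation so deeper layers see wider temporal context.
net = nn.Sequential(TDNNLayer(40, 64, dilation=1),
                    TDNNLayer(64, 64, dilation=2),
                    TDNNLayer(64, 40, dilation=4))
spectrogram = torch.randn(1, 40, 100)  # (batch, mel bins, frames)
enhanced = net(spectrogram)
```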
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- An Efficient Spiking Neural Network for Recognizing Gestures with a DVS Camera on the Loihi Neuromorphic Processor [12.118084418840152]
Spiking Neural Networks (SNNs) have come under the spotlight for machine learning based applications.
We show our methodology for the design of an SNN that achieves nearly the same accuracy as its corresponding Deep Neural Network (DNN).
Our SNN achieves 89.64% classification accuracy and occupies only 37 Loihi cores.
arXiv Detail & Related papers (2020-05-16T17:00:10Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
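This summary does not give the paper's filter schedule; as a generic illustration of the idea, feature maps can be smoothed with a Gaussian kernel whose width is annealed toward zero over training (the schedule and sizes below are hypothetical):

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    """Separable 2-D Gaussian smoothing kernel of shape (size, size)."""
    x = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)

def smooth_features(feats, sigma, size=5):
    """Low-pass filter each channel of a (batch, C, H, W) feature map."""
    c = feats.shape[1]
    k = gaussian_kernel(sigma, size).view(1, 1, size, size).repeat(c, 1, 1, 1)
    return F.conv2d(feats, k, padding=size // 2, groups=c)  # depthwise blur

feats = torch.randn(2, 16, 32, 32)        # stand-in for a CNN feature map
for epoch in range(5):
    sigma = max(0.05, 1.0 - 0.2 * epoch)  # hypothetical annealing schedule
    blurred = smooth_features(feats, sigma)
```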
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.