Interpretation of Deep Temporal Representations by Selective
Visualization of Internally Activated Nodes
- URL: http://arxiv.org/abs/2004.12538v2
- Date: Fri, 10 Jul 2020 05:08:29 GMT
- Title: Interpretation of Deep Temporal Representations by Selective
Visualization of Internally Activated Nodes
- Authors: Sohee Cho, Ginkyeng Lee, Wonjoon Chang and Jaesik Choi
- Abstract summary: We propose two new frameworks to visualize temporal representations learned from deep neural networks.
Our algorithm interprets the decisions of temporal neural networks by extracting highly activated periods.
We characterize such sub-sequences with clustering and calculate the uncertainty between the suggested type and the actual data.
- Score: 24.228613156037532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep neural networks have demonstrated competitive
performance in classification and regression tasks on many kinds of temporal or
sequential data. However, it is still hard to understand the classification
mechanisms of temporal deep neural networks. In this paper, we propose two new
frameworks to visualize the temporal representations learned by deep neural
networks. Given input data and output, our algorithm interprets the decision of
a temporal neural network by extracting highly activated periods and
visualizing the sub-sequences of input data that contribute to activating the
units. Furthermore, we characterize such sub-sequences with clustering and
calculate the uncertainty between the suggested type and the actual data. We
also propose computing Layer-wise Relevance from the output of a unit, rather
than from the final output, with backward Monte-Carlo dropout, to show the
relevance score of each input point in activating the units, together with a
visual representation of the uncertainty about this impact.
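As a concrete illustration of the first framework, the two steps above (finding a unit's highly activated periods, then clustering the contributing sub-sequences) can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the sliding-window scoring, the use of k-means, and centroid distance as an uncertainty proxy are all assumptions made here for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def highly_activated_periods(activations, top_k=5, window=16):
    """Start indices of the top-k windows ranked by mean unit activation."""
    # Mean activation over each sliding window of length `window`.
    window_means = np.convolve(activations, np.ones(window) / window, mode="valid")
    return np.argsort(window_means)[-top_k:][::-1]

def cluster_subsequences(inputs, starts, window=16, n_clusters=2, seed=0):
    """Cluster the input sub-sequences that most activate a unit.

    Returns cluster labels (the suggested "types") and each segment's
    distance to its centroid, used here as a crude uncertainty proxy.
    """
    segments = np.stack([inputs[s:s + window] for s in starts])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(segments)
    return km.labels_, np.min(km.transform(segments), axis=1)

# Toy usage on synthetic data: a fake unit activation derived from the input.
rng = np.random.default_rng(0)
x = rng.normal(size=512)
act = np.maximum(0.0, np.convolve(x, np.ones(8), mode="same"))
starts = highly_activated_periods(act, top_k=6)
labels, uncertainty = cluster_subsequences(x, starts)
print(starts, labels, uncertainty.round(2))
```

The second framework, relevance with backward Monte-Carlo dropout, can be approximated in the same spirit: repeat a relevance computation under random dropout masks and report the mean and spread. The hedged PyTorch sketch below substitutes plain input gradients of a unit's output for Layer-wise Relevance Propagation; `mc_relevance` and the toy model are hypothetical names, not the paper's code.

```python
import torch

def mc_relevance(model, x, unit_index, n_samples=30):
    """Mean and std of input-gradient relevance for one unit's output.

    Dropout is kept active (model.train()) so repeated backward passes
    sample the uncertainty of each input point's contribution.
    """
    model.train()  # keep dropout stochastic, as in MC dropout
    scores = []
    for _ in range(n_samples):
        xi = x.clone().requires_grad_(True)
        out = model(xi)
        out.reshape(-1)[unit_index].backward()
        scores.append(xi.grad.detach().abs())
    scores = torch.stack(scores)
    return scores.mean(dim=0), scores.std(dim=0)

# Toy 1-D "temporal" model with dropout, applied to a random series.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 4, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.2),
    torch.nn.Flatten(),
)
mean_rel, std_rel = mc_relevance(model, torch.randn(1, 1, 64), unit_index=10)
```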
Related papers
- MTS2Graph: Interpretable Multivariate Time Series Classification with
Temporal Evolving Graphs [1.1756822700775666]
We introduce a new framework for interpreting time series data by extracting and clustering representative input patterns.
We run experiments on eight datasets of the UCR/UEA archive, along with the HAR and PAM datasets.
arXiv Detail & Related papers (2023-06-06T16:24:27Z)
- Scalable Spatiotemporal Graph Neural Networks [14.415967477487692]
Graph neural networks (GNNs) are often the core component of the forecasting architecture.
In most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor in the length of the sequence, times the number of links in the graph (see the formula after this entry).
We propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics.
arXiv Detail & Related papers (2022-09-14T09:47:38Z)
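The complexity claim above is easier to parse written out. Under one plausible reading, made here as an assumption rather than taken from the paper, each layer attends over all T time steps pairwise and propagates messages along each of the |E| links, giving a per-layer cost of

```latex
% Assumed reading: quadratic in sequence length T, linear in the number of links |E|.
\mathcal{O}\left( T^{2} \cdot |E| \right)
```

which is the factor the proposed scalable encoding is designed to avoid.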
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN); a toy GAIN-style training loop is sketched after this entry.
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
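As flagged above, here is a toy GAIN-style training loop. It is a heavily simplified, fully-connected stand-in for the convolutional method named in this entry: the layer sizes, the absence of GAIN's hint mechanism, and the loss weighting are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

d = 8  # number of features per sample (illustrative)
gen = nn.Sequential(nn.Linear(2 * d, 32), nn.ReLU(), nn.Linear(32, d))
disc = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

x = torch.randn(256, d)                    # toy "complete" data
mask = (torch.rand_like(x) > 0.3).float()  # 1 = observed, 0 = missing

for step in range(200):
    noise = torch.randn_like(x)
    x_in = mask * x + (1 - mask) * noise          # noise fills missing slots
    x_hat = gen(torch.cat([x_in, mask], dim=1))   # generator's guesses
    x_imp = mask * x + (1 - mask) * x_hat         # keep observed values as-is

    # Discriminator learns to predict the mask (observed vs. imputed).
    loss_d = bce(disc(x_imp.detach()), mask).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: make imputed entries look observed, and reconstruct
    # the entries that were actually observed.
    adv = (bce(disc(x_imp), torch.ones_like(mask)) * (1 - mask)).mean()
    rec = ((x_hat - x) ** 2 * mask).mean()
    loss_g = adv + 10.0 * rec
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

imputed = mask * x + (1 - mask) * x_hat.detach()  # final imputation
```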
- Exploring the Properties and Evolution of Neural Network Eigenspaces during Training [0.0]
We show that problem difficulty and neural network capacity affect the predictive performance in an antagonistic manner.
We show that the observed effects are independent of previously reported pathological patterns, such as the "tail pattern".
arXiv Detail & Related papers (2021-06-17T14:18:12Z)
- Variational Structured Attention Networks for Deep Visual Representation Learning [49.80498066480928]
We propose a unified deep framework to jointly learn both spatial attention maps and channel attention in a principled manner.
Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework.
We implement the inference rules within the neural network, thus allowing for end-to-end learning of the probabilistic parameters and the CNN front-end parameters; a toy version of joint spatial-and-channel gating is sketched after this entry.
arXiv Detail & Related papers (2021-03-05T07:37:24Z)
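As flagged above, here is a toy version of joint spatial and channel attention. This deterministic sigmoid gating is a simplified stand-in for the paper's variational formulation; the 1x1-conv spatial map and the squeeze-and-excitation-style channel branch are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class JointAttention(nn.Module):
    """Toy joint spatial + channel gating over a feature map (B, C, H, W)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Spatial branch: 1x1 conv collapses channels to one map per location.
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)
        # Channel branch: squeeze-and-excitation-style bottleneck MLP.
        self.channel = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = torch.sigmoid(self.spatial(x))                   # (B, 1, H, W)
        c = torch.sigmoid(self.channel(x.mean(dim=(2, 3))))  # (B, C)
        return x * s * c[:, :, None, None]                   # jointly gated

feats = torch.randn(2, 16, 8, 8)
print(JointAttention(16)(feats).shape)  # torch.Size([2, 16, 8, 8])
```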
- Ensemble perspective for understanding temporal credit assignment [1.9843222704723809]
We show that each individual connection in recurrent neural networks is modeled by a spike-and-slab distribution, rather than a precise weight value (a sampling sketch follows this entry).
Our model reveals important connections that determine the overall performance of the network.
It is thus promising to study the temporal credit assignment in recurrent neural networks from the ensemble perspective.
arXiv Detail & Related papers (2021-02-07T08:14:05Z)
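As flagged above, a sampling sketch of the spike-and-slab weight model: each connection is exactly zero with probability 1 - pi (the spike) and otherwise drawn from a Gaussian (the slab). The parameter values are made up for illustration.

```python
import numpy as np

def sample_spike_and_slab(pi, mu, sigma, size, rng):
    """Sample weights w ~ pi * N(mu, sigma^2) + (1 - pi) * delta(0)."""
    on = rng.random(size) < pi           # which connections are active
    slab = rng.normal(mu, sigma, size)   # Gaussian slab values
    return np.where(on, slab, 0.0)

rng = np.random.default_rng(0)
w = sample_spike_and_slab(pi=0.3, mu=0.0, sigma=0.5, size=(8, 8), rng=rng)
print((w != 0).mean())  # fraction of active connections, close to pi
```

Connections whose inferred pi stays high across the ensemble are the ones the summary calls important for overall performance.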
- Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of the predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step (a Gaussian estimate of this quantity is sketched after this entry).
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
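As flagged above, a Gaussian estimate of the predictive information between past and future latent windows. Under a joint Gaussian assumption the mutual information has a closed form in covariance log-determinants; the window length and the Gaussian assumption itself are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def gaussian_predictive_information(z, window=4, eps=1e-6):
    """Estimate I(past; future) for a latent sequence z of shape (T, d),
    assuming joint Gaussianity of concatenated past/future windows:
    I = 0.5 * (log|C_past| + log|C_future| - log|C_joint|).
    """
    T, d = z.shape
    pairs = np.stack([
        np.concatenate([z[t - window:t].ravel(), z[t:t + window].ravel()])
        for t in range(window, T - window + 1)
    ])
    k = window * d
    cov = np.cov(pairs, rowvar=False) + eps * np.eye(2 * k)
    _, logdet_joint = np.linalg.slogdet(cov)
    _, logdet_past = np.linalg.slogdet(cov[:k, :k])
    _, logdet_future = np.linalg.slogdet(cov[k:, k:])
    return 0.5 * (logdet_past + logdet_future - logdet_joint)

rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(size=(200, 2)), axis=0)  # smooth, predictable toy latents
print(gaussian_predictive_information(z))          # positive for a predictable series
```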
- A Prospective Study on Sequence-Driven Temporal Sampling and Ego-Motion Compensation for Action Recognition in the EPIC-Kitchens Dataset [68.8204255655161]
Action recognition is one of the most challenging research fields in computer vision.
Sequences recorded under ego-motion have become particularly relevant.
The proposed method copes with this by estimating the ego-motion, or camera motion.
arXiv Detail & Related papers (2020-08-26T14:44:45Z)
- Implicit Saliency in Deep Neural Networks [15.510581400494207]
In this paper, we show that existing recognition and localization deep architectures are capable of predicting human visual saliency.
We calculate this implicit saliency using the expectancy-mismatch hypothesis in an unsupervised fashion.
Our experiments show that extracting saliency in this fashion provides performance comparable to state-of-the-art supervised algorithms.
arXiv Detail & Related papers (2020-08-04T23:14:24Z)
- Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations [143.3053365553897]
We describe a procedure for removing dependency on a cohort of training data from a trained deep network.
We introduce a new bound on how much information can be extracted per query about the forgotten cohort.
We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.
arXiv Detail & Related papers (2020-03-05T23:17:35Z)