Visualising Deep Network's Time-Series Representations
- URL: http://arxiv.org/abs/2103.07176v1
- Date: Fri, 12 Mar 2021 09:53:34 GMT
- Title: Visualising Deep Network's Time-Series Representations
- Authors: Błażej Leporowski and Alexandros Iosifidis
- Abstract summary: Despite the popularisation of machine learning models, more often than not they still operate as black boxes with no insight into what is happening inside the model.
In this paper, a method that addresses that issue is proposed, with a focus on visualising multi-dimensional time-series data.
Experiments on a high-frequency stock market dataset show that the method provides fast and discernible visualisations.
- Score: 93.73198973454944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the popularisation of machine learning models, more often than
not they still operate as black boxes, offering no insight into what is happening
inside the model. A few methods exist that visualise and explain why a model has
made a certain prediction. Those methods, however, only show the causal link
between the model's input and output without presenting how the model learns to
represent the data. In this paper, a method
that addresses that issue is proposed, with a focus on visualising
multi-dimensional time-series data. Experiments on a high-frequency stock
market dataset show that the method provides fast and discernible
visualisations. Large datasets can be visualised quickly and on one plot, which
makes it easy for a user to compare the learned representations of the data.
The developed method successfully combines known and proven techniques to
provide novel insight into the inner workings of time-series classifier models.
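The abstract does not spell out which known techniques are combined; as a rough illustration of the overall idea, the sketch below takes fixed-length feature vectors from a trained time-series classifier and projects the whole dataset onto one 2D scatter plot. PCA is an assumed stand-in for the paper's projection step, not the authors' exact pipeline.
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def visualise_representations(features, labels):
    """Project per-sample feature vectors to 2D and show them on one plot."""
    coords = PCA(n_components=2).fit_transform(features)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=4, cmap="tab10")
    plt.title("Learned time-series representations (2D projection)")
    plt.show()

# features: (n_samples, d) activations from an intermediate layer of a
# trained classifier over multi-dimensional time-series windows.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))   # stand-in for real activations
labels = rng.integers(0, 3, size=1000)   # stand-in for class ids
visualise_representations(features, labels)
```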
Related papers
- ViTime: A Visual Intelligence-Based Foundation Model for Time Series Forecasting [38.87384888881476]
This paper proposes ViTime, a novel Visual Intelligence-based foundation model for time series forecasting.
Experiments on a diverse set of previously unseen forecasting datasets demonstrate that ViTime achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-07-10T02:11:01Z)
- Sequential Modeling Enables Scalable Learning for Large Vision Models [120.91839619284431]
We introduce a novel sequential modeling approach which enables learning a Large Vision Model (LVM) without making use of any linguistic data.
We define a common format, "visual sentences", in which we can represent raw images and videos as well as annotated data sources.
arXiv Detail & Related papers (2023-12-01T18:59:57Z)
- LLM2Loss: Leveraging Language Models for Explainable Model Diagnostics [5.33024001730262]
We propose an approach that can provide semantic insights into a model's patterns of failures and biases.
We show that an ensemble of such lightweight models can be used to generate insights on the performance of the black-box model.
arXiv Detail & Related papers (2023-05-04T23:54:37Z)
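A minimal sketch of the idea as summarised above, assuming the lightweight model is a simple regressor from semantic embeddings to the black box's per-sample loss; the embedding source and the choice of regressor are placeholders.
```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))       # stand-in for semantic embeddings
blackbox_loss = rng.gamma(2.0, 1.0, size=500)  # stand-in for per-sample loss

# Lightweight proxy: predict where the black box tends to fail.
proxy = Ridge(alpha=1.0).fit(embeddings, blackbox_loss)
failure_risk = proxy.predict(embeddings)       # high values flag likely failures
```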
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data. Given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them, the goal is to combine the experts on a per-instance basis, as sketched below.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
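A heavily simplified sketch of instance-wise ensembling in this spirit; the distance-based weights and per-expert mean vectors below are illustrative stand-ins, not the paper's actual procedure.
```python
import numpy as np

def instancewise_ensemble(x, expert_preds, expert_train_means):
    """Weight each expert by how close x lies to that expert's training data,
    summarised here by a per-expert mean vector (an assumption)."""
    dists = np.array([np.linalg.norm(x - m) for m in expert_train_means])
    weights = np.exp(-dists)
    weights /= weights.sum()
    return weights @ np.array(expert_preds)

# Example: three experts, scalar predictions, 2-D inputs.
x = np.array([0.2, 0.8])
print(instancewise_ensemble(x, [0.1, 0.5, 0.9],
                            [np.zeros(2), np.ones(2), np.array([0.0, 1.0])]))
```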
- Perceptual Score: What Data Modalities Does Your Model Perceive? [73.75255606437808]
We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
We find that recent, more accurate multi-modal models for visual question-answering tend to perceive the visual data less than their predecessors.
Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions.
arXiv Detail & Related papers (2021-10-27T12:19:56Z)
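As summarised, the score measures how much a model relies on a given feature subset; a permutation-style approximation of that idea (the paper's exact definition may differ) could look like this:
```python
import numpy as np

def modality_reliance(predict, X_mod, X_other, y, rng):
    """Accuracy with the modality intact minus accuracy with it permuted
    across samples; `predict` is any function of the two feature blocks."""
    acc = (predict(X_mod, X_other) == y).mean()
    X_perm = X_mod[rng.permutation(len(X_mod))]
    acc_perm = (predict(X_perm, X_other) == y).mean()
    return acc - acc_perm

# Toy check: a model that ignores X_other relies heavily on X_mod.
rng = np.random.default_rng(0)
X_mod = rng.normal(size=(400, 1))
X_other = rng.normal(size=(400, 1))
y = (X_mod[:, 0] > 0).astype(int)
predict = lambda a, b: (a[:, 0] > 0).astype(int)
print(modality_reliance(predict, X_mod, X_other, y, rng))  # roughly 0.5
```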
- Generative Models as a Data Source for Multiview Representation Learning [38.56447220165002]
Generative models are capable of producing realistic images that look nearly indistinguishable from the data on which they are trained.
This raises the question: if we have good enough generative models, do we still need datasets?
We investigate this question in the setting of learning general-purpose visual representations from a black-box generative model.
arXiv Detail & Related papers (2021-06-09T17:54:55Z)
- ViViT: A Video Vision Transformer [75.74690759089529]
We present pure-transformer based models for video classification.
Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers.
We show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets.
arXiv Detail & Related papers (2021-03-29T15:27:17Z)
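One of ViViT's embedding schemes cuts the video into spatio-temporal "tubelet" tokens; a minimal sketch of that tokenisation, with illustrative patch sizes:
```python
import numpy as np

def extract_tubelet_tokens(video, t=2, h=16, w=16):
    """Cut a (T, H, W, C) video into non-overlapping t*h*w tubelets and
    flatten each into one token (a linear projection would follow)."""
    T, H, W, C = video.shape
    tokens = []
    for ti in range(0, T - t + 1, t):
        for hi in range(0, H - h + 1, h):
            for wi in range(0, W - w + 1, w):
                tokens.append(video[ti:ti+t, hi:hi+h, wi:wi+w].ravel())
    return np.stack(tokens)

video = np.random.rand(8, 64, 64, 3)
print(extract_tubelet_tokens(video).shape)  # (4*4*4, 2*16*16*3) = (64, 1536)
```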
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Conditional Mutual Information-Based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
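A skeletal version of that two-stage pipeline, with PCA standing in for the learned encoder (the paper's representation learner uses a conditional-mutual-information-based contrastive loss, not PCA):
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
windows = rng.normal(size=(2000, 100))     # stand-in for raw price windows
movements = rng.integers(0, 2, size=2000)  # stand-in for up/down labels

compact = PCA(n_components=16).fit_transform(windows)        # stage 1: representations
simple_model = LogisticRegression().fit(compact, movements)  # stage 2: simple predictor
```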
- Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model Creation [4.456941846147708]
Different deep learning models built on the same training data may capture different views of the data based on the underlying techniques used.
For explaining the decisions arrived at by blackbox deep learning models, we argue that it is essential to reproduce that model's view of the training data faithfully.
arXiv Detail & Related papers (2020-02-02T11:47:15Z)
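A minimal sketch of shadow-model creation as described above: fit an interpretable student on the black box's own predictions so it reproduces the black box's view of the training data. The decision tree and the stand-in black box are illustrative choices.
```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))
# Stand-in for blackbox.predict(X_train); any opaque model's labels work here.
blackbox_labels = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

shadow = DecisionTreeClassifier(max_depth=4).fit(X_train, blackbox_labels)
# The tree's splits give a human-readable approximation of the black box's view.
```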