Local Exceptionality Detection in Time Series Using Subgroup Discovery
- URL: http://arxiv.org/abs/2108.11751v1
- Date: Thu, 5 Aug 2021 17:19:51 GMT
- Title: Local Exceptionality Detection in Time Series Using Subgroup Discovery
- Authors: Dan Hudson and Travis J. Wiltshire and Martin Atzmueller
- Abstract summary: We present a novel approach for local exceptionality detection on time series data.
This method provides the ability to discover interpretable patterns in the data, which can be used to understand and predict the progression of a time series.
- Score: 0.5371337604556311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel approach for local exceptionality detection
on time series data. This method provides the ability to discover interpretable
patterns in the data, which can be used to understand and predict the
progression of a time series. This being an exploratory approach, the results
can be used to generate hypotheses about the relationships between the
variables describing a specific process and its dynamics. We detail our
approach in a concrete instantiation and exemplary implementation, specifically
in the field of teamwork research. Using a real-world dataset of team
interactions, we include results from an example data analytics application of
our proposed approach, showcase novel analysis options, and discuss possible
implications of the results from the perspective of teamwork research.
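To make the core idea concrete: subgroup discovery searches for interpretable conditions ("selectors") on descriptive features whose covered instances are unusually often exceptional. The sketch below is illustrative only, not the authors' implementation; the feature names, selectors, thresholds, and toy series are all invented for the example. It describes sliding windows of a time series by simple statistics and ranks candidate selectors with Weighted Relative Accuracy (WRAcc), a standard subgroup-discovery quality function.

```python
# Illustrative sketch (not the paper's code): subgroup discovery over
# sliding-window features of a time series, scored with WRAcc.

def wracc(n_sg, n_sg_pos, n, n_pos):
    """WRAcc = coverage * (precision in subgroup - overall positive rate)."""
    if n_sg == 0:
        return 0.0
    return (n_sg / n) * (n_sg_pos / n_sg - n_pos / n)

def window_features(series, width):
    """Describe each sliding window by simple interpretable features."""
    feats = []
    for i in range(len(series) - width + 1):
        w = series[i:i + width]
        feats.append({"mean": sum(w) / width, "range": max(w) - min(w)})
    return feats

def discover(series, width, is_exceptional, candidate_selectors):
    """Rank candidate selectors by how strongly their covered windows
    concentrate exceptional behavior (higher WRAcc = better subgroup)."""
    feats = window_features(series, width)
    labels = [is_exceptional(series, i, width) for i in range(len(feats))]
    n, n_pos = len(feats), sum(labels)
    scored = []
    for name, sel in candidate_selectors.items():
        covered = [sel(f) for f in feats]
        n_sg = sum(covered)
        n_sg_pos = sum(1 for c, l in zip(covered, labels) if c and l)
        scored.append((wracc(n_sg, n_sg_pos, n, n_pos), name))
    return sorted(scored, reverse=True)

# Toy example: a flat series with a volatile burst in the middle.
series = [1.0] * 10 + [1.0, 5.0, 1.0, 5.0, 1.0, 5.0] + [1.0] * 10
label = lambda s, i, w: max(s[i:i + w]) - min(s[i:i + w]) > 2  # "exceptional" = high spread
selectors = {
    "range > 2": lambda f: f["range"] > 2,
    "mean > 1.5": lambda f: f["mean"] > 1.5,
    "mean <= 1.5": lambda f: f["mean"] <= 1.5,
}
ranking = discover(series, width=3, is_exceptional=label, candidate_selectors=selectors)
best_score, best_name = ranking[0]
print(best_name, round(best_score, 3))
```

The top-ranked selector is an interpretable description of where the series behaves exceptionally; real tooling such as the pysubgroup library offers richer search strategies and quality functions along the same lines.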
Related papers
- The Devil is in the Spurious Correlations: Boosting Moment Retrieval with Dynamic Learning [49.40254251698784]
We propose a dynamic learning approach for moment retrieval, where two strategies are designed to mitigate spurious correlations.
First, we introduce a novel video synthesis approach to construct a dynamic context for the queried moment.
Second, to alleviate the over-association with backgrounds, we enhance representations temporally by incorporating text-dynamics interaction.
arXiv Detail & Related papers (2025-01-13T13:13:06Z)
- Pairwise Spatiotemporal Partial Trajectory Matching for Co-movement Analysis [1.0942776587291776]
Pairwise movement analysis involves identifying individuals within specific time frames.
We propose a novel method for partial spatiotemporal trajectory matching that transforms data into interpretable images based on time windows.
We evaluate our method on a co-walking classification task, demonstrating its effectiveness in a novel co-behavior identification application.
This approach offers a powerful, interpretable framework for spatiotemporal behavior analysis, with potential applications in social behavior research, urban planning, and healthcare.
arXiv Detail & Related papers (2024-12-03T22:25:44Z)
- Context is Key: A Benchmark for Forecasting with Essential Textual Information [87.3175915185287]
"Context is Key" (CiK) is a time series forecasting benchmark that pairs numerical data with diverse types of carefully crafted textual context.
We evaluate a range of approaches, including statistical models, time series foundation models, and LLM-based forecasters.
Our experiments highlight the importance of incorporating contextual information, demonstrate surprising performance when using LLM-based forecasting models, and also reveal some of their critical shortcomings.
arXiv Detail & Related papers (2024-10-24T17:56:08Z)
- A Survey on Diffusion Models for Time Series and Spatio-Temporal Data [92.1255811066468]
We review the use of diffusion models in time series and spatio-temporal data, categorizing them by model, task type, data modality, and practical application domain.
We categorize diffusion models into unconditioned and conditioned types, and discuss time series and spatio-temporal data separately.
Our survey covers their application extensively in various fields including healthcare, recommendation, climate, energy, audio, and transportation.
arXiv Detail & Related papers (2024-04-29T17:19:40Z)
- generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation [28.715001906405362]
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation, yet their outputs remain difficult to analyze and adapt.
We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs.
We present generAItor, a visual analytics technique, augmenting the central beam search tree with various task-specific widgets, providing targeted visualizations and interaction possibilities.
arXiv Detail & Related papers (2024-03-12T13:09:15Z)
- Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models to protect against MI attacks and discuss the inherent trade-off between privacy and utility.
arXiv Detail & Related papers (2023-10-20T05:44:39Z)
- Extracting Interpretable Local and Global Representations from Attention on Time Series [0.135975510645475]
This paper targets two transformer-attention-based interpretability methods, working with local abstraction and global representation.
We distinguish local and global contexts, and provide a comprehensive framework for both general interpretation options.
arXiv Detail & Related papers (2023-09-16T00:51:49Z)
- DANLIP: Deep Autoregressive Networks for Locally Interpretable Probabilistic Forecasting [0.0]
We propose a novel deep learning-based probabilistic time series forecasting architecture that is intrinsically interpretable.
We show that our model is not only interpretable but also provides comparable performance to state-of-the-art probabilistic time series forecasting methods.
arXiv Detail & Related papers (2023-01-05T23:40:23Z)
- A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias [17.518601254380275]
We compare and rank eight user modeling algorithms based on their performance on a diverse set of four user study datasets.
Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
arXiv Detail & Related papers (2022-08-09T19:51:10Z)
- Temporal Relevance Analysis for Video Action Models [70.39411261685963]
We first propose a new approach to quantify the temporal relationships between frames captured by CNN-based action models.
We then conduct comprehensive experiments and in-depth analysis to provide a better understanding of how temporal modeling is affected.
arXiv Detail & Related papers (2022-04-25T19:06:48Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- An Empirical Study: Extensive Deep Temporal Point Process [61.14164208094238]
We first review recent research emphasis and difficulties in modeling asynchronous event sequences with deep temporal point processes.
We propose a Granger causality discovery framework for exploiting the relations among multi-types of events.
arXiv Detail & Related papers (2021-10-19T10:15:00Z)
- Deep Neural Approaches to Relation Triplets Extraction: A Comprehensive Survey [22.586079965178975]
We focus on relation extraction using deep neural networks on publicly available datasets.
We cover a range from sentence-level to document-level relation extraction, from pipeline-based to joint extraction approaches, and from annotated to distantly supervised datasets.
Regarding neural architectures, we cover convolutional models, recurrent network models, attention network models, and graph convolutional models in this survey.
arXiv Detail & Related papers (2021-03-31T09:27:15Z)
- CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems [121.78477833009671]
We investigate the performance of different summarization models under a cross-dataset setting.
A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation methods.
arXiv Detail & Related papers (2020-10-11T02:19:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.