WormKAN: Are KAN Effective for Identifying and Tracking Concept Drift in Time Series?
        - URL: http://arxiv.org/abs/2410.10041v2
 - Date: Fri, 13 Dec 2024 00:23:09 GMT
 - Title: WormKAN: Are KAN Effective for Identifying and Tracking Concept Drift in Time Series?
 - Authors: Kunpeng Xu, Lifei Chen, Shengrui Wang
 - Abstract summary: WormKAN is a concept-aware KAN-based model to address concept drift in co-evolving time series. WormKAN consists of three key components: Patch Normalization, Temporal Representation Module, and Concept Dynamics.
 - Score: 6.4314326272535896
 - License: http://creativecommons.org/licenses/by-nc-sa/4.0/
 - Abstract: Dynamic concepts in time series are crucial for understanding complex systems such as financial markets, healthcare, and online activity logs. These concepts help reveal structures and behaviors in sequential data for better decision-making and forecasting. However, existing models often struggle to detect and track concept drift due to limitations in interpretability and adaptability. To address this challenge, inspired by the flexibility of the recent Kolmogorov-Arnold Network (KAN), we propose WormKAN, a concept-aware KAN-based model to address concept drift in co-evolving time series. WormKAN consists of three key components: Patch Normalization, Temporal Representation Module, and Concept Dynamics. Patch normalization processes co-evolving time series into patches, treating them as fundamental modeling units to capture local dependencies while ensuring consistent scaling. The temporal representation module learns robust latent representations by leveraging a KAN-based autoencoder, complemented by a smoothness constraint, to uncover inter-patch correlations. Concept dynamics identifies and tracks dynamic transitions, revealing structural shifts in the time series through concept identification and drift detection. These transitions, akin to passing through a "wormhole", are identified by abrupt changes in the latent space. Experiments show that KAN and KAN-based models (WormKAN) effectively segment time series into meaningful concepts, enhancing the identification and tracking of concept drift.
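The pipeline described in the abstract can be pictured with a minimal sketch: normalize patches, encode them with an autoencoder trained under a smoothness penalty, and flag "wormholes" where consecutive latent vectors jump abruptly. The sketch below is an illustrative reconstruction only, not the authors' implementation: a plain MLP autoencoder stands in for the KAN-based autoencoder, and every name and hyperparameter (make_patches, PatchAutoencoder, detect_wormholes, the 3-sigma jump rule) is a hypothetical choice.

```python
# Illustrative WormKAN-style pipeline (not the authors' code):
# (1) split a co-evolving series into per-patch-normalized patches,
# (2) learn latent patch representations with an autoencoder plus a
#     temporal smoothness penalty, and
# (3) flag concept transitions where consecutive latents jump abruptly.
import torch
import torch.nn as nn


def make_patches(x: torch.Tensor, patch_len: int) -> torch.Tensor:
    """Split a (time, variables) series into flattened, per-patch-normalized patches."""
    n_patches = x.shape[0] // patch_len
    patches = x[: n_patches * patch_len].reshape(n_patches, patch_len * x.shape[1])
    mean = patches.mean(dim=1, keepdim=True)
    std = patches.std(dim=1, keepdim=True) + 1e-8
    return (patches - mean) / std  # patch normalization


class PatchAutoencoder(nn.Module):
    """Stand-in for the KAN-based autoencoder: encodes each patch to a latent vector."""

    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.GELU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.GELU(), nn.Linear(64, in_dim))

    def forward(self, p):
        z = self.encoder(p)
        return self.decoder(z), z


def train(model, patches, epochs=200, smooth_weight=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, z = model(patches)
        loss = nn.functional.mse_loss(recon, patches)
        # smoothness constraint: neighbouring patches should have similar latents
        loss = loss + smooth_weight * (z[1:] - z[:-1]).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def detect_wormholes(z: torch.Tensor, k: float = 3.0) -> torch.Tensor:
    """Flag concept transitions where the latent jump exceeds k standard deviations."""
    jumps = (z[1:] - z[:-1]).norm(dim=1)
    threshold = jumps.mean() + k * jumps.std()
    return (jumps > threshold).nonzero(as_tuple=True)[0] + 1  # patch indices of new concepts


if __name__ == "__main__":
    t = torch.arange(600, dtype=torch.float32).unsqueeze(1)
    # Two regimes with different periodic shapes; the concept changes at t = 300.
    series = torch.cat([torch.sin(2 * torch.pi * t[:300] / 20),
                        torch.sin(2 * torch.pi * t[300:] / 10)])
    patches = make_patches(series, patch_len=20)
    model = train(PatchAutoencoder(in_dim=patches.shape[1]), patches)
    _, z = model(patches)
    print("concept transitions at patch indices:", detect_wormholes(z.detach()).tolist())
```

Replacing the MLP encoder/decoder with KAN layers (learnable spline activations) would bring the sketch closer to the model the abstract describes; the patching and drift-detection steps would stay the same.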
 
       
      
        Related papers
- Unify and Anchor: A Context-Aware Transformer for Cross-Domain Time Series Forecasting [26.59526791215]
We identify two key challenges in cross-domain time series forecasting: the complexity of temporal patterns and semantic misalignment.
We propose the "Unify and Anchor" transfer paradigm, which disentangles frequency components for a unified perspective.
We introduce ContexTST, a Transformer-based model that employs a time series coordinator for structured representation.
arXiv  Detail & Related papers  (2025-03-03T04:11:14Z) - Community-Aware Temporal Walks: Parameter-Free Representation Learning on Continuous-Time Dynamic Graphs [3.833708891059351]
Community-aware Temporal Walks (CTWalks) is a novel framework for representation learning on continuous-time dynamic graphs.
CTWalks integrates a community-based parameter-free temporal walk sampling mechanism, an anonymization strategy enriched with community labels, and an encoding process.
 Experiments on benchmark datasets demonstrate that CTWalks outperforms established methods in temporal link prediction tasks.
arXiv  Detail & Related papers  (2025-01-21T04:16:46Z) - CORAL: Concept Drift Representation Learning for Co-evolving Time-series [6.4314326272535896]
Concept drift affects the reliability and accuracy of conventional analysis models.
This paper presents CORAL, a method that models time series as an evolving ecosystem to learn representations of concept drift.
arXiv  Detail & Related papers  (2025-01-02T15:09:00Z) - How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? [91.49559116493414]
We propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM).
It can resolve catastrophic forgetting and concept neglect to learn new customization tasks in a concept-incremental manner.
 Experiments validate that our CIDM surpasses existing custom diffusion models.
arXiv  Detail & Related papers  (2024-10-23T06:47:29Z) - Wormhole: Concept-Aware Deep Representation Learning for Co-Evolving Sequences [6.4314326272535896]
This paper introduces Wormhole, a novel deep representation learning framework that is concept-aware and designed for co-evolving time sequences.
Concept transitions are detected by identifying abrupt changes in the latent space, signifying a shift to new behavior.
This novel mechanism accurately discerns concepts within co-evolving sequences and pinpoints the exact locations of these wormholes.
arXiv  Detail & Related papers  (2024-09-20T19:11:39Z) - Temporal Feature Matters: A Framework for Diffusion Model Quantization [105.3033493564844]
Diffusion models rely on the time-step for multi-round denoising.
We introduce a novel quantization framework that includes three strategies.
This framework preserves most of the temporal information and ensures high-quality end-to-end generation.
arXiv  Detail & Related papers  (2024-07-28T17:46:15Z) - ECATS: Explainable-by-design concept-based anomaly detection for time series [0.5956301166481089]
We propose ECATS, a concept-based neuro-symbolic architecture where concepts are represented as Signal Temporal Logic (STL) formulae.
We show that our model is able to achieve great classification performance while ensuring local interpretability.
arXiv  Detail & Related papers  (2024-05-17T08:12:53Z) - Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective [63.60312929416228]
Attraos incorporates chaos theory into long-term time series forecasting.
We show that Attraos outperforms various LTSF methods on mainstream datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.
arXiv  Detail & Related papers  (2024-02-18T05:35:01Z) - A Remark on Concept Drift for Dependent Data [7.0072935721154614]
We show that temporal dependencies strongly influence the sampling process.
In particular, we show that the notion of stationarity is not suited for this setup and discuss alternatives.
arXiv  Detail & Related papers  (2023-12-15T21:11:46Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv  Detail & Related papers  (2023-03-28T14:27:27Z) - A Dynamic Temporal Self-attention Graph Convolutional Network for Traffic Prediction [7.23135508361981]
This paper proposes a dynamic temporal self-attention graph convolutional network (DT-SGN) model that treats the adjacency matrix as a trainable attention score matrix.
Experiments demonstrate the superiority of our method over state-of-the-art model-driven and data-driven models on real-world traffic datasets.
arXiv  Detail & Related papers  (2023-02-21T03:51:52Z) - SpatioTemporal Focus for Skeleton-based Action Recognition [66.8571926307011]
Graph convolutional networks (GCNs) are widely adopted in skeleton-based action recognition.
We argue that the performance of recently proposed skeleton-based action recognition methods is limited by several factors.
Inspired by the recent attention mechanism, we propose a multi-grain contextual focus module, termed MCF, to capture the action associated relation information.
arXiv  Detail & Related papers  (2022-03-31T02:45:24Z) - Modeling Temporal Concept Receptive Field Dynamically for Untrimmed Video Analysis [105.06166692486674]
We study the temporal concept receptive field of concept-based event representations.
We introduce temporal dynamic convolution (TDC) to give stronger flexibility to concept-based event analytics.
Different coefficients generate appropriate and accurate temporal concept receptive field sizes for different input videos.
arXiv  Detail & Related papers  (2021-11-23T04:59:48Z) - Learning Parameter Distributions to Detect Concept Drift in Data Streams [13.20231558027132]
We propose a novel framework for the detection of real concept drift, called ERICS.
By treating the parameters of a predictive model as random variables, we show that concept drift corresponds to a change in the distribution of optimal parameters.
ERICS is also capable of detecting concept drift at the input level, a significant advantage over existing approaches (see the illustrative sketch after this list).
arXiv  Detail & Related papers  (2020-10-19T11:19:16Z) - A Prospective Study on Sequence-Driven Temporal Sampling and Ego-Motion Compensation for Action Recognition in the EPIC-Kitchens Dataset [68.8204255655161]
Action recognition is one of the top-challenging research fields in computer vision.
Ego-motion recorded sequences have become particularly relevant.
The proposed method aims to cope with this by estimating the ego-motion, or camera motion.
arXiv  Detail & Related papers  (2020-08-26T14:44:45Z) - Counterfactual Explanations of Concept Drift [11.53362411363005]
Concept drift refers to the phenomenon that the distribution underlying the observed data changes over time.
We present a novel technology, which characterizes concept drift in terms of the characteristic change of spatial features represented by typical examples.
arXiv  Detail & Related papers  (2020-06-23T08:27:57Z) - Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of the phase space.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv  Detail & Related papers  (2020-06-19T21:04:47Z) - Dynamic Inference: A New Approach Toward Efficient Video Action Recognition [69.9658249941149]
Action recognition in videos has achieved great success recently, but it remains a challenging task due to the massive computational cost.
We propose a general dynamic inference idea to improve inference efficiency by leveraging the variation in the distinguishability of different videos.
arXiv  Detail & Related papers  (2020-02-09T11:09:56Z) - Spatial-Temporal Transformer Networks for Traffic Flow Forecasting [74.76852538940746]
We propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) to improve the accuracy of long-term traffic forecasting.
Specifically, we present a new variant of graph neural networks, named spatial transformer, by dynamically modeling directed spatial dependencies.
The proposed model enables fast and scalable training over long-range spatial-temporal dependencies.
arXiv  Detail & Related papers  (2020-01-09T10:21:04Z) 
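The ERICS entry above treats the parameters of a predictive model as random variables and equates concept drift with a change in the distribution of optimal parameters. The sketch below is a rough, hypothetical illustration of that idea, not the ERICS algorithm itself: it fits a one-parameter online model by SGD, compares the recent parameter trajectory to an older window, and flags drift when the distribution shifts. The window size, drift score, and threshold are arbitrary choices made for the example.

```python
# Hypothetical illustration of the parameter-distribution view of concept drift
# (NOT the ERICS algorithm): monitor the trajectory of an online model's
# parameter and flag drift when its recent distribution shifts.
import numpy as np

rng = np.random.default_rng(0)

# Stream with a concept change half-way: y depends on x via w = +2, then w = -2.
X = rng.normal(size=2000)
y = np.where(np.arange(2000) < 1000, 2.0 * X, -2.0 * X) + 0.1 * rng.normal(size=2000)

w, lr, window = 0.0, 0.05, 50
snapshots = []  # trajectory of the fitted parameter over time

for xt, yt in zip(X, y):
    w -= lr * 2.0 * (w * xt - yt) * xt  # one SGD step on squared error
    snapshots.append(w)

snapshots = np.array(snapshots)
for t in range(2 * window, len(snapshots), window):
    old = snapshots[t - 2 * window : t - window]
    new = snapshots[t - window : t]
    # Drift score: shift of the recent parameter distribution, in units of the
    # old window's standard deviation (a crude stand-in for a proper divergence).
    score = abs(new.mean() - old.mean()) / (old.std() + 1e-8)
    if score > 5.0:
        print(f"parameter-distribution drift flagged near step {t}")
```

A real detector would compare parameter distributions with a calibrated divergence and threshold; the standardized mean shift above is only a stand-in for the idea.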
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     