Spatiotemporal k-means
- URL: http://arxiv.org/abs/2211.05337v2
- Date: Mon, 15 Apr 2024 00:19:41 GMT
- Title: Spatiotemporal k-means
- Authors: Olga Dorabiala, Devavrat Vivek Dabke, Jennifer Webster, Nathan Kutz, Aleksandr Aravkin
- Abstract summary: We propose a two-phase spatiotemporal clustering method called spatiotemporal k-means (STkM) that is able to analyze the multi-scale relationships within spatiotemporal data.
We show how STkM can be extended to more complex machine learning tasks, particularly unsupervised region of interest detection and tracking in videos.
- Score: 39.98633724527769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatiotemporal data is increasingly available due to emerging sensor and data acquisition technologies that track moving objects. Spatiotemporal clustering addresses the need to efficiently discover patterns and trends in moving object behavior without human supervision. One application of interest is the discovery of moving clusters, where clusters have a static identity, but their location and content can change over time. We propose a two phase spatiotemporal clustering method called spatiotemporal k-means (STkM) that is able to analyze the multi-scale relationships within spatiotemporal data. By optimizing an objective function that is unified over space and time, the method can track dynamic clusters at both short and long timescales with minimal parameter tuning and no post-processing. We begin by proposing a theoretical generating model for spatiotemporal data and prove the efficacy of STkM in this setting. We then evaluate STkM on a recently developed collective animal behavior benchmark dataset and show that STkM outperforms baseline methods in the low-data limit, which is a critical regime of consideration in many emerging applications. Finally, we showcase how STkM can be extended to more complex machine learning tasks, particularly unsupervised region of interest detection and tracking in videos.
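The abstract describes a clustering objective that is unified over space and time: per-frame k-means cost coupled with temporal coherence of the cluster centers. As a rough illustration only (not the authors' implementation, whose details are in the paper), the sketch below alternates a nearest-centroid assignment step with a centroid update that blends each cluster's frame mean with its temporally adjacent centroids. The function name `stkm_sketch`, the smoothness weight `lam`, and the farthest-point initialization are illustrative assumptions.

```python
import numpy as np

def stkm_sketch(X, k, lam=0.5, n_iter=50):
    """Toy spatiotemporal k-means on X of shape (T, N, d):
    N moving points observed over T time steps.

    Alternately minimizes a cost that sums per-frame k-means
    distortion plus lam * squared displacement between centroids
    of consecutive frames (a simple space-time coupling)."""
    T, N, d = X.shape
    # Farthest-point initialization on the first frame, shared by all frames.
    init = [X[0][0]]
    for _ in range(k - 1):
        d2 = np.min([((X[0] - c) ** 2).sum(-1) for c in init], axis=0)
        init.append(X[0][d2.argmax()])
    C = np.tile(np.array(init, dtype=float), (T, 1, 1))  # (T, k, d) centroids
    labels = np.zeros((T, N), dtype=int)
    for _ in range(n_iter):
        # Assignment step: nearest centroid within each frame.
        for t in range(T):
            dists = ((X[t][:, None, :] - C[t][None, :, :]) ** 2).sum(-1)
            labels[t] = dists.argmin(axis=1)
        # Update step: blend each cluster's frame mean with its temporally
        # adjacent centroids, weighted by the smoothness penalty lam.
        for t in range(T):
            for j in range(k):
                pts = X[t][labels[t] == j]
                neigh = [C[s, j] for s in (t - 1, t + 1) if 0 <= s < T]
                w = len(pts) + lam * len(neigh)
                if w > 0:
                    C[t, j] = (pts.sum(axis=0) + lam * np.sum(neigh, axis=0)) / w
    return labels, C
```

With `lam=0` this reduces to running k-means independently on each frame; increasing `lam` pulls each cluster's trajectory of centroids toward a smooth path, which is the spirit of linking clusters across time with a single objective.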
Related papers
- Spatial-Temporal Cross-View Contrastive Pre-training for Check-in Sequence Representation Learning [21.580705078081078]
We propose a novel Spatial-Temporal Cross-view Contrastive Representation (ST CCR) framework for check-in sequence representation learning.
ST CCR employs self-supervision from "spatial topic" and "temporal intention" views, facilitating effective fusion of spatial and temporal information at the semantic level.
We extensively evaluate ST CCR on three real-world datasets and demonstrate its superior performance across three downstream tasks.
arXiv Detail & Related papers (2024-07-22T10:20:34Z) - Concrete Dense Network for Long-Sequence Time Series Clustering [4.307648859471193]
Time series clustering is fundamental in data analysis for discovering temporal patterns.
Deep temporal clustering methods have been trying to integrate the canonical k-means into end-to-end training of neural networks.
LoSTer is a novel dense autoencoder architecture for the long-sequence time series clustering problem.
arXiv Detail & Related papers (2024-05-08T12:31:35Z) - Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers [55.46413719810273]
Rich spatio-temporal information is crucial for modeling complicated target appearance in visual tracking.
Our method improves the tracker's performance on six popular tracking benchmarks.
arXiv Detail & Related papers (2024-03-15T02:39:26Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Tracking Objects and Activities with Attention for Temporal Sentence Grounding [51.416914256782505]
Temporal sentence grounding (TSG) aims to localize the temporal segment that is semantically aligned with a natural language query in an untrimmed video.
We propose a novel Temporal Sentence Tracking Network (TSTNet), which contains (A) a Cross-modal Targets Generator to generate multi-modal templates and the search space, and (B) a Temporal Sentence Tracker to track the multi-modal targets' behavior and predict the query-related segment.
arXiv Detail & Related papers (2023-02-21T16:42:52Z) - Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z) - Time Series Clustering for Human Behavior Pattern Mining [11.906475748246532]
We propose a novel clustering approach for modeling human behavior from time-series data.
For mining frequent human behavior patterns effectively, we utilize a three-stage pipeline.
Empirical studies on two real-world datasets and a simulated dataset demonstrate the effectiveness of MTpattern.
arXiv Detail & Related papers (2021-10-14T17:19:35Z) - Neural Ordinary Differential Equation Model for Evolutionary Subspace Clustering and Its Applications [36.700813256689656]
We propose a neural ODE model for evolutionary subspace clustering to overcome the limitations of existing evolutionary clustering methods.
We demonstrate that this method can not only interpolate data at any time step for the evolutionary subspace clustering task, but also achieve higher accuracy than other state-of-the-art methods.
arXiv Detail & Related papers (2021-07-22T07:02:03Z) - Clustering of Time Series Data with Prior Geographical Information [0.26651200086513094]
We propose a spatial-temporal clustering model that groups time series data based on spatial and temporal contexts.
The proposed model, Spatial-DEC (S-DEC), uses prior geographical information in building latent feature representations.
The results show that the proposed Spatial-DEC can find more desirable spatial-temporal clusters.
arXiv Detail & Related papers (2021-07-03T00:19:17Z) - A Spatial-Temporal Attentive Network with Spatial Continuity for Trajectory Prediction [74.00750936752418]
We propose a novel model named spatial-temporal attentive network with spatial continuity (STAN-SC).
First, a spatial-temporal attention mechanism is presented to explore the most useful and important information.
Second, we construct a joint feature sequence from sequence and instant state information so that the generated trajectories preserve spatial continuity.
arXiv Detail & Related papers (2020-03-13T04:35:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.