MARAuder's Map: Motion-Aware Real-time Activity Recognition with Layout-Based Trajectories
- URL: http://arxiv.org/abs/2511.05773v1
- Date: Sat, 08 Nov 2025 00:07:43 GMT
- Title: MARAuder's Map: Motion-Aware Real-time Activity Recognition with Layout-Based Trajectories
- Authors: Zishuai Liu, Weihang You, Jin Lu, Fei Dou
- Abstract summary: We propose a novel framework for real-time activity recognition from raw, unsegmented sensor streams. Our method projects sensor activations onto the physical floorplan to generate trajectory-aware, image-like sequences. To enhance temporal awareness, we introduce a learnable time embedding module that encodes contextual cues.
- Score: 3.788163163289351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ambient sensor-based human activity recognition (HAR) in smart homes remains challenging due to the need for real-time inference, spatially grounded reasoning, and context-aware temporal modeling. Existing approaches often rely on pre-segmented, within-activity data and overlook the physical layout of the environment, limiting their robustness in continuous, real-world deployments. In this paper, we propose MARAuder's Map, a novel framework for real-time activity recognition from raw, unsegmented sensor streams. Our method projects sensor activations onto the physical floorplan to generate trajectory-aware, image-like sequences that capture the spatial flow of human movement. These representations are processed by a hybrid deep learning model that jointly captures spatial structure and temporal dependencies. To enhance temporal awareness, we introduce a learnable time embedding module that encodes contextual cues such as hour-of-day and day-of-week. Additionally, an attention-based encoder selectively focuses on informative segments within each observation window, enabling accurate recognition even under cross-activity transitions and temporal ambiguity. Extensive experiments on multiple real-world smart home datasets demonstrate that our method outperforms strong baselines, offering a practical solution for real-time HAR in ambient sensor environments.
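The abstract describes two representation steps: projecting sensor activations onto the floorplan as image-like frames, and encoding hour-of-day and day-of-week as contextual time features. A minimal sketch of both ideas follows; the sensor names, grid coordinates, grid size, and the cyclical sin/cos encoding are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical sensor-to-floorplan mapping: each ambient sensor is
# assigned a (row, col) cell on a coarse grid over the home layout.
SENSOR_XY = {"M001": (2, 3), "M002": (5, 1), "M003": (7, 6)}
GRID_H, GRID_W = 8, 8

def events_to_frames(events):
    """Turn a list of (sensor_id, hour, weekday) events into one
    floorplan frame per event, marking the active sensor's cell.
    Stacked over time, the frames trace the spatial flow of movement."""
    frames = []
    for sensor_id, _, _ in events:
        frame = np.zeros((GRID_H, GRID_W), dtype=np.float32)
        r, c = SENSOR_XY[sensor_id]
        frame[r, c] = 1.0
        frames.append(frame)
    return np.stack(frames)  # shape (T, H, W), an image-like sequence

def cyclical_time_features(hour, weekday):
    """Encode hour-of-day and day-of-week on the unit circle so that
    23:00 sits next to 00:00 and Sunday next to Monday."""
    return np.array([
        np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24),
        np.sin(2 * np.pi * weekday / 7), np.cos(2 * np.pi * weekday / 7),
    ])

# Toy stream: three sensor firings on a Monday morning.
events = [("M001", 7, 0), ("M002", 7, 0), ("M003", 8, 0)]
seq = events_to_frames(events)           # (3, 8, 8)
feats = cyclical_time_features(7, 0)     # (4,)
print(seq.shape, feats.shape)
```

In the paper the time context is a *learnable* embedding rather than a fixed sin/cos encoding, and the frame sequence feeds a hybrid spatial-temporal model with attention over the observation window; the sketch above only shows the input-side representation.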
Related papers
- Integrating Temporal Context into Streaming Data for Human Activity Recognition in Smart Home [3.1032184155196982]
Human Activity Recognition (HAR) from passive sensors mostly relies on traditional machine learning. We tackle this by clustering activities into morning, afternoon, and night. We propose to extend the feature vector by incorporating time of day and day of week as cyclical temporal features.
arXiv Detail & Related papers (2026-01-09T09:47:06Z) - Online Segment Any 3D Thing as Instance Tracking [60.20416622842975]
We reconceptualize online 3D segmentation as an instance tracking problem (AutoSeg3D). We introduce spatial consistency learning to mitigate the fragmentation problem inherent in Vision Foundation Models. Our method establishes a new state-of-the-art, surpassing ESAM by 2.8 AP on ScanNet200.
arXiv Detail & Related papers (2025-12-08T14:48:51Z) - Adaptive State-Space Mamba for Real-Time Sensor Data Anomaly Detection [2.922256022514318]
We propose an Adaptive State-Space Mamba framework for real-time sensor data anomaly detection. Our approach is easily adaptable to other time-series tasks that demand rapid and reliable detection capabilities.
arXiv Detail & Related papers (2025-03-26T21:37:48Z) - DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal Forecasting [31.398965880415492]
Earth science systems rely heavily on the extensive deployment of sensors. Traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors. We introduce, for the first time, the concept of dynamic sparse training for spatio-temporal data, adaptively and dynamically filtering important sensor distributions.
arXiv Detail & Related papers (2024-03-05T12:31:24Z) - LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [53.5449912019877]
We present the Long-term Effective Any Point Tracking (LEAP) module. LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation. Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z) - Generalizing Event-Based Motion Deblurring in Real-World Scenarios [62.995994797897424]
Event-based motion deblurring has shown promising results by exploiting low-latency events.
We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur.
A two-stage self-supervised learning scheme is then developed to fit real-world data distribution.
arXiv Detail & Related papers (2023-08-11T04:27:29Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - A Spatio-Temporal Multilayer Perceptron for Gesture Recognition [70.34489104710366]
We propose a multilayer state-weighted perceptron for gesture recognition in the context of autonomous vehicles.
An evaluation on the TCG and Drive&Act datasets is provided to showcase the promising performance of our approach.
We deploy our model to our autonomous vehicle to show its real-time capability and stable execution.
arXiv Detail & Related papers (2022-04-25T08:42:47Z) - Temporal Predictive Coding For Model-Based Planning In Latent Space [80.99554006174093]
We present an information-theoretic approach that employs temporal predictive coding to encode elements in the environment that can be predicted across time.
We evaluate our model on a challenging modification of standard DMControl tasks where the background is replaced with natural videos that contain complex but irrelevant information to the planning task.
arXiv Detail & Related papers (2021-06-14T04:31:15Z) - Human Activity Recognition from Wearable Sensor Data Using Self-Attention [2.9023633922848586]
We present a self-attention based neural network model for activity recognition from body-worn sensor data.
We performed experiments on four popular publicly available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD.
Our model achieves significant performance improvements over recent state-of-the-art models in both benchmark test subjects and leave-one-out-subject evaluation.
arXiv Detail & Related papers (2020-03-17T14:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.