Online Meal Detection Based on CGM Data Dynamics
- URL: http://arxiv.org/abs/2507.00080v1
- Date: Sun, 29 Jun 2025 14:24:05 GMT
- Title: Online Meal Detection Based on CGM Data Dynamics
- Authors: Ali Tavasoli, Heman Shakeri
- Abstract summary: We utilize dynamical modes as features derived from Continuous Glucose Monitoring (CGM) data to detect meal events. This approach not only improves the accuracy of meal detection but also enhances the interpretability of the underlying glucose dynamics.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We utilize dynamical modes as features derived from Continuous Glucose Monitoring (CGM) data to detect meal events. By leveraging the inherent properties of underlying dynamics, these modes capture key aspects of glucose variability, enabling the identification of patterns and anomalies associated with meal consumption. This approach not only improves the accuracy of meal detection but also enhances the interpretability of the underlying glucose dynamics. By focusing on dynamical features, our method provides a robust framework for feature extraction, facilitating generalization across diverse datasets and ensuring reliable performance in real-world applications. The proposed technique offers significant advantages over traditional approaches, improving detection accuracy.
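The abstract does not name the mode-extraction algorithm; a common way to obtain "dynamical modes" from a scalar time series is exact Dynamic Mode Decomposition (DMD) on a delay-embedded sliding window. The sketch below is an illustrative assumption, not the authors' method: the window length, embedding depth `d`, rank, growth threshold, and the `meal_flag` helper are all hypothetical choices. It extracts DMD eigenvalues from a Hankel-embedded CGM window and flags a candidate meal when a strongly growing mode appears.

```python
import numpy as np

def hankel_embed(x, d):
    """Delay-embed a 1-D series x into a (d, len(x)-d+1) Hankel matrix."""
    n = len(x) - d + 1
    return np.array([x[i:i + n] for i in range(d)], dtype=float)

def dmd_eigs(window, d=8, rank=4):
    """Exact DMD on a delay-embedded CGM window; returns mode eigenvalues."""
    H = hankel_embed(np.asarray(window, dtype=float), d)
    X, Y = H[:, :-1], H[:, 1:]                    # one-step-shifted snapshot pairs
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = min(rank, int(np.sum(s > 1e-8 * s[0])))   # drop numerically zero modes
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    # Reduced linear operator: A_tilde = U* Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

def meal_flag(window, growth_thresh=1.02):
    """Flag a candidate meal if any DMD mode grows faster than the threshold."""
    return bool(np.max(np.abs(dmd_eigs(window))) > growth_thresh)
```

Under this sketch, a flat glucose window yields an eigenvalue near 1 (no flag), while a steady postprandial-style rise produces a mode with |λ| > 1 and triggers the flag; in practice the threshold and window length would be tuned on labeled meal data.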
Related papers
- Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models [70.03122709795122]
Previous backdoor detection methods primarily focus on the static features of backdoor samples. This study introduces a novel backdoor detection perspective named Dynamic Attention Analysis (DAA), showing that these dynamic characteristics serve as better indicators for backdoor detection. Our approach significantly surpasses existing detection methods, achieving an average F1 Score of 79.49% and an AUC of 87.67%.
arXiv Detail & Related papers (2025-04-29T07:59:35Z) - Skeleton-Based Intake Gesture Detection With Spatial-Temporal Graph Convolutional Networks [1.5228527154365612]
This study introduces a skeleton-based approach using a model that combines a dilated spatial-temporal graph convolutional network (ST-GCN) with a bidirectional long short-term memory (BiLSTM) framework to detect intake gestures. The results confirm the feasibility of utilizing skeleton data for intake gesture detection and highlight the robustness of the proposed approach in cross-dataset validation.
arXiv Detail & Related papers (2025-04-14T18:35:32Z) - AVadCLIP: Audio-Visual Collaboration for Robust Video Anomaly Detection [57.649223695021114]
We present a novel weakly supervised framework that leverages audio-visual collaboration for robust video anomaly detection. Our framework demonstrates superior performance across multiple benchmarks, with audio integration significantly boosting anomaly detection accuracy.
arXiv Detail & Related papers (2025-04-06T13:59:16Z) - RAAD-LLM: Adaptive Anomaly Detection Using LLMs and RAG Integration [2.879328762187361]
We present RAAD-LLM, a novel framework for adaptive anomaly detection. By effectively utilizing domain-specific knowledge, RAAD-LLM enhances the detection of anomalies in time series data. Results show significant improvements over our previous model, with an accuracy increase from 70.7% to 88.6% on the real-world dataset.
arXiv Detail & Related papers (2025-03-04T17:20:43Z) - A Generalizable Anomaly Detection Method in Dynamic Graphs [7.48376611870513]
GeneralDyG is a method that samples temporal ego-graphs and sequentially extracts structural and temporal features. Our proposed GeneralDyG significantly outperforms state-of-the-art methods on four real-world datasets.
arXiv Detail & Related papers (2024-12-21T02:38:48Z) - Object Style Diffusion for Generalized Object Detection in Urban Scene [69.04189353993907]
We introduce a novel single-domain object detection generalization method, named GoDiff. By integrating pseudo-target domain data with source domain data, we diversify the training dataset. Experimental results demonstrate that our method not only enhances the generalization ability of existing detectors but also functions as a plug-and-play enhancement for other single-domain generalization methods.
arXiv Detail & Related papers (2024-12-18T13:03:00Z) - GlucoBench: Curated List of Continuous Glucose Monitoring Datasets with Prediction Benchmarks [0.12564343689544843]
Continuous glucose monitors (CGM) are small medical devices that measure blood glucose levels at regular intervals.
Forecasting of glucose trajectories based on CGM data holds the potential to substantially improve diabetes management.
arXiv Detail & Related papers (2024-10-08T08:01:09Z) - DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs [59.434893231950205]
Dynamic graph learning aims to uncover evolutionary laws in real-world systems.
We propose DyG-Mamba, a new continuous state space model for dynamic graph learning.
We show that DyG-Mamba achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2024-08-13T15:21:46Z) - Sequential Attention Source Identification Based on Feature Representation [88.05527934953311]
This paper proposes a sequence-to-sequence based localization framework called Temporal-sequence based Graph Attention Source Identification (TGASI) based on an inductive learning idea.
Notably, the inductive learning idea ensures that TGASI can detect sources in new scenarios without requiring additional prior knowledge.
arXiv Detail & Related papers (2023-06-28T03:00:28Z) - Sparsity in Continuous-Depth Neural Networks [2.969794498016257]
We study the influence of weight and feature sparsity on forecasting and on identifying the underlying dynamical laws.
We curate real-world datasets consisting of human motion capture and human hematopoiesis single-cell RNA-seq data.
arXiv Detail & Related papers (2022-10-26T12:48:12Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.