A State-of-the-Art Review of Computational Models for Analyzing Longitudinal Wearable Sensor Data in Healthcare
- URL: http://arxiv.org/abs/2407.21665v1
- Date: Wed, 31 Jul 2024 15:08:15 GMT
- Title: A State-of-the-Art Review of Computational Models for Analyzing Longitudinal Wearable Sensor Data in Healthcare
- Authors: Paula Lago
- Abstract summary: Long-term tracking, defined on the timescale of months or years, can provide insight into patterns and changes that serve as indicators of health changes.
These insights can make medicine and healthcare more predictive, preventive, personalized, and participative (the 4P's).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearable devices are increasingly used as tools for biomedical research, as the continuous stream of behavioral and physiological data they collect can provide insights about our health in everyday contexts. Long-term tracking, defined on the timescale of months or years, can provide insight into patterns and changes that serve as indicators of health changes. These insights can make medicine and healthcare more predictive, preventive, personalized, and participative (the 4P's). However, the challenges in modeling, understanding, and processing longitudinal data are a significant barrier to their adoption in research studies and clinical settings. In this paper, we review and discuss three models used to make sense of longitudinal data: routines, rhythms, and stability metrics. We present the challenges associated with the processing and analysis of longitudinal wearable sensor data, with a special focus on how to handle the different temporal dynamics at various granularities. We then discuss current limitations and identify directions for future work. This review is essential to the advancement of computational modeling and analysis of longitudinal sensor data for pervasive healthcare.
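One of the three model families the review names is stability metrics. As a concrete illustration, the sketch below computes Interdaily Stability (IS), a standard actigraphy metric that compares the variance of the average 24-hour activity profile to the overall variance; values near 1 indicate a strongly repeated daily rhythm. This is a minimal illustrative implementation of one well-known metric, not code from the paper itself.

```python
from statistics import mean

def interdaily_stability(hourly_counts):
    """Interdaily Stability (IS) for hourly activity counts spanning whole days.

    IS = [N * sum_h (mean_h - grand_mean)^2] / [24 * sum_i (x_i - grand_mean)^2],
    where mean_h is the average activity at hour-of-day h across all days.
    Values near 1 indicate a strongly repeated daily rhythm.
    """
    p = 24
    n = len(hourly_counts)
    assert n % p == 0, "expect whole days of hourly data"
    grand = mean(hourly_counts)
    # average activity for each hour-of-day across all days
    profile = [mean(hourly_counts[h::p]) for h in range(p)]
    num = n * sum((m - grand) ** 2 for m in profile)
    den = p * sum((x - grand) ** 2 for x in hourly_counts)
    return num / den if den else 0.0

# a perfectly repeated day yields IS == 1
day = [0] * 8 + [100] * 12 + [0] * 4
print(round(interdaily_stability(day * 7), 3))  # 1.0
```

Real wearable streams first need resampling to an hourly grid and handling of non-wear gaps, which is where the paper's discussion of temporal granularity becomes relevant.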
Related papers
- A Survey of Few-Shot Learning for Biomedical Time Series [3.845248204742053]
Data-driven models have tremendous potential to assist clinical diagnosis and improve patient care.
An emerging approach to overcome the scarcity of labeled data is to augment AI methods with human-like capabilities to learn new tasks with limited examples, called few-shot learning.
This survey provides a comprehensive review and comparison of few-shot learning methods for biomedical time series applications.
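A common baseline in few-shot learning is nearest-centroid classification over pre-computed embeddings (the core idea behind prototypical networks). The toy sketch below illustrates that idea only; the class names and embeddings are hypothetical, and the survey covers many other methods.

```python
from statistics import mean

def prototypes(support):
    """Class prototype = per-dimension mean of that class's support embeddings."""
    return {c: [mean(dim) for dim in zip(*vecs)] for c, vecs in support.items()}

def classify(query, protos):
    """Assign the query to the nearest prototype (squared Euclidean distance)."""
    return min(protos,
               key=lambda c: sum((q - p) ** 2 for q, p in zip(query, protos[c])))

# hypothetical 2-D embeddings of labeled biomedical time-series windows
support = {"afib": [[1.0, 0.0], [0.8, 0.2]],
           "normal": [[0.0, 1.0], [0.2, 0.8]]}
print(classify([0.9, 0.1], prototypes(support)))  # afib
```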
arXiv Detail & Related papers (2024-05-03T21:22:27Z)
- Recent Advances in Predictive Modeling with Electronic Health Records [71.19967863320647]
Utilizing EHR data for predictive modeling presents several challenges due to its unique characteristics.
Deep learning has demonstrated its superiority in various applications, including healthcare.
arXiv Detail & Related papers (2024-02-02T00:31:01Z)
- Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven clinical decision support.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z)
- A review on longitudinal data analysis with random forest in precision medicine [0.0]
Large scale omics data are useful for patient characterization, but often their measurements change over time, leading to longitudinal data.
Random forest is one of the state-of-the-art machine learning methods for building prediction models.
We review extensions of the standard random forest method for the purpose of longitudinal data analysis.
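Before the longitudinal extensions this paper reviews, a common baseline is to collapse each subject's repeated measurements into flat per-subject features so a standard random forest can be applied. The sketch below is that baseline only (the function name and feature choice are illustrative), not one of the reviewed extensions.

```python
def summarize_trajectory(times, values):
    """Collapse one subject's repeated measurements into flat features
    (mean, least-squares slope, last value) for a standard cross-sectional
    model; a common workaround before true longitudinal extensions."""
    n = len(times)
    t_bar = sum(times) / n
    v_bar = sum(values) / n
    num = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, values))
    den = sum((t - t_bar) ** 2 for t in times)
    slope = num / den if den else 0.0
    return {"mean": v_bar, "slope": slope, "last": values[-1]}

# a steadily rising biomarker: mean 2.0, slope 1.0 per time unit
print(summarize_trajectory([0, 1, 2], [1.0, 2.0, 3.0]))
```

This flattening discards within-subject correlation structure, which is exactly the limitation the reviewed random forest extensions aim to address.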
arXiv Detail & Related papers (2022-08-08T13:10:47Z)
- Time Series Prediction using Deep Learning Methods in Healthcare [0.0]
Traditional machine learning methods face two main challenges in dealing with healthcare predictive analytics tasks.
The high-dimensional nature of healthcare data needs labor-intensive processes to select an appropriate set of features for each new task.
Recent deep learning methods have shown promising performance for various healthcare prediction tasks.
arXiv Detail & Related papers (2021-08-30T18:14:27Z)
- Interpretable machine learning for high-dimensional trajectories of aging health [0.0]
We have built a computational model for individual aging trajectories of health and survival.
It contains physical, functional, and biological variables, and is conditioned on demographic, lifestyle, and medical background information.
Our model is scalable to large longitudinal data sets and infers an interpretable network of directed interactions between the health variables.
arXiv Detail & Related papers (2021-05-07T17:42:15Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- Trajectories, bifurcations and pseudotime in large clinical datasets: applications to myocardial infarction and diabetes data [94.37521840642141]
We suggest a semi-supervised methodology for the analysis of large clinical datasets, characterized by mixed data types and missing values.
The methodology is based on application of elastic principal graphs which can address simultaneously the tasks of dimensionality reduction, data visualization, clustering, feature selection and quantifying the geodesic distances (pseudotime) in partially ordered sequences of observations.
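The pseudotime idea — ordering observations by geodesic distance along a graph fitted to the data — can be illustrated with a toy stand-in: a minimum spanning tree over the samples with distances accumulated from a chosen root. This is only an illustration of the concept; the paper's elastic principal graph method fits a more principled graph that also performs dimensionality reduction and clustering.

```python
import math
import heapq

def pseudotime(points, root=0):
    """Toy pseudotime: geodesic distance from a root sample along a
    minimum spanning tree of the points (illustrative stand-in only)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm for the MST
    in_tree = {root}
    edges = [(dist(root, j), root, j) for j in range(n) if j != root]
    heapq.heapify(edges)
    adj = {i: [] for i in range(n)}
    while len(in_tree) < n:
        w, u, v = heapq.heappop(edges)
        if v in in_tree:
            continue
        in_tree.add(v)
        adj[u].append((v, w))
        adj[v].append((u, w))
        for j in range(n):
            if j not in in_tree:
                heapq.heappush(edges, (dist(v, j), v, j))
    # accumulate geodesic distance from the root along the tree
    pt = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in pt:
                pt[v] = pt[u] + w
                stack.append(v)
    return [pt[i] for i in range(n)]

# points along a line: pseudotime recovers their partial order
print(pseudotime([(0, 0), (1, 0), (2, 0), (3, 0)]))  # [0.0, 1.0, 2.0, 3.0]
```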
arXiv Detail & Related papers (2020-07-07T21:04:55Z)
- DeepCoDA: personalized interpretability for compositional health data [58.841559626549376]
Interpretability allows the domain-expert to evaluate the model's relevance and reliability.
In the healthcare setting, interpretable models should implicate relevant biological mechanisms independent of technical factors.
We define personalized interpretability as a measure of sample-specific feature attribution.
arXiv Detail & Related papers (2020-06-02T05:14:22Z)
- Patient Similarity Analysis with Longitudinal Health Data [0.5249805590164901]
Electronic health records contain time-resolved information about medical visits, tests and procedures, as well as outcomes.
By assessing the similarities among these journeys, it is possible to uncover clusters of common disease trajectories with shared health outcomes.
The assignment of patient journeys to specific clusters may in turn serve as the basis for personalized outcome prediction and treatment selection.
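Assessing similarity between patient journeys of different lengths often uses an alignment-based distance. Dynamic time warping (DTW) is one common choice, sketched below as a generic illustration; the paper may use a different similarity measure.

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences,
    allowing journeys of different lengths to be compared by optimal
    alignment (classic O(n*m) dynamic program, no warping window)."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# the repeated value is absorbed by the warping, so the distance is zero
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

A pairwise DTW matrix over journeys can then feed any standard clustering algorithm to recover groups of shared disease trajectories.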
arXiv Detail & Related papers (2020-05-14T07:06:02Z)
- Learning Dynamic and Personalized Comorbidity Networks from Event Data using Deep Diffusion Processes [102.02672176520382]
Comorbid diseases co-occur and progress via complex temporal patterns that vary among individuals.
In electronic health records we can observe the different diseases a patient has, but can only infer the temporal relationships among these comorbid conditions.
We develop deep diffusion processes to model "dynamic comorbidity networks"
arXiv Detail & Related papers (2020-01-08T15:47:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.