Efficient and Personalized Mobile Health Event Prediction via Small Language Models
- URL: http://arxiv.org/abs/2409.18987v1
- Date: Tue, 17 Sep 2024 01:57:57 GMT
- Title: Efficient and Personalized Mobile Health Event Prediction via Small Language Models
- Authors: Xin Wang, Ting Dang, Vassilis Kostakos, Hong Jia
- Abstract summary: Small Language Models (SLMs) are potential candidates to solve privacy and computational issues.
This paper examines the capability of SLMs to accurately analyze health data, such as steps, calories, sleep minutes, and other vital statistics.
Our results indicate that SLMs could potentially be deployed on wearable or mobile devices for real-time health monitoring.
- Score: 14.032049217103024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Healthcare monitoring is crucial for early detection, timely intervention, and the ongoing management of health conditions, ultimately improving individuals' quality of life. Recent research shows that Large Language Models (LLMs) have demonstrated impressive performance in supporting healthcare tasks. However, existing LLM-based healthcare solutions typically rely on cloud-based systems, which raise privacy concerns and increase the risk of personal information leakage. As a result, there is growing interest in running these models locally on devices like mobile phones and wearables to protect users' privacy. Small Language Models (SLMs) are potential candidates to solve privacy and computational issues, as they are more efficient and better suited for local deployment. However, the performance of SLMs in healthcare domains has not yet been investigated. This paper examines the capability of SLMs to accurately analyze health data, such as steps, calories, sleep minutes, and other vital statistics, to assess an individual's health status. Our results show that TinyLlama, with 1.1 billion parameters, 4.31 GB of memory usage, and 0.48 s latency, achieves the best performance compared with four other state-of-the-art (SOTA) SLMs across various healthcare applications. Our results indicate that SLMs could potentially be deployed on wearable or mobile devices for real-time health monitoring, providing a practical solution for efficient and privacy-preserving healthcare.
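The abstract describes feeding daily wearable statistics (steps, calories, sleep minutes) to an on-device SLM for a health-status assessment. A minimal sketch of how such metrics might be rendered into a prompt is below; the template wording and field names are illustrative assumptions, not the paper's actual prompt.

```python
# Illustrative sketch (not the paper's implementation): format daily
# wearable metrics as a prompt an on-device SLM (e.g. TinyLlama) could score.


def build_health_prompt(metrics: dict) -> str:
    """Render daily health statistics as a prompt for a small language model."""
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return (
        "You are a health assistant. Based on the following daily statistics, "
        "assess the user's health status in one short sentence.\n"
        + "\n".join(lines)
    )


# One hypothetical day of wearable data.
day = {
    "steps": 8421,
    "calories": 2130,
    "sleep minutes": 412,
    "resting heart rate (bpm)": 61,
}
prompt = build_health_prompt(day)
print(prompt)
```

In a real deployment the prompt would be passed to a locally hosted model, keeping the raw metrics on the device.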
Related papers
- Medicine on the Edge: Comparative Performance Analysis of On-Device LLMs for Clinical Reasoning [1.6010529993238123]
We benchmark publicly available on-device Large Language Models (LLM) using the AMEGA dataset.
Our results indicate that compact general-purpose models like Phi-3 Mini achieve a strong balance between speed and accuracy.
We emphasize the need for more efficient inference and models tailored to real-world clinical reasoning.
arXiv Detail & Related papers (2025-02-13T04:35:55Z) - Question Answering on Patient Medical Records with Private Fine-Tuned LLMs [1.8524621910043437]
Large Language Models (LLMs) enable semantic question answering (QA) over medical data.
Ensuring privacy and compliance requires edge and private deployments of LLMs.
We evaluate privately hosted, fine-tuned LLMs against benchmark models such as GPT-4 and GPT-4o.
arXiv Detail & Related papers (2025-01-23T14:13:56Z) - Benchmarking LLMs and SLMs for patient reported outcomes [0.0]
This study benchmarks several SLMs against LLMs for summarizing patient-reported Q&A forms in the context of radiotherapy.
Using various metrics, we evaluate their precision and reliability.
The findings highlight both the promise and limitations of SLMs for high-stakes medical tasks, fostering more efficient and privacy-preserving AI-driven healthcare solutions.
arXiv Detail & Related papers (2024-12-20T19:01:25Z) - Harnessing the Digital Revolution: A Comprehensive Review of mHealth Applications for Remote Monitoring in Transforming Healthcare Delivery [1.03590082373586]
The review highlights various types of mHealth applications used for remote monitoring, such as telemedicine platforms, mobile apps for chronic disease management, and wearable devices.
The benefits of these applications include improved patient outcomes, increased access to healthcare, reduced healthcare costs, and addressing healthcare disparities.
However, challenges and limitations, such as privacy and security concerns, lack of technical infrastructure, regulatory issues, data accuracy, user adherence, and the digital divide, need to be addressed.
arXiv Detail & Related papers (2024-08-26T11:32:43Z) - STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering [58.79671189792399]
STLLaVA-Med is designed to train a policy model capable of auto-generating medical visual instruction data.
We validate the efficacy and data efficiency of STLLaVA-Med across three major medical Visual Question Answering (VQA) benchmarks.
arXiv Detail & Related papers (2024-06-28T15:01:23Z) - Deep Reinforcement Learning Empowered Activity-Aware Dynamic Health Monitoring Systems [69.41229290253605]
Existing monitoring approaches were designed on the premise that medical devices track several health metrics concurrently.
This means that they report all relevant health values within that scope, which can result in excess resource use and the gathering of extraneous data.
We propose a Dynamic Activity-Aware Health Monitoring strategy (DActAHM) to strike a balance between optimal monitoring performance and cost efficiency.
arXiv Detail & Related papers (2024-01-19T16:26:35Z) - Enhancing Small Medical Learners with Privacy-preserving Contextual Prompting [24.201549275369487]
We present a method that harnesses large language models' medical expertise to boost SLM performance in medical tasks under privacy-restricted scenarios.
Specifically, we mitigate patient privacy issues by extracting keywords from medical data and prompting the LLM to generate a medical knowledge-intensive context.
Our method significantly enhances performance in both few-shot and full training settings across three medical knowledge-intensive tasks.
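The summary above describes a two-step pipeline: extract keywords from medical data so raw text never leaves the device, then prompt an LLM for knowledge-intensive context. A hedged sketch of that idea follows; the stopword heuristic and prompt wording are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of the keyword-based privacy step: only extracted keywords,
# never the original clinical sentence, are placed in the outgoing prompt.
import re

STOPWORDS = {"the", "a", "an", "and", "of", "with", "was", "is", "for", "to", "in"}


def extract_keywords(note: str, top_k: int = 5) -> list:
    """Keep salient terms in order of appearance; drop stopwords and duplicates."""
    words = re.findall(r"[a-zA-Z][a-zA-Z-]+", note.lower())
    seen, keywords = set(), []
    for w in words:
        if w not in STOPWORDS and w not in seen:
            seen.add(w)
            keywords.append(w)
    return keywords[:top_k]


def context_request(keywords: list) -> str:
    """Build the prompt sent to a cloud LLM, containing only the keywords."""
    return "Provide concise medical background knowledge about: " + ", ".join(keywords)


note = "Patient presents with dyspnea and elevated troponin after exertion"
print(context_request(extract_keywords(note)))
```

The LLM's generated context would then be prepended to the local SLM's input, boosting its answers without exposing the full record.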
arXiv Detail & Related papers (2023-05-22T05:14:38Z) - A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications [62.60524178293434]
The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps.
Users' digital literacy has the strongest impact on their willingness to use mHealth apps, followed by their online habit of sharing personal information.
Users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect.
arXiv Detail & Related papers (2023-05-10T08:11:21Z) - Large Language Models for Healthcare Data Augmentation: An Example on Patient-Trial Matching [49.78442796596806]
We propose an innovative privacy-aware data augmentation approach for patient-trial matching (LLM-PTM).
Our experiments demonstrate a 7.32% average improvement in performance using the proposed LLM-PTM method, and the generalizability to new data is improved by 12.12%.
arXiv Detail & Related papers (2023-03-24T03:14:00Z) - SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content as well as contextual-information to assess the severity of the user's health state.
The diverse NLU views demonstrate effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.