Guiding IoT-Based Healthcare Alert Systems with Large Language Models
- URL: http://arxiv.org/abs/2408.13071v1
- Date: Fri, 23 Aug 2024 13:55:36 GMT
- Title: Guiding IoT-Based Healthcare Alert Systems with Large Language Models
- Authors: Yulan Gao, Ziqiang Ye, Ming Xiao, Yue Xiao, Dong In Kim
- Abstract summary: Healthcare alert systems (HAS) are undergoing rapid evolution, propelled by advancements in artificial intelligence (AI), Internet of Things (IoT) technologies, and increasing health consciousness.
Despite significant progress, a fundamental challenge remains: balancing the accuracy of personalized health alerts with stringent privacy protection in HAS environments constrained by resources.
We introduce a unified framework, LLM-HAS, which incorporates Large Language Models (LLMs) into HAS to significantly boost accuracy, ensure user privacy, and enhance personalized health services.
- Score: 22.54714587190204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Healthcare alert systems (HAS) are undergoing rapid evolution, propelled by advancements in artificial intelligence (AI), Internet of Things (IoT) technologies, and increasing health consciousness. Despite significant progress, a fundamental challenge remains: balancing the accuracy of personalized health alerts with stringent privacy protection in resource-constrained HAS environments. To address this issue, we introduce a unified framework, LLM-HAS, which incorporates Large Language Models (LLMs) into HAS to significantly boost accuracy, ensure user privacy, and enhance personalized health services, while also improving the subjective quality of experience (QoE) for users. Our framework leverages a Mixture of Experts (MoE) approach, augmented with an LLM, to analyze users' personalized preferences and potential health risks from additional textual job descriptions. This analysis guides the selection of specialized Deep Deterministic Policy Gradient (DDPG) experts, tasked with making precise health alerts. Moreover, LLM-HAS can process conversational user feedback, which not only allows fine-tuning of the DDPG experts but also deepens user engagement, thereby enhancing both the accuracy and personalization of health management strategies. Simulation results validate the effectiveness of the LLM-HAS framework, highlighting its potential as a groundbreaking approach for employing generative AI (GAI) to provide highly accurate and reliable alerts.
Related papers
- Leveraging the Power of Ensemble Learning for Secure Low Altitude Economy [64.39232788946173]
Low Altitude Economy (LAE) holds immense promise for enhancing societal well-being and driving economic growth.
This paper investigates ensemble learning for secure LAE, covering research focuses, solutions, and a case study.
arXiv Detail & Related papers (2026-02-07T23:15:58Z)
- SoK: Privacy-aware LLM in Healthcare: Threat Model, Privacy Techniques, Challenges and Recommendations [0.6533091401094101]
Large Language Models (LLMs) are increasingly adopted in healthcare to support clinical decision-making and enhance patient care.
This work examines the evolving threat landscape across the three core LLM phases: Data preprocessing, Fine-tuning, and Inference within realistic healthcare settings.
We present a detailed threat model that characterizes adversaries, capabilities, and attack surfaces at each phase, and we systematize how existing privacy-preserving techniques (PPTs) attempt to mitigate these vulnerabilities.
arXiv Detail & Related papers (2026-01-15T02:28:57Z)
- Enhancing the Medical Context-Awareness Ability of LLMs via Multifaceted Self-Refinement Learning [49.559151128219725]
Large language models (LLMs) have shown great promise in the medical domain, achieving strong performance on several benchmarks.
However, they continue to underperform in real-world medical scenarios, which often demand stronger context-awareness.
We propose Multifaceted Self-Refinement (MuSeR), a data-driven approach that enhances LLMs' context-awareness along three key facets.
arXiv Detail & Related papers (2025-11-13T08:13:23Z)
- A Principle-based Framework for the Development and Evaluation of Large Language Models for Health and Wellness [7.135227672247848]
This paper describes the development of the Fitbit Insights explorer, a large language model (LLM)-powered system designed to help users interpret their personal health data.
It introduces the safety, helpfulness, accuracy, relevance, and personalization (SHARP) principle-based framework.
It integrates comprehensive evaluation techniques including human evaluation by generalists and clinical specialists, autorater assessments, and adversarial testing.
arXiv Detail & Related papers (2025-10-23T06:54:33Z)
- Beyond Reactive Safety: Risk-Aware LLM Alignment via Long-Horizon Simulation [69.63626052852153]
We propose a proof-of-concept framework that projects how model-generated advice could propagate through societal systems.
We also introduce a dataset of 100 indirect harm scenarios, testing models' ability to foresee adverse, non-obvious outcomes from seemingly harmless user prompts.
arXiv Detail & Related papers (2025-06-26T02:28:58Z)
- Large Language Models for Cancer Communication: Evaluating Linguistic Quality, Safety, and Accessibility in Generative AI [0.40744588528519854]
Effective communication about breast and cervical cancers remains a persistent health challenge.
This study evaluates the capabilities and limitations of Large Language Models (LLMs) in generating accurate, safe, and accessible cancer-related information.
arXiv Detail & Related papers (2025-05-15T16:23:21Z)
- Towards Artificial General or Personalized Intelligence? A Survey on Foundation Models for Personalized Federated Intelligence [59.498447610998525]
The rise of large language models (LLMs) has reshaped the artificial intelligence landscape.
This paper focuses on adapting these powerful models to meet the specific needs and preferences of users while maintaining privacy and efficiency.
We propose personalized federated intelligence (PFI), which integrates the privacy-preserving advantages of federated learning with the zero-shot generalization capabilities of FMs.
arXiv Detail & Related papers (2025-05-11T08:57:53Z)
- Leveraging LLMs for Mental Health: Detection and Recommendations from Social Discussions [0.5249805590164902]
We propose a comprehensive framework that leverages Natural Language Processing (NLP) and Generative AI techniques to identify and assess mental health disorders.
We use rule-based labeling methods as well as advanced pre-trained NLP models to extract nuanced semantic features from the data.
We fine-tune domain-adapted and generic pre-trained NLP models based on predictions from specialized Large Language Models (LLMs) to improve classification accuracy.
arXiv Detail & Related papers (2025-03-03T11:48:01Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- SouLLMate: An Application Enhancing Diverse Mental Health Support with Adaptive LLMs, Prompt Engineering, and RAG Techniques [9.146311285410631]
Mental health issues significantly impact individuals' daily lives, yet many do not receive the help they need even with available online resources.
This study aims to provide diverse, accessible, stigma-free, personalized, and real-time mental health support through cutting-edge AI technologies.
arXiv Detail & Related papers (2024-10-17T22:04:32Z)
- Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models [94.39278422567955]
Fine-tuning large language models (LLMs) on human preferences has proven successful in enhancing their capabilities.
However, ensuring the safety of LLMs during fine-tuning remains a critical concern.
We propose a supervised learning framework called Bi-Factorial Preference Optimization (BFPO) to address this issue.
arXiv Detail & Related papers (2024-08-27T17:31:21Z)
- IntelliCare: Improving Healthcare Analysis with Variance-Controlled Patient-Level Knowledge from Large Language Models [14.709233593021281]
The integration of external knowledge from Large Language Models (LLMs) presents a promising avenue for improving healthcare predictions.
We propose IntelliCare, a novel framework that leverages LLMs to provide high-quality patient-level external knowledge.
IntelliCare identifies patient cohorts and employs task-relevant statistical information to augment LLM understanding and generation.
arXiv Detail & Related papers (2024-08-23T13:56:00Z)
- Large Language Model as a Catalyst: A Paradigm Shift in Base Station Siting Optimization [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
Our proposed framework incorporates retrieval-augmented generation (RAG) to enhance the system's ability to acquire domain-specific knowledge and generate solutions.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- Graph-Augmented LLMs for Personalized Health Insights: A Case Study in Sleep Analysis [2.303486126296845]
Large Language Models (LLMs) have shown promise in delivering interactive health advice.
Traditional methods like Retrieval-Augmented Generation (RAG) and fine-tuning often fail to fully utilize the complex, multi-dimensional, and temporally relevant data.
This paper introduces a graph-augmented LLM framework designed to significantly enhance the personalization and clarity of health insights.
arXiv Detail & Related papers (2024-06-24T01:22:54Z)
- Trustworthy and Practical AI for Healthcare: A Guided Deferral System with Large Language Models [1.2281181385434294]
Large language models (LLMs) offer a valuable technology for various applications in healthcare.
Their tendency to hallucinate and the existing reliance on proprietary systems pose challenges in settings that involve critical decision-making.
This paper presents a novel HAIC guided deferral system that can simultaneously parse medical reports for disorder classification, and defer uncertain predictions with intelligent guidance to humans.
arXiv Detail & Related papers (2024-06-11T12:41:54Z)
- Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Healthcare Professionals [1.6574413179773761]
This paper explores the evolving relationship between clinician trust in LLMs and the impact of data sources from predominantly human-generated to AI-generated content.
One of the primary concerns identified is the potential feedback loop that arises as LLMs become more reliant on their outputs for learning.
A key takeaway from our investigation is the critical role of user expertise and the necessity for a discerning approach to trusting and validating LLM outputs.
arXiv Detail & Related papers (2024-03-15T04:04:45Z)
- Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z)
- Generative AI-Driven Human Digital Twin in IoT-Healthcare: A Comprehensive Survey [53.691704671844406]
The Internet of things (IoT) can significantly enhance the quality of human life, specifically in healthcare.
The human digital twin (HDT) is proposed as an innovative paradigm that can comprehensively characterize the replication of the individual human body.
HDT is envisioned to empower IoT-healthcare beyond the application of healthcare monitoring by acting as a versatile and vivid human digital testbed.
Generative artificial intelligence (GAI) may offer a promising solution because it can leverage advanced AI algorithms to automatically create, manipulate, and modify valuable and diverse data.
arXiv Detail & Related papers (2024-01-22T03:17:41Z)
- Benefits and Harms of Large Language Models in Digital Mental Health [40.02859683420844]
Large language models (LLMs) show promise in leading digital mental health to uncharted territory.
This article presents contemporary perspectives on the opportunities and risks posed by LLMs in the design, development, and implementation of digital mental health tools.
arXiv Detail & Related papers (2023-11-07T14:11:10Z)
- Blockchain-empowered Federated Learning for Healthcare Metaverses: User-centric Incentive Mechanism with Optimal Data Freshness [66.3982155172418]
We first design a user-centric privacy-preserving framework based on decentralized Federated Learning (FL) for healthcare metaverses.
We then utilize Age of Information (AoI) as an effective data-freshness metric and propose an AoI-based contract theory model under Prospect Theory (PT) to motivate sensing data sharing.
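The Age of Information metric used above has a simple form: at query time t, AoI(t) = t − u(t), where u(t) is the generation timestamp of the freshest update received so far, so staleness grows linearly until a newer update arrives. A minimal sketch, with illustrative timestamps that are not taken from the paper:

```python
class AoITracker:
    """Tracks Age of Information (AoI) for a single data source."""
    def __init__(self):
        self.last_generated = None  # generation time u(t) of the freshest update

    def receive(self, generated_at):
        # Keep only the freshest generation timestamp seen so far;
        # an out-of-order stale update must not reduce freshness.
        if self.last_generated is None or generated_at > self.last_generated:
            self.last_generated = generated_at

    def age(self, now):
        # AoI(t) = t - u(t); infinite before the first update arrives.
        if self.last_generated is None:
            return float("inf")
        return now - self.last_generated

tracker = AoITracker()
tracker.receive(generated_at=2.0)   # update generated at t=2 arrives
print(tracker.age(now=5.0))         # prints: 3.0 (data is three time units old)
tracker.receive(generated_at=4.5)   # a fresher update arrives
print(tracker.age(now=5.0))         # prints: 0.5
```

The contract-theoretic incentive design in the paper rewards participants for keeping this quantity small; the sketch shows only the metric itself.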
arXiv Detail & Related papers (2023-07-29T12:54:03Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.