Understanding the concerns and choices of public when using large
language models for healthcare
- URL: http://arxiv.org/abs/2401.09090v1
- Date: Wed, 17 Jan 2024 09:51:32 GMT
- Title: Understanding the concerns and choices of public when using large
language models for healthcare
- Authors: Yunpeng Xiao, Kyrie Zhixuan Zhou, Yueqing Liang, Kai Shu
- Abstract summary: Large language models (LLMs) have shown their potential in biomedical fields.
How the public uses them for healthcare purposes such as medical Q&A, self-diagnosis, and daily healthcare information seeking is under-investigated.
- Score: 18.906110107170697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have shown their potential in biomedical fields.
However, how the public uses them for healthcare purposes such as medical Q&A,
self-diagnosis, and daily healthcare information seeking is under-investigated.
In this paper, we adopt a mixed-methods approach, including surveys (N=167) and
interviews (N=17) to investigate how and why the public uses LLMs for
healthcare. LLMs as a healthcare tool have gained popularity, and are often
used in combination with other information channels such as search engines and
online health communities to optimize information quality. LLMs provide more
accurate information and a more convenient interaction/service model compared
to traditional channels. LLMs also do a better job of reducing misinformation,
especially in daily healthcare questions. Respondents found doctors' use of LLMs
for diagnosis less acceptable than their use for auxiliary work such as writing medical records. Based
on the findings, we reflect on the ethical and effective use of LLMs for
healthcare and propose future research directions.
Related papers
- Search Engines, LLMs or Both? Evaluating Information Seeking Strategies for Answering Health Questions [3.8984586307450093]
We compare different web search engines, Large Language Models (LLMs), and retrieval-augmented generation (RAG) approaches.
We observed that the quality of webpages potentially responding to a health question does not decline as we navigate further down the ranked lists.
According to our evaluation, web search engines are less accurate than LLMs in finding correct answers to health questions.
arXiv Detail & Related papers (2024-07-17T10:40:39Z)
- Towards Training A Chinese Large Language Model for Anesthesiology [37.44529879903248]
We introduce Hypnos, a Chinese Anesthesiology model built upon existing medical large language models, e.g., Llama.
Hypnos' contributions have three aspects: 1) the data acquired from current LLMs, e.g., via Self-Instruct, likely includes inaccuracies.
Hypnos employs a general-to-specific training strategy that starts by fine-tuning LLMs using the general medicine data and subsequently improving the fine-tuned LLMs using data specifically from Anesthesiology.
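The general-to-specific strategy can be illustrated with a toy sketch (not Hypnos' actual training code): a linear model is first fit on plentiful "general" data and then further fine-tuned on a small "specialty" set. All data here is synthetic and the two-dimensional model is purely illustrative.

```python
import numpy as np

def finetune(w, X, y, lr=0.1, steps=200):
    """One fine-tuning stage: gradient descent on squared error for y ≈ X @ w."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true_general = np.array([1.0, 2.0])
w_true_specialty = np.array([1.2, 2.5])   # specialty task is a shifted variant

X_gen = rng.normal(size=(200, 2))
y_gen = X_gen @ w_true_general
X_spec = rng.normal(size=(40, 2))          # much less specialty data
y_spec = X_spec @ w_true_specialty

w = np.zeros(2)
w = finetune(w, X_gen, y_gen)    # stage 1: general medicine data
w = finetune(w, X_spec, y_spec)  # stage 2: anesthesiology-specific data

spec_error = np.mean((X_spec @ w - y_spec) ** 2)
```

The point of the ordering is that stage 2 starts from general-domain weights rather than from scratch, which matters when the specialty dataset is small.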
arXiv Detail & Related papers (2024-03-05T07:53:49Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model which has far fewer parameters, and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
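A hypothetical sketch of this routing idea (not SlimPLM's actual implementation): a small proxy model produces a heuristic answer with a confidence score, and retrieval is triggered only when the proxy seems unsure. The proxy below is a stub lookup table and the threshold is made up.

```python
def proxy_answer(question: str) -> tuple[str, float]:
    """Stand-in for a slim proxy LM: returns (heuristic answer, confidence)."""
    known = {"what is aspirin used for": ("pain relief", 0.9)}
    return known.get(question.lower(), ("unsure", 0.2))

def answer_with_optional_retrieval(question: str, threshold: float = 0.5) -> dict:
    heuristic, conf = proxy_answer(question)
    if conf >= threshold:
        # Proxy already "knows": forward the question to the big LLM directly.
        return {"route": "llm_only", "heuristic": heuristic}
    # Proxy is unsure: its heuristic answer would be used to formulate
    # retrieval queries for the missing knowledge before calling the big LLM.
    return {"route": "retrieve_then_llm", "heuristic": heuristic}

r1 = answer_with_optional_retrieval("What is aspirin used for")
r2 = answer_with_optional_retrieval("What is the newest anticoagulant")
```

Routing on a cheap proxy avoids paying retrieval cost for questions the large model can already answer.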
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Large Language Model Distilling Medication Recommendation Model [61.89754499292561]
We harness the powerful semantic comprehension and input-agnostic characteristics of Large Language Models (LLMs)
Our research aims to transform existing medication recommendation methodologies using LLMs.
To mitigate the cost of deploying the full LLM, we have developed a feature-level knowledge distillation technique, which transfers the LLM's proficiency to a more compact model.
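Feature-level distillation can be sketched with a toy example (not the paper's actual models): a small "student" linear map is trained so its features match those of a fixed "teacher" map on the same inputs, via MSE on the feature vectors rather than on final labels. In practice the student would be smaller than the teacher; here both are simple linear maps for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
teacher_W = rng.normal(size=(8, 4))      # teacher: 8-dim input -> 4-dim features
X = rng.normal(size=(300, 8))
teacher_feats = X @ teacher_W            # features the student should imitate

student_W = np.zeros((8, 4))
lr = 0.05
for _ in range(400):
    diff = X @ student_W - teacher_feats          # feature-level mismatch
    grad = 2 * X.T @ diff / len(X)                # gradient of the MSE objective
    student_W -= lr * grad

distill_loss = np.mean((X @ student_W - teacher_feats) ** 2)
```

Matching intermediate features gives the student a denser training signal than matching outputs alone, which is the usual motivation for feature-level (rather than logit-level) distillation.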
arXiv Detail & Related papers (2024-02-05T08:25:22Z)
- ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences [51.66185471742271]
We propose ChiMed-GPT, a benchmark LLM designed explicitly for the Chinese medical domain.
ChiMed-GPT undergoes a comprehensive training regime with pre-training, SFT, and RLHF.
We analyze possible biases by prompting ChiMed-GPT to complete attitude scales regarding discrimination against patients.
arXiv Detail & Related papers (2023-11-10T12:25:32Z)
- A Survey of Large Language Models in Medicine: Progress, Application, and Challenge [85.09998659355038]
Large language models (LLMs) have received substantial attention due to their capabilities for understanding and generating human language.
This review aims to provide a detailed overview of the development and deployment of LLMs in medicine.
arXiv Detail & Related papers (2023-11-09T02:55:58Z)
- A Survey of Large Language Models for Healthcare: from Data, Technology, and Applications to Accountability and Ethics [32.10937977924507]
The utilization of large language models (LLMs) in the Healthcare domain has generated both excitement and concern.
This survey outlines the capabilities of the currently developed LLMs for Healthcare and explicates their development process.
arXiv Detail & Related papers (2023-10-09T13:15:23Z)
- Augmenting Black-box LLMs with Medical Textbooks for Clinical Question Answering [54.13933019557655]
We present a system called LLMs Augmented with Medical Textbooks (LLM-AMT)
LLM-AMT integrates authoritative medical textbooks into the LLMs' framework using plug-and-play modules.
We found that medical textbooks, used as a retrieval corpus, are a more effective knowledge base than Wikipedia in the medical domain.
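The plug-and-play retrieval idea can be sketched as follows (a hypothetical illustration, not LLM-AMT's actual modules): pick the textbook passage with the highest word overlap with the question and prepend it to the prompt sent to a black-box LLM. The corpus and scoring are deliberately minimal.

```python
# Illustrative two-passage "textbook" corpus.
TEXTBOOK = [
    "Aspirin irreversibly inhibits cyclooxygenase and reduces platelet aggregation.",
    "Insulin lowers blood glucose by promoting cellular uptake of glucose.",
]

def retrieve(question: str) -> str:
    """Return the passage with the highest word overlap with the question."""
    q_words = set(question.lower().split())
    def overlap(passage: str) -> int:
        return len(q_words & set(passage.lower().rstrip(".").split()))
    return max(TEXTBOOK, key=overlap)

def build_prompt(question: str) -> str:
    """Prepend the retrieved passage as context for a black-box LLM."""
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("How does insulin affect blood glucose?")
```

A real system would use dense embeddings and a chunked textbook index rather than word overlap, but the plug-and-play structure, retrieve then prompt, is the same.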
arXiv Detail & Related papers (2023-09-05T13:39:38Z)
- MedAlign: A Clinician-Generated Dataset for Instruction Following with Electronic Medical Records [60.35217378132709]
Large language models (LLMs) can follow natural language instructions with human-level fluency.
However, evaluating LLMs on realistic text generation tasks for healthcare remains challenging.
We introduce MedAlign, a benchmark dataset of 983 natural language instructions for EHR data.
arXiv Detail & Related papers (2023-08-27T12:24:39Z)
- Considerations for health care institutions training large language models on electronic health records [7.048517095805301]
Large language models (LLMs) like ChatGPT have excited scientists across fields.
In medicine, one source of excitement is the potential applications of LLMs trained on electronic health record (EHR) data.
But there are tough questions we must first answer if health care institutions are interested in having LLMs trained on their own data.
arXiv Detail & Related papers (2023-08-24T00:09:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.