MedSEBA: Synthesizing Evidence-Based Answers Grounded in Evolving Medical Literature
- URL: http://arxiv.org/abs/2509.00414v1
- Date: Sat, 30 Aug 2025 08:43:09 GMT
- Title: MedSEBA: Synthesizing Evidence-Based Answers Grounded in Evolving Medical Literature
- Authors: Juraj Vladika, Florian Matthes
- Abstract summary: We introduce MedSEBA, an interactive AI-powered system for synthesizing evidence-based answers to medical questions. The answers consist of key points and arguments, which can be traced back to the respective studies. Our user study revealed that medical experts and lay users find the system usable and helpful.
- Score: 25.37522195584869
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the digital age, people often turn to the Internet in search of medical advice and recommendations. With the increasing volume of online content, it has become difficult to distinguish reliable sources from misleading information. Similarly, millions of medical studies are published every year, making it challenging for researchers to keep track of the latest scientific findings. These evolving studies can reach differing conclusions, which is not reflected in traditional search tools. To address these challenges, we introduce MedSEBA, an interactive AI-powered system for synthesizing evidence-based answers to medical questions. It utilizes the power of Large Language Models to generate coherent and expressive answers, but grounds them in trustworthy medical studies dynamically retrieved from the research database PubMed. The answers consist of key points and arguments, which can be traced back to respective studies. Notably, the platform also provides an overview of the extent to which the most relevant studies support or refute the given medical claim, and a visualization of how the research consensus evolved through time. Our user study revealed that medical experts and lay users find the system usable and helpful, and the provided answers trustworthy and informative. This makes the system well-suited for both everyday health questions and advanced research insights.
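The core loop the abstract describes (retrieve studies for a claim, label each as supporting or refuting it, report the overall support ratio, and chart how the research consensus evolved through time) can be sketched minimally. The `Study` structure, the stance labels, and the function names below are illustrative assumptions, not the authors' actual implementation:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Study:
    pmid: str    # PubMed identifier
    year: int    # publication year
    stance: str  # "support", "refute", or "neutral" toward the claim

def support_ratio(studies):
    """Fraction of stance-taking studies that support the claim
    (None if no retrieved study takes a stance)."""
    counts = Counter(s.stance for s in studies)
    decided = counts["support"] + counts["refute"]
    return counts["support"] / decided if decided else None

def consensus_over_time(studies):
    """Cumulative support ratio after each publication year,
    i.e. how the research consensus evolved through time."""
    timeline, seen = {}, []
    for s in sorted(studies, key=lambda s: s.year):
        seen.append(s)
        timeline[s.year] = support_ratio(seen)
    return timeline

# Hypothetical stance labels for four retrieved studies
studies = [
    Study("100001", 2018, "support"),
    Study("100002", 2019, "refute"),
    Study("100003", 2020, "support"),
    Study("100004", 2021, "support"),
]
print(support_ratio(studies))        # 0.75
print(consensus_over_time(studies))  # cumulative ratio per year
```

In a real deployment the stance labels would come from an LLM reading abstracts retrieved via PubMed, and the timeline would feed the consensus visualization the paper describes; this sketch only shows the aggregation step.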
Related papers
- Introducing Answered with Evidence -- a framework for evaluating whether LLM responses to biomedical questions are founded in evidence [1.3250161978024673]
Large language models (LLMs) for biomedical question answering raise concerns about the accuracy and evidentiary support of their responses. We analyzed thousands of physician-submitted questions using a comparative pipeline that included: (1) Alexandria, formerly the Atropos Evidence Library, a retrieval-augmented generation (RAG) system based on novel observational studies, and (2) two PubMed-based retrieval-augmented systems (System and Perplexity). We found that PubMed-based systems provided evidence-supported answers for approximately 44% of questions, while the novel evidence source did so for about 50%.
arXiv Detail & Related papers (2025-06-30T18:00:52Z) - Decide less, communicate more: On the construct validity of end-to-end fact-checking in medicine [59.604255567812714]
We show how experts verify real claims from social media by synthesizing medical evidence. We identify difficulties in connecting claims in the wild to scientific evidence in the form of clinical trials. We argue that fact-checking should be approached and evaluated as an interactive communication problem.
arXiv Detail & Related papers (2025-06-25T22:58:08Z) - Retrieval-augmented systems can be dangerous medical communicators [21.371504193281226]
Patients have long sought health information online, and increasingly, they are turning to generative AI to answer their health-related queries. Retrieval-augmented generation and citation grounding have been widely promoted as methods to reduce hallucinations and improve the accuracy of AI-generated responses. This paper argues that even when these methods produce literally accurate content drawn from source documents sans hallucinations, they can still be highly misleading.
arXiv Detail & Related papers (2025-02-18T01:57:02Z) - Identifying and Aligning Medical Claims Made on Social Media with Medical Evidence [0.12277343096128711]
We study three core tasks: identifying medical claims, extracting medical vocabulary from these claims, and retrieving evidence relevant to those identified medical claims.
We propose a novel system that can generate synthetic medical claims to aid each of these core tasks.
arXiv Detail & Related papers (2024-05-18T07:50:43Z) - Developing ChatGPT for Biology and Medicine: A Complete Review of Biomedical Question Answering [25.569980942498347]
ChatGPT offers a strategic blueprint for question answering (QA) in delivering medical diagnoses, treatment recommendations, and other healthcare support.
This is achieved through the increasing incorporation of medical domain data via natural language processing (NLP) and multimodal paradigms.
arXiv Detail & Related papers (2024-01-15T07:21:16Z) - De-identification of clinical free text using natural language processing: A systematic review of current approaches [48.343430343213896]
Natural language processing has repeatedly demonstrated its feasibility in automating the de-identification process.
Our study aims to provide systematic evidence on how the de-identification of clinical free text has evolved in the last thirteen years.
arXiv Detail & Related papers (2023-11-28T13:20:41Z) - A Review on Knowledge Graphs for Healthcare: Resources, Applications, and Promises [59.4999994297993]
This comprehensive review aims to provide an overview of the current state of Healthcare Knowledge Graphs (HKGs). We thoroughly analyzed existing literature on HKGs, covering their construction methodologies, utilization techniques, and applications. The review highlights the potential of HKGs to significantly impact biomedical research and clinical practice.
arXiv Detail & Related papers (2023-06-07T21:51:56Z) - Reasoning with Language Model Prompting: A Survey [86.96133788869092]
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications.
This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting.
arXiv Detail & Related papers (2022-12-19T16:32:42Z) - Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision [53.692793122749414]
We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, medical, user-written question using a supervised summarization loss. It then matches the summarized question with an FAQ from a trusted medical knowledge base and retrieves a fixed number of relevant sentences from the corresponding answer document.
arXiv Detail & Related papers (2022-09-30T08:20:32Z) - Medical Visual Question Answering: A Survey [55.53205317089564]
Medical Visual Question Answering (VQA) combines medical artificial intelligence with the popular VQA challenge.
Given a medical image and a clinically relevant question in natural language, the medical VQA system is expected to predict a plausible and convincing answer.
arXiv Detail & Related papers (2021-11-19T05:55:15Z) - Medical Information Retrieval and Interpretation: A Question-Answer based Interaction Model [7.990816079551592]
The Internet has become a powerful platform where diverse medical information is shared daily. Current search engines and recommendation systems still lack the real-time interaction needed to produce more precise results. This paper proposes an intelligent, interactive system tied to the vast medical big data repositories on the web.
arXiv Detail & Related papers (2021-01-24T07:01:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.