Local Explanations for Clinical Search Engine results
- URL: http://arxiv.org/abs/2110.12891v1
- Date: Tue, 19 Oct 2021 18:48:28 GMT
- Title: Local Explanations for Clinical Search Engine results
- Authors: Edeline Contempré, Zoltán Szlávik, Majid Mohammadi, Erick
Velazquez, Annette ten Teije, Ilaria Tiddi
- Abstract summary: The engine generates features from clinical trials by using a knowledge graph, clinical trial data and additional medical resources.
We compute an explainability score for each of the retrieved items, according to which the items can be ranked.
Experiments validated by medical professionals suggest that the proposed methodology induces trust in targeted as well as in non-targeted users.
- Score: 6.31241529629348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Health care professionals rely on treatment search engines to efficiently
find adequate clinical trials and early access programs for their patients.
However, doctors lose trust in the system if its underlying processes are
unclear and unexplained. In this paper, a model-agnostic explainable method is
developed to provide users with further information regarding the reasons why a
clinical trial is retrieved in response to a query. To accomplish this, the
engine generates features from clinical trials by using a knowledge graph,
clinical trial data and additional medical resources, and a crowd-sourcing
methodology is used to determine their importance. Grounded on the proposed
methodology, the rationale behind retrieving the clinical trials is explained
in layman's terms so that healthcare professionals can effortlessly understand
it. In addition, we compute an explainability score for
each of the retrieved items, according to which the items can be ranked. The
experiments validated by medical professionals suggest that the proposed
methodology induces trust in targeted as well as in non-targeted users, and
provides them with reliable explanations and a ranking of retrieved items.
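The abstract's scoring idea can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes each retrieved trial exposes a set of matched features (derived from the knowledge graph and trial data), that crowd-sourcing yields one importance weight per feature, and that the explainability score is the normalized sum of matched-feature weights. All feature names, weights, and trial identifiers below are hypothetical.

```python
def explainability_score(matched_features, importance):
    """Sum the crowd-sourced importance of the features a trial matched,
    normalized by the total importance mass, yielding a score in [0, 1].
    (Illustrative aggregation; the paper does not specify this formula.)"""
    total = sum(importance.values())
    return sum(importance[f] for f in matched_features) / total

# Hypothetical crowd-sourced importance weights per feature.
importance = {
    "condition_match": 0.5,
    "eligible_age": 0.3,
    "same_country": 0.2,
}

# Hypothetical retrieved trials and the features each one matched.
trials = {
    "TRIAL-A": {"condition_match", "eligible_age"},
    "TRIAL-B": {"same_country"},
}

# Rank retrieved items by their explainability score, as the abstract describes.
ranked = sorted(
    trials,
    key=lambda t: explainability_score(trials[t], importance),
    reverse=True,
)
print(ranked)  # TRIAL-A (score 0.8) ranks above TRIAL-B (score 0.2)
```

Under this sketch, the per-feature weights double as the layman-readable explanation: each matched feature can be surfaced to the user alongside its contribution to the score.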
Related papers
- Utilizing ChatGPT to Enhance Clinical Trial Enrollment [2.3551878971309947]
We propose an automated approach that leverages ChatGPT, a large language model, to extract patient-related information from unstructured clinical notes.
Our empirical evaluation, conducted on two benchmark retrieval collections, shows improved retrieval performance compared to existing approaches.
These findings highlight the potential use of ChatGPT to enhance clinical trial enrollment while ensuring the quality of medical service and minimizing direct risks to patients.
arXiv Detail & Related papers (2023-06-03T10:54:23Z) - Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
They still struggle with issues regarding accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
arXiv Detail & Related papers (2023-05-30T22:05:11Z) - Applying unsupervised keyphrase methods on concepts extracted from
discharge sheets [7.102620843620572]
It is necessary to identify the section in which each content is recorded and also to identify key concepts to extract meaning from clinical texts.
In this study, these challenges have been addressed by using clinical natural language processing techniques.
A set of popular unsupervised key phrase extraction methods has been verified and evaluated.
arXiv Detail & Related papers (2023-03-15T20:55:25Z) - Informing clinical assessment by contextualizing post-hoc explanations
of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z) - This Patient Looks Like That Patient: Prototypical Networks for
Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice such models must not only be accurate, but provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z) - Clinical trial site matching with improved diversity using fair policy
learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z) - Explainable Deep Learning in Healthcare: A Methodological Survey from an
Attribution View [36.025217954247125]
We introduce the methods for interpretability in depth and comprehensively as a methodological reference for future researchers or clinical practitioners.
We discuss how these methods have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies.
arXiv Detail & Related papers (2021-12-05T17:12:53Z) - Semi-Supervised Variational Reasoning for Medical Dialogue Generation [70.838542865384]
Two key characteristics are relevant for medical dialogue generation: patient states and physician actions.
We propose an end-to-end variational reasoning approach to medical dialogue generation.
A physician policy network composed of an action-classifier and two reasoning detectors is proposed for augmented reasoning ability.
arXiv Detail & Related papers (2021-05-13T04:14:35Z) - Clinical Outcome Prediction from Admission Notes using Self-Supervised
Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z) - AI Driven Knowledge Extraction from Clinical Practice Guidelines:
Turning Research into Practice [2.803896166632835]
Clinical Practice Guidelines (CPGs) represent the foremost methodology for sharing state-of-the-art research findings in the healthcare domain with medical practitioners.
However, extracting relevant knowledge from the plethora of CPGs is not feasible for already burdened healthcare professionals.
This research presents a novel methodology for knowledge extraction from CPGs to reduce the gap and turn the latest research findings into clinical practice.
arXiv Detail & Related papers (2020-12-10T07:23:02Z) - Understanding Clinical Trial Reports: Extracting Medical Entities and
Their Relations [33.30381080306156]
Medical experts must manually extract information from articles to inform decision-making.
We consider the end-to-end task of both (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former with respect to the latter.
arXiv Detail & Related papers (2020-10-07T17:50:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.