KNSE: A Knowledge-aware Natural Language Inference Framework for
Dialogue Symptom Status Recognition
- URL: http://arxiv.org/abs/2305.16833v1
- Date: Fri, 26 May 2023 11:23:26 GMT
- Title: KNSE: A Knowledge-aware Natural Language Inference Framework for
Dialogue Symptom Status Recognition
- Authors: Wei Chen, Shiqi Wei, Zhongyu Wei, Xuanjing Huang
- Abstract summary: We propose a novel framework called KNSE for symptom status recognition (SSR).
For each symptom mentioned in a dialogue window, we first generate knowledge about the symptom and a hypothesis about its status, forming a (premise, knowledge, hypothesis) triplet.
The BERT model is then used to encode the triplet, which is further processed by modules including utterance aggregation, self-attention, cross-attention, and GRU to predict the symptom status.
- Score: 69.78432481474572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symptom diagnosis in medical conversations aims to correctly extract both
symptom entities and their status from the doctor-patient dialogue. In this
paper, we propose a novel framework called KNSE for symptom status recognition
(SSR), where the SSR is formulated as a natural language inference (NLI) task.
For each symptom mentioned in a dialogue window, we first generate knowledge
about the symptom and a hypothesis about its status, forming a
(premise, knowledge, hypothesis) triplet. A BERT model is then used to encode
the triplet, which is further processed by modules including utterance
aggregation, self-attention, cross-attention, and GRU to predict the symptom
status. Benefiting from the NLI formalization, the proposed framework can
encode more informative prior knowledge to better localize and track symptom
status, which can effectively improve the performance of symptom status
recognition. Preliminary experiments on Chinese medical dialogue datasets show
that KNSE outperforms previous competitive baselines and has advantages in
cross-disease and cross-symptom scenarios.
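Below is a minimal sketch of the NLI-style formulation described above, in PyTorch with Hugging Face Transformers: a (premise, knowledge, hypothesis) triplet is packed into a single BERT input, hypothesis tokens attend back over the premise and knowledge via cross-attention, and a GRU head predicts the symptom status. The label set, module sizes, the `bert-base-chinese` checkpoint, and the segmentation of the three parts are illustrative assumptions, not the authors' implementation (which also includes utterance aggregation and self-attention modules not reproduced here).

```python
# Sketch of an NLI-style (premise, knowledge, hypothesis) status classifier.
# Not the KNSE implementation; hyperparameters and label set are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

STATUS_LABELS = ["positive", "negative", "not_sure"]  # assumed status label set


class TripletStatusClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-chinese", hidden: int = 768):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        # Cross-attention: hypothesis tokens attend over all triplet tokens.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, len(STATUS_LABELS))

    def forward(self, enc, hyp_mask):
        # enc: tokenizer output for "premise + knowledge [SEP] hypothesis"
        # hyp_mask: boolean mask marking hypothesis tokens (assumed bookkeeping)
        token_states = self.encoder(**enc).last_hidden_state          # (B, T, H)
        hyp_states = token_states * hyp_mask.unsqueeze(-1).float()    # keep only hypothesis tokens
        fused, _ = self.cross_attn(
            hyp_states, token_states, token_states,
            key_padding_mask=~enc["attention_mask"].bool(),
        )
        _, last_hidden = self.gru(fused)                               # (1, B, H)
        return self.classifier(last_hidden.squeeze(0))                 # (B, num_labels)


if __name__ == "__main__":
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    premise = "患者：最近总是咳嗽。 医生：有没有发烧？ 患者：没有发烧。"   # dialogue window
    knowledge = "发烧：体温高于正常范围的症状。"                           # generated symptom knowledge
    hypothesis = "患者目前没有发烧。"                                     # hypothesis about the status
    enc = tokenizer(premise + knowledge, hypothesis,
                    return_tensors="pt", truncation=True, padding=True)
    hyp_mask = enc["token_type_ids"].bool()   # second segment = hypothesis tokens
    model = TripletStatusClassifier()
    logits = model(enc, hyp_mask)
    print(STATUS_LABELS[logits.argmax(dim=-1).item()])
```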
Related papers
- NeuroXVocal: Detection and Explanation of Alzheimer's Disease through Non-invasive Analysis of Picture-prompted Speech [4.815952991777717]
NeuroXVocal is a novel dual-component system that classifies and explains potential Alzheimer's Disease (AD) cases through speech analysis.
The classification component (Neuro) processes three distinct data streams: acoustic features capturing speech patterns and voice characteristics, textual features extracted from speech transcriptions, and precomputed embeddings representing linguistic patterns.
The explainability component (XVocal) implements a Retrieval-Augmented Generation (RAG) approach, leveraging Large Language Models combined with a domain-specific knowledge base of AD research literature.
arXiv Detail & Related papers (2025-02-14T12:09:49Z)
- Detecting anxiety and depression in dialogues: a multi-label and explainable approach [5.635300481123079]
Anxiety and depression are the most common mental health issues worldwide, affecting a non-negligible part of the population.
In this work, an entirely novel system for the multi-label classification of anxiety and depression is proposed.
arXiv Detail & Related papers (2024-12-23T15:29:46Z)
- CoAD: Automatic Diagnosis through Symptom and Disease Collaborative Generation [37.25451059168202]
CoAD is a disease and symptom collaborative generation framework.
It incorporates several key innovations to improve automatic disease diagnosis.
It achieves an average 2.3% improvement over previous state-of-the-art results in automatic disease diagnosis.
arXiv Detail & Related papers (2023-07-17T07:24:55Z)
- MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology [40.52487429030841]
We consider enhancing medical visual-language pre-training with domain-specific knowledge, by exploiting the paired image-text reports from the radiological daily practice.
First, unlike existing works that directly process the raw reports, we adopt a novel triplet extraction module to extract the medical-related information.
Second, we propose a novel triplet encoding module with entity translation by querying a knowledge base, to exploit the rich domain knowledge in the medical field.
Third, we propose a Transformer-based fusion model that spatially aligns entity descriptions with visual signals at the image-patch level, enabling medical diagnosis.
arXiv Detail & Related papers (2023-01-05T18:55:09Z)
- NeuralSympCheck: A Symptom Checking and Disease Diagnostic Neural Model with Logic Regularization [59.15047491202254]
Symptom checking systems ask users about their symptoms and perform a rapid and affordable medical assessment of their condition.
We propose a new approach based on the supervised learning of neural models with logic regularization.
Our experiments show that the proposed approach outperforms the best existing methods in the accuracy of diagnosis when the number of diagnoses and symptoms is large.
arXiv Detail & Related papers (2022-06-02T07:57:17Z)
- DxFormer: A Decoupled Automatic Diagnostic System Based on Decoder-Encoder Transformer with Dense Symptom Representations [26.337392652262103]
A diagnosis-oriented dialogue system queries the patient's health condition and makes predictions about possible diseases through continuous interaction with the patient.
We propose a decoupled automatic diagnostic framework DxFormer, which divides the diagnosis process into two steps: symptom inquiry and disease diagnosis.
Our proposed model can effectively learn doctors' clinical experience and achieves state-of-the-art results in terms of symptom recall and diagnostic accuracy.
arXiv Detail & Related papers (2022-05-08T01:52:42Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose CogAlign, an approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
- UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual Embeddings Using the Unified Medical Language System Metathesaurus [73.86656026386038]
We introduce UmlsBERT, a contextual embedding model that integrates domain knowledge during the pre-training process.
By applying these two strategies, UmlsBERT can encode clinical domain knowledge into word embeddings and outperform existing domain-specific models.
arXiv Detail & Related papers (2020-10-20T15:56:31Z)
- Hierarchical Reinforcement Learning for Automatic Disease Diagnosis [52.111516253474285]
We propose to integrate a two-level hierarchical policy structure into the dialogue system for policy learning.
The proposed policy structure can handle diagnosis problems involving a large number of diseases and symptoms.
arXiv Detail & Related papers (2020-04-29T15:02:41Z)