Reformulating NLP tasks to Capture Longitudinal Manifestation of
Language Disorders in People with Dementia
- URL: http://arxiv.org/abs/2310.09897v1
- Date: Sun, 15 Oct 2023 17:58:47 GMT
- Title: Reformulating NLP tasks to Capture Longitudinal Manifestation of
Language Disorders in People with Dementia
- Authors: Dimitris Gkoumas, Matthew Purver, Maria Liakata
- Abstract summary: We learn linguistic disorder patterns by making use of a moderately-sized pre-trained language model.
We then use the probability estimates from the best model to construct digital linguistic markers.
Our proposed linguistic disorder markers provide useful insights into gradual language impairment associated with disease progression.
- Score: 18.964022118823532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dementia is associated with language disorders which impede communication.
Here, we automatically learn linguistic disorder patterns by making use of a
moderately-sized pre-trained language model and forcing it to focus on
reformulated natural language processing (NLP) tasks and associated linguistic
patterns. Our experiments show that NLP tasks that encapsulate contextual
information and enhance the gradient signal with linguistic patterns benefit
performance. We then use the probability estimates from the best model to
construct digital linguistic markers measuring the overall quality in
communication and the intensity of a variety of language disorders. We
investigate how the digital markers characterize dementia speech from a
longitudinal perspective. We find that our proposed communication marker is
able to robustly and reliably characterize the language of people with
dementia, outperforming existing linguistic approaches, and that it shows
external validity via a significant correlation with clinical markers of behaviour.
Finally, our proposed linguistic disorder markers provide useful insights into
gradual language impairment associated with disease progression.
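To make the core idea concrete, the sketch below shows one way probability estimates from a moderately-sized pre-trained causal language model could be aggregated into a crude communication-quality score for a transcript. It is a minimal illustration assuming the Hugging Face transformers library and the distilgpt2 checkpoint; the paper's reformulated NLP tasks and digital markers are more involved than this single aggregate.

```python
# Minimal sketch: turn per-token probability estimates from a small pre-trained
# causal LM into a crude "communication quality" score for a transcript.
# Assumes the Hugging Face transformers/torch stack and the distilgpt2 checkpoint;
# this is an illustration, not the paper's reformulated tasks or markers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

def communication_score(transcript: str) -> float:
    """Mean per-token log-probability; lower values suggest less predictable,
    potentially more disordered, language under the LM."""
    inputs = tokenizer(transcript, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids the model returns the mean cross-entropy
        # (negative log-likelihood per token) over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()  # closer to 0 = more fluent under the LM

print(communication_score("The boy is reaching for the cookie jar."))
print(communication_score("Boy the jar cookie reach is for the."))
```

In this toy version, less predictable transcripts simply receive lower scores; the paper's markers additionally target the intensity of specific language disorders rather than one aggregate number.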
Related papers
- Profiling Patient Transcript Using Large Language Model Reasoning Augmentation for Alzheimer's Disease Detection [4.961581278723015]
Alzheimer's disease (AD) stands as the predominant cause of dementia, characterized by a gradual decline in speech and language capabilities.
Recent deep-learning advancements have facilitated automated AD detection through spontaneous speech.
Common transcript-based detection methods directly model text patterns in each utterance without a global view of the patient's linguistic characteristics.
arXiv Detail & Related papers (2024-09-19T07:58:07Z)
- Comparing Hallucination Detection Metrics for Multilingual Generation [62.97224994631494]
This paper assesses how well various factual hallucination detection metrics identify hallucinations in generated biographical summaries across languages.
We compare how well automatic metrics correlate to each other and whether they agree with human judgments of factuality.
Our analysis reveals that while the lexical metrics are ineffective, NLI-based metrics perform well, correlating with human annotations in many settings and often outperforming supervised models.
arXiv Detail & Related papers (2024-02-16T08:10:34Z)
- Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- Quantifying the Dialect Gap and its Correlates Across Languages [69.18461982439031]
This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
arXiv Detail & Related papers (2023-10-23T17:42:01Z)
- A Digital Language Coherence Marker for Monitoring Dementia [14.580879594539859]
We propose methods to capture language coherence as a cost-effective, human-interpretable digital marker.
We compare language coherence patterns between people with dementia and healthy controls.
The coherence marker shows a significant difference between people with mild cognitive impairment, those with Alzheimer's Disease and healthy controls.
arXiv Detail & Related papers (2023-10-14T17:10:19Z)
- Assessing Language Disorders using Artificial Intelligence: a Paradigm Shift [0.13393465195776774]
Speech, language, and communication deficits are present in most neurodegenerative syndromes.
We argue that using machine learning methodologies, natural language processing, and modern artificial intelligence (AI) for Language Assessment is an improvement over conventional manual assessment.
arXiv Detail & Related papers (2023-05-31T17:20:45Z)
- Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z)
- Semantic Coherence Markers for the Early Diagnosis of the Alzheimer Disease [0.0]
Perplexity was originally conceived as an information-theoretic measure to assess how well a given language model can predict a text sequence (a toy perplexity computation appears after this list).
We employed language models as diverse as N-grams, from 2-grams to 5-grams, and GPT-2, a transformer-based language model.
Best performing models achieved full accuracy and F-score (1.00 in both precision/specificity and recall/sensitivity) in categorizing subjects from both the AD class and control subjects.
arXiv Detail & Related papers (2023-02-02T11:40:16Z)
- GPT-D: Inducing Dementia-related Linguistic Anomalies by Deliberate Degradation of Artificial Neural Language Models [7.8430387435520625]
We propose a novel method by which a Transformer DL model (GPT-2) pre-trained on general English text is paired with an artificially degraded version of itself (GPT-D).
This technique approaches state-of-the-art performance on text data from a widely used "Cookie Theft" picture description task, and unlike established alternatives also generalizes well to spontaneous conversations.
Our study is a step toward better understanding of the relationships between the inner workings of generative neural language models, the language that they produce, and the deleterious effects of dementia on human speech and language characteristics.
arXiv Detail & Related papers (2022-03-25T00:25:42Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- On the Importance of Word Order Information in Cross-lingual Sequence Labeling [80.65425412067464]
Cross-lingual models that fit into the word order of the source language might fail to handle target languages.
We investigate whether making models insensitive to the word order of the source language can improve the adaptation performance in target languages.
arXiv Detail & Related papers (2020-01-30T03:35:44Z)
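For readers unfamiliar with the perplexity measure referenced in the Semantic Coherence Markers entry above: perplexity is the exponential of the average negative log-likelihood a language model assigns to a token sequence. The toy example below estimates it with an add-one-smoothed bigram model; the corpus, tokenization, and smoothing here are illustrative assumptions, whereas that paper evaluates 2- to 5-gram models and GPT-2 trained on far larger data.

```python
# Minimal sketch of perplexity from a bigram model with add-one smoothing,
# the kind of N-gram baseline compared against GPT-2 in the paper above.
# Training sentences and tokenization are illustrative only.
import math
from collections import Counter

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def perplexity(sentence, unigrams, bigrams):
    vocab = len(unigrams)
    tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        # Add-one (Laplace) smoothed conditional probability P(cur | prev).
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        log_prob += math.log(p)
    n = len(tokens) - 1
    # Perplexity = exp(mean negative log-likelihood per token).
    return math.exp(-log_prob / n)

unigrams, bigrams = train_bigram([
    "the boy reaches for the cookie jar",
    "the girl asks for a cookie",
])
print(perplexity("the boy asks for a cookie", unigrams, bigrams))
print(perplexity("cookie the for boy a asks", unigrams, bigrams))
```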