Bigger But Not Better: Small Neural Language Models Outperform Large Language Models in Detection of Thought Disorder
- URL: http://arxiv.org/abs/2503.20103v1
- Date: Tue, 25 Mar 2025 22:55:58 GMT
- Title: Bigger But Not Better: Small Neural Language Models Outperform Large Language Models in Detection of Thought Disorder
- Authors: Changye Li, Weizhe Xu, Serguei Pakhomov, Ellen Bradley, Dror Ben-Zeev, Trevor Cohen
- Abstract summary: We investigate whether smaller neural language models can serve as effective alternatives for detecting positive formal thought disorder. Surprisingly, our results show that smaller models are more sensitive to linguistic differences associated with formal thought disorder than their larger counterparts.
- Score: 7.585589727435719
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Disorganized thinking is a key diagnostic indicator of schizophrenia-spectrum disorders. Recently, clinical estimates of the severity of disorganized thinking have been shown to correlate with measures of how difficult speech transcripts would be for large language models (LLMs) to predict. However, LLMs' deployment challenges -- including privacy concerns, computational and financial costs, and lack of transparency of training data -- limit their clinical utility. We investigate whether smaller neural language models can serve as effective alternatives for detecting positive formal thought disorder, using the same sliding-window-based perplexity measurements that proved effective with larger models. Surprisingly, our results show that smaller models are more sensitive to linguistic differences associated with formal thought disorder than their larger counterparts. Detection capability declines beyond a certain model size and context length, challenging the common assumption of "bigger is better" for LLM-based applications. Our findings generalize across audio diaries and clinical interview speech samples from individuals with psychotic symptoms, suggesting a promising direction for developing efficient, cost-effective, and privacy-preserving screening tools that can be deployed in both clinical and naturalistic settings.
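To make the measurement concrete, the sketch below computes sliding-window perplexity over a transcript with a small neural language model via Hugging Face transformers. The model choice (`gpt2`), window size, and stride are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of sliding-window perplexity with a small neural LM.
# gpt2 and the window/stride values are illustrative assumptions; the
# paper's exact models and settings may differ.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sliding_window_perplexity(text: str, window: int = 64, stride: int = 32) -> float:
    """Average perplexity across fixed-size token windows of a transcript."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nlls = []
    for start in range(0, len(ids), stride):
        chunk = ids[start:start + window].unsqueeze(0)
        if chunk.size(1) < 2:  # need at least one predicted token
            break
        with torch.no_grad():
            out = model(chunk, labels=chunk)  # loss = mean token NLL
        nlls.append(out.loss)
    return torch.exp(torch.stack(nlls).mean()).item()

print(sliding_window_perplexity("I keep losing the thread of what I was saying."))
```

Higher perplexity means the speech was harder for the model to predict, which the paper links to clinician-rated severity of disorganized thinking.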
Related papers
- Medical large language models are easily distracted [0.8211696054238238]
Large language models (LLMs) have the potential to transform medicine, but real-world clinical scenarios contain extraneous information that can hinder performance.
We developed MedDistractQA, a benchmark using USMLE-style questions embedded with simulated real-world distractions.
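As a toy illustration of the construction (the texts below are invented, not drawn from the benchmark), a clinically irrelevant statement is spliced into a USMLE-style stem:

```python
# Toy illustration of embedding a simulated distraction into a
# USMLE-style question; the texts are invented, not MedDistractQA items.
stem = ("A 45-year-old man presents with crushing substernal chest pain "
        "radiating to the left arm. {distraction} "
        "What is the most appropriate next step in management?")
distraction = "His daughter recently adopted a golden retriever."
print(stem.format(distraction=distraction))
```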
arXiv Detail & Related papers (2025-04-01T21:34:01Z) - DECT: Harnessing LLM-assisted Fine-Grained Linguistic Knowledge and Label-Switched and Label-Preserved Data Generation for Diagnosis of Alzheimer's Disease [13.38075448636078]
Alzheimer's Disease (AD) is an irreversible neurodegenerative disease affecting 50 million people worldwide. Language impairment is one of the earliest signs of cognitive decline, which can be used to discriminate AD patients from normal control individuals. Patient-interviewer dialogues may be used to detect such impairments, but they are often mixed with ambiguous, noisy, and irrelevant information.
arXiv Detail & Related papers (2025-02-06T04:00:25Z) - LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment [75.44934940580112]
This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment. We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews. Our approach, tested on 236 real-world interviews, demonstrates strong correlations with clinician assessments.
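A minimal sketch of the zero-shot pattern appears below; the cue wording, the MADRS item chosen, and the tiny stand-in model are my assumptions rather than LlaMADRS's actual prompts or model.

```python
# Hedged sketch of zero-shot prompting for one MADRS item; the cue text
# and the stand-in gpt2 model are assumptions, not LlaMADRS's own design.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PROMPT = (
    "Rate the MADRS item 'Apparent Sadness' from 0 (no sadness) to "
    "6 (looks miserable all the time) based on this interview excerpt.\n"
    "Excerpt: {excerpt}\n"
    "Rating (0-6):"
)
excerpt = "I just can't find joy in anything lately, even my hobbies."
print(generator(PROMPT.format(excerpt=excerpt), max_new_tokens=5)[0]["generated_text"])
```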
arXiv Detail & Related papers (2025-01-07T08:49:04Z) - Devising a Set of Compact and Explainable Spoken Language Feature for Screening Alzheimer's Disease [52.46922921214341]
Alzheimer's disease (AD) has become one of the most significant health challenges in an aging society. We devised an explainable and effective feature set that leverages the visual capabilities of a large language model (LLM) and the Term Frequency-Inverse Document Frequency (TF-IDF) model. Our new features can be explained and interpreted step by step, which enhances the interpretability of automatic AD screening.
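The TF-IDF half of such a feature set is standard; a minimal scikit-learn sketch follows, where the toy transcripts and feature cap are illustrative rather than the paper's data or settings.

```python
# Sketch of the TF-IDF component of the feature set; the toy transcripts
# and feature cap are illustrative, not the paper's data or settings.
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "um the boy is uh taking the cookie from the cookie jar",
    "the boy reaches for the cookie jar while the sink overflows",
]
vectorizer = TfidfVectorizer(max_features=500, ngram_range=(1, 2))
X = vectorizer.fit_transform(transcripts)  # one sparse row per transcript
print(X.shape)
print(vectorizer.get_feature_names_out()[:5])
```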
arXiv Detail & Related papers (2024-11-28T05:23:22Z) - AutoElicit: Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
We introduce AutoElicit to extract knowledge from large language models and construct priors for predictive models. We show these priors are informative and can be refined using natural language. We find that AutoElicit yields priors that can substantially reduce error over uninformative priors, using fewer labels, and consistently outperform in-context learning.
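One way to picture the idea (the elicited numbers and the Normal-summary step below are my assumptions, not AutoElicit's actual procedure): repeated LLM guesses about a coefficient are summarized into a prior for a linear model.

```python
# Hedged sketch: summarize repeated LLM-elicited coefficient guesses as a
# Normal prior for a linear model. The numbers are invented, and
# AutoElicit's real elicitation/aggregation procedure may differ.
import numpy as np

# Hypothetical slopes the LLM returns when asked several times about the
# effect of feature "age" on the outcome:
elicited_slopes = np.array([0.8, 1.1, 0.9, 1.0])

prior_mean = elicited_slopes.mean()
prior_std = elicited_slopes.std(ddof=1)
print(f"age coefficient ~ Normal({prior_mean:.2f}, {prior_std:.2f}^2)")
```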
arXiv Detail & Related papers (2024-11-26T10:13:39Z) - Generative causal testing to bridge data-driven models and scientific theories in language neuroscience [82.995061475971]
We present generative causal testing (GCT), a framework for generating concise explanations of language selectivity in the brain. We show that GCT can dissect fine-grained differences between brain areas with similar functional selectivity.
arXiv Detail & Related papers (2024-10-01T15:57:48Z) - SemioLLM: Assessing Large Language Models for Semiological Analysis in Epilepsy Research [45.2233252981348]
Large Language Models have shown promising results in their ability to encode general medical knowledge.
We test the ability of state-of-the-art LLMs to leverage their internal knowledge and reasoning for epilepsy diagnosis.
arXiv Detail & Related papers (2024-07-03T11:02:12Z) - Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z) - Fine-tuning Language Models for Factuality [96.5203774943198]
Large pre-trained language models (LLMs) have seen widespread use, sometimes even as a replacement for traditional search engines.
Yet language models are prone to making convincing but factually inaccurate claims, often referred to as 'hallucinations'.
In this work, we fine-tune language models to be more factual, without human labeling.
arXiv Detail & Related papers (2023-11-14T18:59:15Z) - Reformulating NLP tasks to Capture Longitudinal Manifestation of Language Disorders in People with Dementia [18.964022118823532]
We learn linguistic disorder patterns by making use of a moderately-sized pre-trained language model.
We then use the probability estimates from the best model to construct digital linguistic markers.
Our proposed linguistic disorder markers provide useful insights into gradual language impairment associated with disease progression.
arXiv Detail & Related papers (2023-10-15T17:58:47Z) - Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z) - Almanac: Retrieval-Augmented Language Models for Clinical Medicine [1.5505279143287174]
We develop Almanac, a large language model framework augmented with retrieval capabilities for medical guideline and treatment recommendations.
Performance on a novel dataset of clinical scenarios evaluated by a panel of 5 board-certified and resident physicians demonstrates significant increases in factuality.
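The retrieve-then-prompt pattern can be sketched minimally as below; the toy guideline snippets and the TF-IDF retriever are stand-ins for Almanac's actual medical-guideline corpus and retrieval stack.

```python
# Minimal retrieval-augmented sketch in the spirit of Almanac; the toy
# guideline snippets and TF-IDF retriever are stand-ins for its actual
# corpus and retrieval components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guidelines = [
    "For community-acquired pneumonia in healthy outpatients, first-line therapy is amoxicillin.",
    "For uncomplicated UTI, nitrofurantoin is a recommended first-line agent.",
]
query = "What is first-line treatment for community-acquired pneumonia?"

vec = TfidfVectorizer().fit(guidelines + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(guidelines))
best = guidelines[sims.argmax()]

# The retrieved passage is prepended so the LLM answers from grounded text.
prompt = f"Context: {best}\nQuestion: {query}\nAnswer:"
print(prompt)
```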
arXiv Detail & Related papers (2023-03-01T02:30:11Z) - Semantic Coherence Markers for the Early Diagnosis of the Alzheimer Disease [0.0]
Perplexity was originally conceived as an information-theoretic measure to assess how much a given language model is suited to predict a text sequence.
We employed language models as diverse as N-grams, from 2-grams to 5-grams, and GPT-2, a transformer-based language model.
Best performing models achieved full accuracy and F-score (1.00 in both precision/specificity and recall/sensitivity) in categorizing subjects from both the AD class and control subjects.
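For reference, the standard information-theoretic definition of perplexity invoked here is:

```latex
% Perplexity of a language model p over tokens w_1, ..., w_N:
% the exponentiated average negative log-likelihood per token.
\[
  \mathrm{PPL}(w_{1:N}) = \exp\Bigl(-\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i})\Bigr)
\]
```

A lower value means the model finds the text easier to predict.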
arXiv Detail & Related papers (2023-02-02T11:40:16Z)