Graph Modelling Analysis of Speech-Gesture Interaction for Aphasia Severity Estimation
- URL: http://arxiv.org/abs/2602.20163v1
- Date: Tue, 27 Jan 2026 14:11:36 GMT
- Title: Graph Modelling Analysis of Speech-Gesture Interaction for Aphasia Severity Estimation
- Authors: Navya Martin Kollapally, Christa Akers, Renjith Nelson Joseph,
- Abstract summary: Aphasia is an acquired language disorder caused by injury to the regions of the brain that are responsible for language. Recent advancements in speech analysis focus on automated estimation of aphasia severity from spontaneous speech. In this work, we propose a graph neural network-based framework for estimating aphasia severity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Aphasia is an acquired language disorder caused by injury to the regions of the brain that are responsible for language. Aphasia may impair the use and comprehension of written and spoken language. The Western Aphasia Battery-Revised (WAB-R) is an assessment tool administered by speech-language pathologists (SLPs) to evaluate aphasia type and severity. Because the WAB-R measures isolated linguistic skills, there has been growing interest in the assessment of discourse production as a more holistic representation of everyday language abilities. Recent advancements in speech analysis focus on automated estimation of aphasia severity from spontaneous speech, relying mostly on isolated linguistic or acoustic features. In this work, we propose a graph neural network-based framework for estimating aphasia severity. We represented each participant's discourse as a directed multi-modal graph, where nodes represent lexical items and gestures and edges encode word-word, gesture-word, and word-gesture transitions. GraphSAGE is employed to learn participant-level embeddings, thus integrating information from immediate neighbors and the overall graph structure. Our results suggest that aphasia severity is not encoded in isolated lexical distributions, but rather emerges from structured interactions between speech and gesture. The proposed architecture offers reliable automated aphasia assessment, with possible uses in bedside screening and telehealth-based monitoring.
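The discourse representation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the `(modality, label)` node scheme, and the edge-attribute naming are hypothetical, and the GraphSAGE layer below is a bare mean-aggregation step rather than the trained model.

```python
# Minimal sketch (assumptions, not the paper's code): a directed multi-modal
# discourse graph with word and gesture nodes, plus one GraphSAGE-style
# mean-aggregation step over in-neighbors.
import networkx as nx
import numpy as np

def build_discourse_graph(events):
    """events: ordered (kind, label) pairs, kind in {"word", "gesture"}.

    Consecutive events are linked by a directed edge whose attribute records
    the transition type (word->word, gesture->word, word->gesture).
    """
    g = nx.MultiDiGraph()
    prev = None
    for kind, label in events:
        node = (kind, label)          # repeated lexical items map to one node
        g.add_node(node, modality=kind)
        if prev is not None:
            g.add_edge(prev, node, kind=f"{prev[0]}->{kind}")
        prev = node
    return g

def sage_mean_layer(g, feats, w_self, w_neigh):
    """One GraphSAGE-style step: ReLU(W_self·h_v + W_neigh·mean(h_u, u->v))."""
    out = {}
    for node in g.nodes:
        neigh = [feats[p] for p in g.predecessors(node)]
        agg = np.mean(neigh, axis=0) if neigh else np.zeros_like(feats[node])
        out[node] = np.maximum(0.0, w_self @ feats[node] + w_neigh @ agg)
    return out

sample = [("word", "the"), ("word", "cup"), ("gesture", "point"), ("word", "fell")]
graph = build_discourse_graph(sample)
print(graph.number_of_nodes(), graph.number_of_edges())  # 4 nodes, 3 transition edges
```

A participant-level embedding could then be obtained by pooling the per-node outputs of such layers, which is where the structural speech-gesture interaction, rather than the raw lexical distribution, enters the representation.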
Related papers
- Linguistic Indicators of Early Cognitive Decline in the DementiaBank Pitt Corpus: A Statistical and Machine Learning Study [4.417564179511245]
This study analyzes spontaneous speech transcripts from the DementiaBank Pitt Corpus using three linguistic representations. Syntactic and grammatical features retain strong discriminative power even in the absence of lexical content. This study supports the use of linguistically grounded features for transparent and reliable language-based cognitive screening.
arXiv Detail & Related papers (2026-02-11T16:53:57Z) - Revisiting Modality Invariance in a Multilingual Speech-Text Model via Neuron-Level Analysis [15.638379666159127]
We investigate where language and modality information is encoded, how selective neurons causally influence decoding, and how concentrated this influence is across the network. We identify language- and modality-selective neurons using average-precision ranking, investigate their functional role via median-replacement interventions at inference time, and analyze activation-magnitude inequality across languages and modalities.
arXiv Detail & Related papers (2026-01-24T09:22:18Z) - Towards Inclusive Communication: A Unified Framework for Generating Spoken Language from Sign, Lip, and Audio [52.859261069569165]
We propose the first unified framework capable of handling diverse combinations of sign language, lip movements, and audio for spoken-language text generation. We focus on three main objectives: (i) designing a unified, modality-agnostic architecture capable of effectively processing heterogeneous inputs; (ii) exploring the underexamined synergy among modalities, particularly the role of lip movements as non-manual cues in sign language comprehension; and (iii) achieving performance on par with or better than state-of-the-art models specialized for individual tasks.
arXiv Detail & Related papers (2025-08-28T06:51:42Z) - Mechanistic Understanding and Mitigation of Language Confusion in English-Centric Large Language Models [56.61984030508691]
We present the first mechanistic interpretability study of language confusion. We show that confusion points (CPs) are central to this phenomenon. We show that editing a small set of critical neurons, identified via comparative analysis with a multilingual-tuned counterpart, substantially mitigates confusion.
arXiv Detail & Related papers (2025-05-22T11:29:17Z) - Language-Agnostic Analysis of Speech Depression Detection [2.5764071253486636]
This work analyzes automatic speech-based depression detection across two languages, English and Malayalam.
A CNN model is trained to identify acoustic features associated with depression in speech, focusing on both languages.
Our findings and collected data could contribute to the development of language-agnostic speech-based depression detection systems.
arXiv Detail & Related papers (2024-09-23T07:35:56Z) - Acoustic characterization of speech rhythm: going beyond metrics with recurrent neural networks [0.0]
We train a recurrent neural network on a language identification task over a large database of speech recordings in 21 languages.
The network was able to identify the language of 10-second recordings in 40% of the cases, and the language was in the top-3 guesses in two-thirds of the cases.
arXiv Detail & Related papers (2024-01-22T09:49:44Z) - BrainLLM: Generative Language Decoding from Brain Recordings [77.66707255697706]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder. The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli. Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z) - Reformulating NLP tasks to Capture Longitudinal Manifestation of Language Disorders in People with Dementia [18.964022118823532]
We learn linguistic disorder patterns by making use of a moderately-sized pre-trained language model.
We then use the probability estimates from the best model to construct digital linguistic markers.
Our proposed linguistic disorder markers provide useful insights into gradual language impairment associated with disease progression.
arXiv Detail & Related papers (2023-10-15T17:58:47Z) - Automatically measuring speech fluency in people with aphasia: first achievements using read-speech data [55.84746218227712]
This study aims at assessing the relevance of a signal processing algorithm, initially developed in the field of language acquisition, for the automatic measurement of speech fluency.
arXiv Detail & Related papers (2023-08-09T07:51:40Z) - Decoding speech perception from non-invasive brain recordings [48.46819575538446]
We introduce a model trained with contrastive-learning to decode self-supervised representations of perceived speech from non-invasive recordings.
Our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities.
arXiv Detail & Related papers (2022-08-25T10:01:43Z) - Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.