Specific language impairment (SLI) detection pipeline from transcriptions of spontaneous narratives
- URL: http://arxiv.org/abs/2407.12012v1
- Date: Tue, 25 Jun 2024 19:22:57 GMT
- Title: Specific language impairment (SLI) detection pipeline from transcriptions of spontaneous narratives
- Authors: Santiago Arena, Antonio Quintero-Rincón
- Abstract summary: Specific Language Impairment (SLI) is a disorder that affects communication and can impair both comprehension and expression.
This study focuses on effectively detecting SLI in children using transcripts of spontaneous narratives from 1063 interviews.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Specific Language Impairment (SLI) is a disorder that affects communication and can impair both comprehension and expression. This study focuses on effectively detecting SLI in children using transcripts of spontaneous narratives from 1063 interviews. A three-stage cascading pipeline was proposed. In the first stage, feature extraction and dimensionality reduction of the data are performed using the Random Forest (RF) and Spearman correlation methods. In the second stage, the most predictive variables from the first stage are estimated using logistic regression, which is used in the last stage to detect SLI in children from transcripts of spontaneous narratives using a nearest neighbor classifier. The results revealed an accuracy of 97.13% in identifying SLI, highlighting aspects such as the length of the responses, the quality of their utterances, and the complexity of the language. This new approach, framed in natural language processing, offers significant benefits to the field of SLI detection by avoiding complex subjective variables and focusing on quantitative metrics directly related to the child's performance.
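The three-stage cascade described in the abstract can be sketched as follows. This is a minimal illustration built on scikit-learn with synthetic stand-in data; the combination rule for RF importance and Spearman correlation, the thresholds, and the number of retained variables are all assumptions, not the authors' exact configuration.

```python
# Hedged sketch of the three-stage cascade: (1) Random Forest importance plus
# Spearman correlation for feature reduction, (2) logistic regression to rank
# the most predictive variables, (3) a nearest-neighbor classifier for the
# final SLI / typical-development decision. All thresholds are illustrative.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for transcript-derived metrics (response length,
# utterance quality, syntactic complexity, etc.) from 1063 interviews.
X = rng.normal(size=(1063, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1063) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: keep features that are both important to an RF and monotonically
# related to the label via Spearman correlation (assumed combination rule).
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
rho = np.array([abs(spearmanr(X_train[:, j], y_train)[0]) for j in range(X.shape[1])])
keep = (rf.feature_importances_ > np.median(rf.feature_importances_)) & (rho > 0.1)

# Stage 2: logistic regression ranks the surviving variables by |coefficient|.
lr = LogisticRegression(max_iter=1000).fit(X_train[:, keep], y_train)
order = np.argsort(-np.abs(lr.coef_[0]))
top = np.flatnonzero(keep)[order][:5]  # up to five most predictive variables

# Stage 3: a nearest-neighbor classifier on the top variables makes the call.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[:, top], y_train)
accuracy = knn.score(X_test[:, top], y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The cascade structure matters here: each stage only sees the variables the previous stage let through, so the final kNN operates in a low-dimensional space of quantitative, interpretable metrics.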
Related papers
- Multidimensional Analysis of Specific Language Impairment Using Unsupervised Learning Through PCA and Clustering [0.0]
Specific Language Impairment (SLI) affects approximately 7 percent of children. Traditional diagnostic approaches often rely on standardized assessments, which may overlook subtle developmental patterns. This study aims to identify natural language development trajectories in children with and without SLI using unsupervised machine learning techniques.
arXiv Detail & Related papers (2025-06-05T18:29:12Z) - The Empirical Impact of Data Sanitization on Language Models [1.1359551336076306]
This paper empirically analyzes the effects of data sanitization across several benchmark language-modeling tasks.
Our results suggest that for some tasks such as sentiment analysis or entailment, the impact of redaction is quite low, typically around 1-5%.
For tasks such as comprehension Q&A there is a big drop of >25% in performance observed in redacted queries as compared to the original.
arXiv Detail & Related papers (2024-11-08T21:22:37Z) - MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection [3.049887057143419]
In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges.
This often leads to neural networks exhibiting "hallucinations".
The SHROOM challenge focuses on automatically identifying these hallucinations in the generated text.
arXiv Detail & Related papers (2024-03-01T20:31:10Z) - Syntactic Language Change in English and German: Metrics, Parsers, and Convergences [56.47832275431858]
The current paper looks at diachronic trends in syntactic language change in both English and German, using corpora of parliamentary debates from the last c. 160 years.
We base our observations on five dependency parsers, including the widely used Stanford Core as well as four newer alternatives.
We show that changes in syntactic measures seem to be more frequent at the tails of sentence length distributions.
arXiv Detail & Related papers (2024-02-18T11:46:16Z) - Comparing Hallucination Detection Metrics for Multilingual Generation [62.97224994631494]
This paper assesses how well various factual hallucination detection metrics identify hallucinations in generated biographical summaries across languages.
We compare how well automatic metrics correlate to each other and whether they agree with human judgments of factuality.
Our analysis reveals that while the lexical metrics are ineffective, NLI-based metrics perform well, correlating with human annotations in many settings and often outperforming supervised models.
arXiv Detail & Related papers (2024-02-16T08:10:34Z) - Prosody in Cascade and Direct Speech-to-Text Translation: a case study on Korean Wh-Phrases [79.07111754406841]
This work proposes using contrastive evaluation to measure the ability of direct S2TT systems to disambiguate utterances where prosody plays a crucial role.
Our results clearly demonstrate the value of direct translation systems over cascade translation models.
arXiv Detail & Related papers (2024-02-01T14:46:35Z) - Towards Lifelong Learning of Multilingual Text-To-Speech Synthesis [87.75833205560406]
This work presents a lifelong learning approach to train a multilingual Text-To-Speech (TTS) system.
It does not require pooled data from all languages altogether, and thus alleviates the storage and computation burden.
arXiv Detail & Related papers (2021-10-09T07:00:38Z) - Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation [71.70562795158625]
Traditional NLP has long held (supervised) syntactic parsing to be necessary for successful higher-level semantic language understanding (LU).
The recent advent of end-to-end neural models, self-supervised via language modeling (LM), and their success on a wide range of LU tasks call this belief into question.
We empirically investigate the usefulness of supervised parsing for semantic LU in the context of LM-pretrained transformer networks.
arXiv Detail & Related papers (2020-08-15T21:03:36Z) - Towards Relevance and Sequence Modeling in Language Recognition [39.547398348702025]
We propose a neural network framework utilizing short-sequence information in language recognition.
A new model is proposed for incorporating relevance in language recognition, where parts of speech data are weighted more based on their relevance for the language recognition task.
Experiments are performed using the language recognition task in NIST LRE 2017 Challenge using clean, noisy and multi-speaker speech data.
arXiv Detail & Related papers (2020-04-02T18:31:18Z) - Identification of primary and collateral tracks in stuttered speech [22.921077940732]
We introduce a new evaluation framework for disfluency detection inspired by the clinical and NLP perspective.
We present a novel forced-aligned disfluency dataset from a corpus of semi-directed interviews.
We show experimentally that word-based span features outperform the baselines for speech-based predictions.
arXiv Detail & Related papers (2020-03-02T16:50:33Z) - The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures [83.53361353172261]
We present a large-scale study focused on the correlations between monolingual embedding space similarity and task performance.
We introduce several isomorphism measures between two embedding spaces, based on the relevant statistics of their individual spectra.
We empirically show that language similarity scores derived from such spectral isomorphism measures are strongly associated with performance observed in different cross-lingual tasks.
arXiv Detail & Related papers (2020-01-30T00:09:53Z)
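The spectral-similarity idea from the last related paper can be illustrated with a toy sketch: compare the singular-value spectra of two embedding matrices, on the premise that near-isomorphic spaces have near-identical spectra. The specific distance below (L2 between scale-normalized top-k spectra) is an illustrative choice, not necessarily one of the measures the paper introduces.

```python
# Toy spectral isomorphism check: a rotation of an embedding space preserves
# its singular-value spectrum exactly, while a structurally different space
# yields a visibly different spectrum. The distance function is illustrative.
import numpy as np

def spectral_distance(E1, E2, k=50):
    """L2 distance between the scale-normalized top-k singular-value spectra."""
    s1 = np.linalg.svd(E1, compute_uv=False)[:k]
    s2 = np.linalg.svd(E2, compute_uv=False)[:k]
    s1, s2 = s1 / s1.sum(), s2 / s2.sum()  # remove overall scale
    m = min(len(s1), len(s2))
    return float(np.linalg.norm(s1[:m] - s2[:m]))

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 100))              # stand-in "source" embedding space
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))
iso = src @ Q                                   # isomorphic copy: rotation keeps the spectrum
far = rng.normal(size=(1000, 100)) ** 3         # heavy-tailed, structurally different space

d_iso = spectral_distance(src, iso)
d_far = spectral_distance(src, far)
print(f"isomorphic: {d_iso:.2e}  different: {d_far:.2e}")
```

As expected under this construction, the rotated copy scores near zero while the heavy-tailed space does not, which is the intuition behind using spectra as a cheap proxy for cross-lingual transferability.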
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.