RBA-FE: A Robust Brain-Inspired Audio Feature Extractor for Depression Diagnosis
 - URL: http://arxiv.org/abs/2506.07118v1
 - Date: Sun, 08 Jun 2025 13:00:45 GMT
 - Title: RBA-FE: A Robust Brain-Inspired Audio Feature Extractor for Depression Diagnosis
 - Authors: Yu-Xuan Wu, Ziyan Huang, Bin Hu, Zhi-Hong Guan
 - Abstract summary: This article proposes a robust brain-inspired audio feature extractor (RBA-FE) model for depression diagnosis, using an improved hierarchical network architecture. To address the noise challenge, RBA-FE leverages six acoustic features extracted from the raw audio, capturing both spatial characteristics and temporal dependencies. To deal with noise issues, the model incorporates an improved spiking neuron model, called adaptive rate smooth leaky integrate-and-fire (ARSLIF).
 - Score: 6.6826445546254964
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: This article proposes a robust brain-inspired audio feature extractor (RBA-FE) model for depression diagnosis, using an improved hierarchical network architecture. Most deep learning models achieve state-of-the-art performance for image-based diagnostic tasks while ignoring the counterpart audio features. To address the noise challenge, RBA-FE leverages six acoustic features extracted from the raw audio, capturing both spatial characteristics and temporal dependencies. This hybrid attribute helps alleviate the precision limitation in audio feature extraction within other learning models such as deep residual shrinkage networks. To deal with noise issues, our model incorporates an improved spiking neuron model, called adaptive rate smooth leaky integrate-and-fire (ARSLIF). The ARSLIF model emulates the mechanism of "retuning of cellular signal selectivity" in the brain's attention systems, which enhances the model's robustness against environmental noise in audio data. Experimental results demonstrate that RBA-FE achieves state-of-the-art performance on the MODMA dataset, with 0.8750 precision, 0.8974 accuracy, 0.8750 recall, and 0.8750 F1 score. Extensive experiments on the AVEC2014 and DAIC-WOZ datasets both show enhancements in noise robustness. Comparative analysis further indicates that the ARSLIF neuron model reveals abnormal firing patterns during feature extraction on depressive audio data, offering brain-inspired interpretability.
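The ARSLIF neuron described above extends the classical leaky integrate-and-fire (LIF) model. The abstract does not give the ARSLIF update rule, so below is only a minimal sketch of a standard LIF neuron for orientation; the parameter names (`tau`, `v_thresh`, etc.) and values are illustrative assumptions, not the paper's configuration, and the adaptive-rate and smoothing terms of ARSLIF are omitted.

```python
import numpy as np

def lif_neuron(inputs, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Minimal leaky integrate-and-fire (LIF) neuron sketch.

    Illustrative only: the paper's ARSLIF variant adds adaptive rate
    and smoothing mechanisms whose exact form is not in the abstract.
    """
    v = v_rest
    spikes = []
    for i_t in inputs:
        # Leaky integration: the membrane potential decays toward rest
        # while being driven by the input current.
        v += (-(v - v_rest) + i_t) * dt / tau
        if v >= v_thresh:
            spikes.append(1)   # threshold crossing emits a spike
            v = v_reset        # hard reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)
```

With a strong constant input the neuron spikes on every step, while a zero input never crosses threshold; a robustness-oriented variant such as ARSLIF would modulate this firing behavior to suppress responses to noisy inputs.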
 
       
      
        Related papers
        - Unified AI for Accurate Audio Anomaly Detection [0.0]
This paper presents a unified AI framework for high-accuracy audio anomaly detection. It integrates advanced noise reduction, feature extraction, and machine learning modeling techniques. The framework is evaluated on benchmark datasets including TORGO and LibriSpeech.
arXiv  Detail & Related papers  (2025-05-20T16:56:08Z)
- $C^2$AV-TSE: Context and Confidence-aware Audio Visual Target Speaker Extraction [80.57232374640911]
We propose a model-agnostic strategy called Mask-And-Recover (MAR). MAR integrates both inter- and intra-modality contextual correlations to enable global inference within extraction modules. To better target challenging parts within each sample, we introduce a Fine-grained Confidence Score (FCS) model.
arXiv  Detail & Related papers  (2025-04-01T13:01:30Z)
- A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy. We present around 55 distinguished features extracted from industrial images, which are then analyzed using statistical methods. By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv  Detail & Related papers  (2024-12-11T22:12:21Z)
- STANet: A Novel Spatio-Temporal Aggregation Network for Depression Classification with Small and Unbalanced FMRI Data [12.344849949026989]
We propose the Spatio-Temporal Aggregation Network (STANet) for diagnosing depression by integrating CNN and RNN to capture both temporal and spatial features. Experiments demonstrate that STANet achieves superior depression diagnostic performance, with 82.38% accuracy and a 90.72% AUC.
arXiv  Detail & Related papers  (2024-07-31T04:06:47Z)
- A multi-artifact EEG denoising by frequency-based deep learning [5.231056284485742]
We develop a novel EEG denoising model that operates in the frequency domain, leveraging prior knowledge about noise spectral features.
Performance evaluation on the EEGdenoiseNet dataset shows that the proposed model achieves optimal results according to both temporal and spectral metrics.
arXiv  Detail & Related papers  (2023-10-26T12:01:47Z)
- Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv  Detail & Related papers  (2023-05-23T12:56:01Z)
- Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
Brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv  Detail & Related papers  (2023-05-18T06:54:56Z)
- The role of noise in denoising models for anomaly detection in medical images [62.0532151156057]
Pathological brain lesions exhibit diverse appearance in brain images.
Unsupervised anomaly detection approaches have been proposed using only normal data for training.
We show that optimization of the spatial resolution and magnitude of the noise improves the performance of different model training regimes.
arXiv  Detail & Related papers  (2023-01-19T21:39:38Z)
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv  Detail & Related papers  (2022-12-20T00:33:11Z)
- Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedded into a similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms the baseline models on a real-world (noisy) corpus but also enhances robustness, producing high-quality results in noisy environments.
arXiv  Detail & Related papers  (2021-04-13T17:54:33Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
           This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.