Blind Room Parameter Estimation Using Multiple-Multichannel Speech Recordings
- URL: http://arxiv.org/abs/2107.13832v1
- Date: Thu, 29 Jul 2021 08:51:49 GMT
- Title: Blind Room Parameter Estimation Using Multiple-Multichannel Speech Recordings
- Authors: Prerak Srivastava, Antoine Deleforge, Emmanuel Vincent
- Abstract summary: Knowing the geometrical and acoustical parameters of a room may benefit applications such as audio augmented reality, speech dereverberation or audio forensics.
We study the problem of jointly estimating the total surface area, the volume, as well as the frequency-dependent reverberation time and mean surface absorption of a room.
A novel convolutional neural network architecture leveraging both single- and inter-channel cues is proposed and trained on a large, realistic simulated dataset.
- Score: 37.145413836886455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowing the geometrical and acoustical parameters of a room may benefit
applications such as audio augmented reality, speech dereverberation or audio
forensics. In this paper, we study the problem of jointly estimating the total
surface area, the volume, as well as the frequency-dependent reverberation time
and mean surface absorption of a room in a blind fashion, based on two-channel
noisy speech recordings from multiple, unknown source-receiver positions. A
novel convolutional neural network architecture leveraging both single- and
inter-channel cues is proposed and trained on a large, realistic simulated
dataset. Results on both simulated and real data show that using multiple
observations in one room significantly reduces estimation errors and variances
on all target quantities, and that using two channels helps the estimation of
surface and volume. The proposed model outperforms a recently proposed blind
volume estimation method on the considered datasets.
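Code sketch: the abstract above does not detail the network, so the following is a minimal illustration only, assuming a PyTorch model whose input stacks two per-channel log-magnitude spectrograms (single-channel cues) with an inter-channel phase-difference map (one possible inter-channel cue), and whose shared convolutional trunk feeds one regression head per target quantity. Layer sizes, the number of reverberation-time bands, and the feature choice are all assumptions, not the authors' architecture.

```python
# Minimal sketch of a room-parameter regressor combining single- and
# inter-channel cues; the architecture is illustrative, not the paper's.
import torch
import torch.nn as nn

class RoomParamNet(nn.Module):
    def __init__(self, n_bands: int = 6):
        super().__init__()
        # Input: 3 feature maps = two log-magnitude spectrograms plus an
        # inter-channel phase-difference map (assumed inter-channel cue).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size embedding
            nn.Flatten(),
        )
        # One regression head per target quantity.
        self.surface = nn.Linear(64, 1)           # total surface area
        self.volume = nn.Linear(64, 1)            # room volume
        self.rt60 = nn.Linear(64, n_bands)        # reverberation time per band
        self.absorption = nn.Linear(64, n_bands)  # mean absorption per band

    def forward(self, x):  # x: (batch, 3, n_freq, n_frames)
        z = self.trunk(x)
        return self.surface(z), self.volume(z), self.rt60(z), self.absorption(z)

# Example: a batch of 4 inputs with 257 frequency bins and 200 STFT frames.
net = RoomParamNet()
surface, volume, rt60, absorption = net(torch.randn(4, 3, 257, 200))
```

Under this sketch, predictions from several recordings of the same room could simply be averaged, which is one plausible way to exploit the multiple observations that the abstract credits with reducing estimation errors and variances.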
Related papers
- Unsupervised Blind Joint Dereverberation and Room Acoustics Estimation with Diffusion Models [21.669363620480333]
We present an unsupervised method for blind dereverberation and room impulse response estimation, called BUDDy.
In a blind scenario where the room impulse response is unknown, BUDDy successfully performs speech dereverberation.
Unlike supervised methods, which often struggle to generalize, BUDDy seamlessly adapts to different acoustic conditions.
arXiv Detail & Related papers (2024-08-14T11:31:32Z)
- Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark [65.79402756995084]
Real Acoustic Fields (RAF) is a new dataset that captures real acoustic room data from multiple modalities.
RAF is the first dataset to provide densely captured room acoustic data.
arXiv Detail & Related papers (2024-03-27T17:59:56Z)
- Blind Acoustic Room Parameter Estimation Using Phase Features [4.473249957074495]
We propose utilizing novel phase-related features to extend recent approaches to blindly estimate the so-called "reverberation fingerprint" parameters.
The addition of these features is shown to outperform existing methods that rely solely on magnitude-based spectral features; a minimal sketch of one such phase cue appears after this list.
arXiv Detail & Related papers (2023-03-13T20:05:41Z)
- Audio-visual multi-channel speech separation, dereverberation and recognition [70.34433820322323]
This paper proposes an audio-visual multi-channel speech separation, dereverberation and recognition approach.
The advantage of the additional visual modality over using audio only is demonstrated on two neural dereverberation approaches.
Experiments conducted on the LRS2 dataset suggest that the proposed audio-visual multi-channel speech separation, dereverberation and recognition system outperforms the baseline.
arXiv Detail & Related papers (2022-04-05T04:16:03Z)
- Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z)
- Data Fusion for Audiovisual Speaker Localization: Extending Dynamic Stream Weights to the Spatial Domain [103.3388198420822]
Estimating the positions of multiple speakers can be helpful for tasks like automatic speech recognition or speaker diarization.
This paper proposes a novel audiovisual data fusion framework for speaker localization by assigning individual dynamic stream weights to specific regions.
A performance evaluation using audiovisual recordings yields promising results, with the proposed fusion approach outperforming all baseline models.
arXiv Detail & Related papers (2021-02-23T09:59:31Z)
- Improved MVDR Beamforming Using LSTM Speech Models to Clean Spatial Clustering Masks [14.942060304734497]
Spatial clustering techniques can achieve significant multi-channel noise reduction across relatively arbitrary microphone configurations.
LSTM neural networks have successfully been trained to recognize speech from noise on single-channel inputs, but have difficulty taking full advantage of the information in multi-channel recordings.
This paper integrates these two approaches, training LSTM speech models to clean the masks generated by the Model-based EM Source Separation and Localization (MESSL) spatial clustering method; a sketch of the generic mask-driven MVDR recipe appears after this list.
arXiv Detail & Related papers (2020-12-02T22:35:00Z)
- On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments [33.79711018198589]
This paper introduces a new method for multi-channel time domain speech separation in reverberant environments.
A fully-convolutional neural network structure has been used to directly separate speech from multiple microphone recordings.
To reduce the influence of reverberation on spatial feature extraction, a dereverberation pre-processing method has been applied.
arXiv Detail & Related papers (2020-11-11T18:25:07Z)
- Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation [66.46123655365113]
Target speech separation refers to extracting the target speaker's speech from mixed signals.
Two main challenges are the complex acoustic environment and the real-time processing requirement.
We propose a temporal-spatial neural filter, which directly estimates the target speech waveform from a multi-speaker mixture.
arXiv Detail & Related papers (2020-01-02T11:12:50Z)
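Code sketch: the summary of "Blind Acoustic Room Parameter Estimation Using Phase Features" above does not specify which phase-related features are used. The following is a minimal sketch of one common phase cue, the inter-channel phase difference (IPD) computed from two-channel STFTs; the function name and parameters are illustrative assumptions.

```python
# Sketch of one plausible phase-related feature: the inter-channel phase
# difference (IPD) between two microphone signals.
import numpy as np
from scipy.signal import stft

def ipd_features(x_left, x_right, fs=16000, nperseg=512):
    """Return cos/sin-encoded IPD maps of shape (2, n_freq, n_frames)."""
    _, _, X_l = stft(x_left, fs=fs, nperseg=nperseg)
    _, _, X_r = stft(x_right, fs=fs, nperseg=nperseg)
    ipd = np.angle(X_l) - np.angle(X_r)
    # cos/sin encoding avoids the 2*pi wrap-around discontinuity of raw phase.
    return np.stack([np.cos(ipd), np.sin(ipd)])
```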
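Code sketch: likewise, the summary of "Improved MVDR Beamforming Using LSTM Speech Models to Clean Spatial Clustering Masks" does not detail the integration, so the sketch below shows only the generic mask-driven MVDR recipe such systems typically build on: a time-frequency speech mask (here, any cleaned spatial-clustering mask) weights the spatial covariance estimates, which then define a per-frequency MVDR filter. The eigenvector-based steering estimate and all names are assumptions.

```python
# Generic mask-driven MVDR beamformer: a speech mask weights the spatial
# covariance estimates that define the per-frequency filter.
import numpy as np

def mvdr_from_mask(X, speech_mask, eps=1e-8):
    """X: STFT tensor (n_mics, n_freq, n_frames); speech_mask in [0, 1],
    shape (n_freq, n_frames). Returns the beamformed STFT (n_freq, n_frames)."""
    M, F, T = X.shape
    Y = np.zeros((F, T), dtype=complex)
    for f in range(F):
        Xf = X[:, f, :]                      # (n_mics, n_frames)
        w_s = speech_mask[f]                 # per-frame speech weights
        w_n = 1.0 - w_s                      # per-frame noise weights
        # Mask-weighted spatial covariance matrices.
        Phi_s = (w_s * Xf) @ Xf.conj().T / max(w_s.sum(), eps)
        Phi_n = (w_n * Xf) @ Xf.conj().T / max(w_n.sum(), eps)
        Phi_n += eps * np.eye(M)             # regularize for inversion
        # Steering vector: principal eigenvector of the speech covariance.
        d = np.linalg.eigh(Phi_s)[1][:, -1]
        num = np.linalg.solve(Phi_n, d)      # Phi_n^{-1} d
        w = num / (d.conj() @ num)           # MVDR weights for this frequency
        Y[f] = w.conj() @ Xf                 # apply w^H to each frame
    return Y  # invert with an inverse STFT to obtain the time-domain signal
```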