Audio-Visual Speech Separation Using Cross-Modal Correspondence Loss
- URL: http://arxiv.org/abs/2103.01463v1
- Date: Tue, 2 Mar 2021 04:29:26 GMT
- Title: Audio-Visual Speech Separation Using Cross-Modal Correspondence Loss
- Authors: Naoki Makishima, Mana Ihori, Akihiko Takashima, Tomohiro Tanaka, Shota
Orihashi, Ryo Masumura
- Abstract summary: We present an audio-visual speech separation learning method.
It considers the correspondence between the separated signals and the visual signals to reflect the speech characteristics during training.
- Score: 28.516240952627083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an audio-visual speech separation learning method that considers
the correspondence between the separated signals and the visual signals to
reflect the speech characteristics during training. Audio-visual speech
separation is a technique to estimate the individual speech signals from a
mixture using the visual signals of the speakers. Conventional studies on
audio-visual speech separation mainly train the separation model on the
audio-only loss, which reflects the distance between the source signals and the
separated signals. However, conventional losses do not reflect the
characteristics of the speech signals, including the speaker's characteristics
and phonetic information, which leads to distortion or residual noise. To
address this problem, we propose the cross-modal correspondence (CMC) loss,
which is based on the co-occurrence of the speech signal and the visual signal.
Since the visual signal is not affected by background noise and contains
speaker and phonetic information, using the CMC loss enables the audio-visual
speech separation model to remove noise while preserving the speech
characteristics. Experimental results demonstrate that the proposed method
learns the co-occurrence on the basis of the CMC loss, which improves separation
performance.
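Since the abstract describes the loss only at a high level, the following is a minimal sketch of how such a training objective could look, assuming a contrastive formulation of the co-occurrence term; the SI-SNR term stands in for the conventional "audio-only loss", and the names (audio_emb, visual_emb, alpha) are illustrative placeholders rather than the paper's definitions.

```python
# A minimal sketch (not the authors' exact formulation): a conventional
# audio-only separation loss plus a cross-modal correspondence (CMC) style
# term that rewards co-occurrence between each separated signal and the
# matching speaker's visual stream.
import torch
import torch.nn.functional as F


def audio_only_loss(est, ref, eps=1e-8):
    """Negative scale-invariant SNR, a typical audio-only separation loss."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    proj = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps) * ref
    noise = est - proj
    si_snr = 10 * torch.log10((proj.pow(2).sum(-1) + eps) / (noise.pow(2).sum(-1) + eps))
    return -si_snr.mean()


def cmc_loss(audio_emb, visual_emb, temperature=0.1):
    """Contrastive co-occurrence term: the embedding of each separated signal
    should match the visual embedding of the same speaker (diagonal pairs)
    and not those of the other speakers."""
    a = F.normalize(audio_emb, dim=-1)            # (num_speakers, dim)
    v = F.normalize(visual_emb, dim=-1)           # (num_speakers, dim)
    logits = a @ v.t() / temperature              # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def total_loss(separated, sources, audio_emb, visual_emb, alpha=0.1):
    """Audio-only loss plus the CMC-style regularizer (alpha is a guess)."""
    return audio_only_loss(separated, sources) + alpha * cmc_loss(audio_emb, visual_emb)
```

In such a setup the audio embedding would be computed from the separated waveform and the visual embedding from the corresponding lip video, so a mis-assigned or noisy output is penalized even when its waveform distance to the source is small, which is the behavior the abstract argues an audio-only loss cannot enforce.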
Related papers
- Audio-visual End-to-end Multi-channel Speech Separation, Dereverberation
and Recognition [52.11964238935099]
An audio-visual multi-channel speech separation, dereverberation and recognition approach is proposed in this paper.
The advantage of video input is consistently demonstrated in the mask-based MVDR speech separation and the DNN-WPE or spectral mapping (SpecM) based speech dereverberation front-ends.
Experiments were conducted on overlapped and reverberant speech data constructed using simulation or replay of the Oxford LRS2 dataset.
arXiv Detail & Related papers (2023-07-06T10:50:46Z)
- Speech inpainting: Context-based speech synthesis guided by video [29.233167442719676]
This paper focuses on the problem of audio-visual speech inpainting, which is the task of synthesizing the speech in a corrupted audio segment.
We present an audio-visual transformer-based deep learning model that leverages visual cues that provide information about the content of the corrupted audio.
We also show how visual features extracted with AV-HuBERT, a large audio-visual transformer for speech recognition, are suitable for synthesizing speech.
arXiv Detail & Related papers (2023-06-01T09:40:47Z)
- Language-Guided Audio-Visual Source Separation via Trimodal Consistency [64.0580750128049]
A key challenge in this task is learning to associate the linguistic description of a sound-emitting object to its visual features and the corresponding components of the audio waveform.
We adapt off-the-shelf vision-language foundation models to provide pseudo-target supervision via two novel loss functions.
We demonstrate the effectiveness of our self-supervised approach on three audio-visual separation datasets.
arXiv Detail & Related papers (2023-03-28T22:45:40Z)
- Audio-visual multi-channel speech separation, dereverberation and recognition [70.34433820322323]
This paper proposes an audio-visual multi-channel speech separation, dereverberation and recognition approach.
The advantage of the additional visual modality over using audio only is demonstrated on two neural dereverberation approaches.
Experiments conducted on the LRS2 dataset suggest that the proposed audio-visual multi-channel speech separation, dereverberation and recognition system outperforms the baseline.
arXiv Detail & Related papers (2022-04-05T04:16:03Z)
- Disentangling speech from surroundings with neural embeddings [17.958451380305892]
We present a method to separate speech signals from noisy environments in the embedding space of a neural audio codec.
We introduce a new training procedure that allows our model to produce structured encodings of audio waveforms given by embedding vectors.
arXiv Detail & Related papers (2022-03-29T13:58:33Z)
- VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency [111.55430893354769]
Given a video, the goal is to extract the speech associated with a face in spite of simultaneous background sounds and/or other human speakers.
Our approach jointly learns audio-visual speech separation and cross-modal speaker embeddings from unlabeled video.
It yields state-of-the-art results on five benchmark datasets for audio-visual speech separation and enhancement.
arXiv Detail & Related papers (2021-01-08T18:25:24Z)
- An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation [57.68765353264689]
Speech enhancement and speech separation are two related tasks.
Traditionally, these tasks have been tackled using signal processing and machine learning techniques.
More recently, deep learning has been exploited to achieve strong performance.
arXiv Detail & Related papers (2020-08-21T17:24:09Z)
- Audio-visual Multi-channel Recognition of Overlapped Speech [79.21950701506732]
This paper presents an audio-visual multi-channel overlapped speech recognition system featuring tightly integrated separation front-end and recognition back-end.
Experiments suggest that the proposed multi-channel AVSR system outperforms the baseline audio-only ASR system by up to 6.81% (26.83% relative) and 22.22% (56.87% relative) absolute word error rate (WER) reductions on overlapped speech constructed using simulation or replay of the Lip Reading Sentences 2 (LRS2) dataset, respectively (the implied baseline WERs are noted below).
arXiv Detail & Related papers (2020-05-18T10:31:19Z)
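Note on the WER figures quoted above: assuming the usual convention that a relative reduction equals the absolute reduction divided by the baseline WER, the reported numbers imply baseline audio-only WERs of roughly 6.81 / 0.2683 ≈ 25.4% for the simulated condition and 22.22 / 0.5687 ≈ 39.1% for the replayed condition.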