Improving Mandarin Prosodic Structure Prediction with Multi-level
Contextual Information
- URL: http://arxiv.org/abs/2308.16577v1
- Date: Thu, 31 Aug 2023 09:19:15 GMT
- Title: Improving Mandarin Prosodic Structure Prediction with Multi-level
Contextual Information
- Authors: Jie Chen, Changhe Song, Deyi Tuo, Xixin Wu, Shiyin Kang, Zhiyong Wu,
Helen Meng
- Abstract summary: This work proposes to use inter-utterance linguistic information to improve the performance of prosodic structure prediction (PSP).
Our method achieves better F1 scores in predicting prosodic word (PW), prosodic phrase (PPH), and intonational phrase (IPH) boundaries.
- Score: 68.89000132126536
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: For text-to-speech (TTS) synthesis, prosodic structure prediction (PSP) plays
an important role in producing natural and intelligible speech. Although
inter-utterance linguistic information can influence the speech interpretation
of the target utterance, previous work on PSP has mainly focused on utilizing
intra-utterance linguistic information of the current utterance only. This
work proposes to use inter-utterance linguistic information to improve the
performance of PSP. Multi-level contextual information, which includes both
inter-utterance and intra-utterance linguistic information, is extracted by a
hierarchical encoder from the character level, utterance level, and discourse
level of the input text. A multi-task learning (MTL) decoder then predicts
prosodic boundaries from the multi-level contextual information. Objective
evaluation results on two datasets show that our method achieves better F1
scores in predicting prosodic word (PW), prosodic phrase (PPH), and
intonational phrase (IPH) boundaries. This demonstrates the effectiveness of
using multi-level contextual information for PSP. Subjective preference tests
also indicate that the naturalness of the synthesized speech is improved.
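To make the described architecture concrete, below is a minimal PyTorch sketch of a hierarchical encoder with character-, utterance-, and discourse-level stages feeding a multi-task decoder that has one boundary classifier per prosodic level (PW, PPH, IPH). The use of Transformer layers, mean pooling for utterance embeddings, binary boundary labels, and all hyperparameters are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelPSP(nn.Module):
    """Hierarchical encoder + multi-task decoder for prosodic structure
    prediction, loosely following the abstract. Layer types and sizes
    here are assumptions for illustration only."""

    def __init__(self, vocab_size, d_model=256, nhead=4):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, d_model, padding_idx=0)
        # Character level: contextualize characters within each utterance.
        self.char_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        # Utterance/discourse level: contextualize utterance embeddings
        # across the whole discourse (inter-utterance information).
        self.disc_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=2)
        # Multi-task decoder: one binary classifier per prosodic level,
        # predicting boundary / no boundary after each character.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(2 * d_model, 2) for name in ("PW", "PPH", "IPH")})

    def forward(self, chars):
        # chars: (num_utterances, max_chars) character ids for one discourse.
        h_char = self.char_enc(self.char_emb(chars))     # (U, C, d)
        utt_emb = h_char.mean(dim=1)                     # (U, d) via mean pooling
        h_disc = self.disc_enc(utt_emb.unsqueeze(0))[0]  # (U, d), discourse-aware
        # Broadcast discourse-level context back to every character and fuse.
        ctx = h_disc.unsqueeze(1).expand_as(h_char)      # (U, C, d)
        fused = torch.cat([h_char, ctx], dim=-1)         # (U, C, 2d)
        return {name: head(fused) for name, head in self.heads.items()}

def mtl_loss(logits, labels):
    # MTL objective: sum of per-level cross-entropies over all characters.
    return sum(F.cross_entropy(logits[k].flatten(0, 1), labels[k].flatten())
               for k in logits)

# Toy usage: a discourse of 3 utterances, 10 characters each.
model = MultiLevelPSP(vocab_size=5000)
chars = torch.randint(1, 5000, (3, 10))
labels = {k: torch.randint(0, 2, (3, 10)) for k in ("PW", "PPH", "IPH")}
loss = mtl_loss(model(chars), labels)
loss.backward()
```

In this sketch, the inter-utterance information enters through the discourse-level encoder and is concatenated back onto each character representation, so every boundary decision can condition on context beyond the current utterance.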
Related papers
- Resolving Word Vagueness with Scenario-guided Adapter for Natural Language Inference [24.58277380514406]
Natural Language Inference (NLI) is a crucial task in natural language processing.
We propose an innovative ScenaFuse adapter that simultaneously integrates large-scale pre-trained linguistic knowledge and relevant visual information.
Our approach bridges the gap between language and vision, leading to improved understanding and inference capabilities in NLI tasks.
arXiv Detail & Related papers (2024-05-21T01:19:52Z)
- Towards a Deep Understanding of Multilingual End-to-End Speech Translation [52.26739715012842]
We analyze representations learnt in a multilingual end-to-end speech translation model trained over 22 languages.
We derive three major findings from our analysis.
arXiv Detail & Related papers (2023-10-31T13:50:55Z)
- Can Linguistic Knowledge Improve Multimodal Alignment in Vision-Language Pretraining? [34.609984453754656]
We aim to elucidate the impact of comprehensive linguistic knowledge, including semantic expression and syntactic structure, on multimodal alignment.
Specifically, we design and release SNARE, the first large-scale multimodal alignment probing benchmark.
arXiv Detail & Related papers (2023-08-24T16:17:40Z)
- Textless Unit-to-Unit training for Many-to-Many Multilingual Speech-to-Speech Translation [65.13824257448564]
This paper proposes a textless training method for many-to-many multilingual speech-to-speech translation.
By treating the speech units as pseudo-text, we can focus on the linguistic content of the speech.
We demonstrate that the proposed UTUT model can be effectively utilized not only for Speech-to-Speech Translation (S2ST) but also for multilingual Text-to-Speech Synthesis (T2S) and Text-to-Speech Translation (T2ST).
arXiv Detail & Related papers (2023-08-03T15:47:04Z)
- The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation [13.352795145385645]
Speech translation (ST) is a good means of pretraining speech models for end-to-end spoken language understanding.
We show that our models outperform baselines on monolingual and multilingual intent classification.
We also create new benchmark datasets for speech summarization and low-resource/zero-shot transfer from English to French or Spanish.
arXiv Detail & Related papers (2023-05-16T17:53:03Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Unified Speech-Text Pre-training for Speech Translation and Recognition [113.31415771943162]
We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition.
The proposed method incorporates four self-supervised and supervised subtasks for cross-modality learning.
It achieves improvements of 1.7 to 2.3 BLEU over the state of the art on the MuST-C speech translation dataset.
arXiv Detail & Related papers (2022-04-11T20:59:51Z)
- SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training [33.02912456062474]
We build a single encoder with the BERT objective on unlabeled text together with the w2v-BERT objective on unlabeled speech.
We demonstrate that incorporating both speech and text data during pre-training can significantly improve downstream quality on CoVoST2 speech translation.
arXiv Detail & Related papers (2021-10-20T00:59:36Z)
- Multilingual Neural RST Discourse Parsing [24.986030179701405]
We investigate two approaches to establish a neural, cross-lingual discourse parser via multilingual vector representations and segment-level translation.
Experiment results show that both methods are effective even with limited training data, and achieve state-of-the-art performance on cross-lingual, document-level discourse parsing.
arXiv Detail & Related papers (2020-12-03T05:03:38Z)
- SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding [61.02342238771685]
Spoken language understanding requires a model to analyze input acoustic signal to understand its linguistic content and make predictions.
Various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.
We propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.
arXiv Detail & Related papers (2020-10-05T19:29:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.