FluentEditor+: Text-based Speech Editing by Modeling Local Hierarchical Acoustic Smoothness and Global Prosody Consistency
- URL: http://arxiv.org/abs/2410.03719v1
- Date: Sat, 28 Sep 2024 10:18:35 GMT
- Title: FluentEditor+: Text-based Speech Editing by Modeling Local Hierarchical Acoustic Smoothness and Global Prosody Consistency
- Authors: Rui Liu, Jiatian Xi, Ziyue Jiang, Haizhou Li
- Abstract summary: Text-based speech editing (TSE) allows users to modify speech by editing the corresponding text and performing operations such as cutting, copying, and pasting.
Current TSE techniques focus on minimizing discrepancies between generated speech and reference targets within edited segments.
However, seamlessly integrating edited segments with unaltered portions of the audio remains challenging.
This paper introduces a novel approach, FluentEditor$\tiny +$, designed to overcome these limitations.
- Score: 40.95700389032375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-based speech editing (TSE) allows users to modify speech by editing the corresponding text and performing operations such as cutting, copying, and pasting to generate updated audio without altering the original recording directly. While current TSE techniques focus on minimizing discrepancies between generated speech and reference targets within edited segments, they often neglect the importance of maintaining both local and global fluency in the context of the original discourse. Additionally, seamlessly integrating edited segments with unaltered portions of the audio remains challenging, typically requiring support from text-to-speech (TTS) systems. This paper introduces a novel approach, FluentEditor$\tiny +$, designed to overcome these limitations. FluentEditor$\tiny +$ employs advanced feature extraction techniques to capture both acoustic and prosodic characteristics, ensuring fluent transitions between edited and unedited regions. The model ensures segmental acoustic smoothness and global prosody consistency, allowing seamless splicing of speech while preserving the coherence and naturalness of the output. Extensive experiments on the VCTK and LibriTTS datasets show that FluentEditor$\tiny +$ surpasses existing TTS-based methods, including EditSpeech, CampNet, $A^3T$, FluentSpeech, and FluentEditor, in both fluency and prosody. Ablation studies further highlight the contributions of each module to the overall effectiveness of the system.
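The abstract describes the fluency criteria only at a high level. As a minimal sketch of what an objective combining segmental acoustic smoothness and global prosody consistency could look like (every name, tensor shape, and weight below is an assumption for illustration, not the paper's definition):

```python
import torch
import torch.nn.functional as F

def fluency_aware_loss(mel_pred, mel_ref, prosody_edit, prosody_ctx,
                       left, right, k=3, alpha=1.0, beta=1.0):
    """Hypothetical sketch of a fluency-aware TSE objective.

    mel_pred/mel_ref: (T, D) mel-spectrograms of the edited and reference utterance.
    prosody_edit:     (T_e, P) prosody features (e.g. pitch/energy) of the edited region.
    prosody_ctx:      (T_c, P) prosody features of the unedited context.
    left, right:      frame indices of the two editing boundaries.
    """
    # Standard reconstruction term inside the edited segment.
    recon = F.l1_loss(mel_pred[left:right], mel_ref[left:right])

    # Local acoustic smoothness: frame-to-frame deltas should not spike
    # at the splice points between edited and unedited audio.
    def boundary_delta(b):
        window = mel_pred[max(b - k, 0): b + k]
        return (window[1:] - window[:-1]).abs().mean()
    local = boundary_delta(left) + boundary_delta(right)

    # Global prosody consistency: match low-order statistics of the edited
    # region's prosody to those of the surrounding context.
    glob = F.mse_loss(prosody_edit.mean(0), prosody_ctx.mean(0)) \
         + F.mse_loss(prosody_edit.std(0), prosody_ctx.std(0))

    return recon + alpha * local + beta * glob
```

The local term penalizes frame-to-frame jumps around the splice points, while the global term matches low-order prosody statistics of the edited region to its unedited context; the paper's actual hierarchical formulation may differ.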
Related papers
- Language-Guided Joint Audio-Visual Editing via One-Shot Adaptation [56.92841782969847] (arXiv, 2024-10-09)
We introduce a novel task called language-guided joint audio-visual editing.
Given an audio and image pair of a sounding event, this task aims at generating new audio-visual content by editing the given sounding event conditioned on the language guidance.
We propose a new diffusion-based framework for joint audio-visual editing and introduce two key ideas.
- DiffEditor: Enhancing Speech Editing with Semantic Enrichment and Acoustic Consistency [20.3466261946094] (arXiv, 2024-09-19)
We introduce DiffEditor, a novel speech editing model designed to enhance performance in OOD text scenarios.
We enrich the semantic information of phoneme embeddings by integrating word embeddings extracted from a pretrained language model.
We propose a first-order loss function to promote smoother transitions at editing boundaries and enhance the overall fluency of the edited speech.
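The abstract does not give the loss itself; one plausible reading of a first-order loss at editing boundaries is a penalty on the first-order time difference of acoustic features around each splice point. A hedged sketch, with all names hypothetical:

```python
import torch

def first_order_boundary_loss(feats, boundaries, k=2):
    """Illustrative reading of a first-order boundary loss (not DiffEditor's
    published formula): penalize large frame deltas near each editing boundary.

    feats:      (T, D) acoustic features of the edited utterance.
    boundaries: list of frame indices where edited and original audio meet.
    """
    delta = feats[1:] - feats[:-1]                 # first-order time difference
    loss = feats.new_zeros(())
    for b in boundaries:
        lo, hi = max(b - k, 0), min(b + k, delta.shape[0])
        loss = loss + delta[lo:hi].pow(2).mean()   # keep deltas small near the splice
    return loss / max(len(boundaries), 1)
```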
- Voice Attribute Editing with Text Prompt [48.48628304530097] (arXiv, 2024-04-13)
This paper introduces a novel task: voice attribute editing with text prompt.
The goal is to make relative modifications to voice attributes according to the actions described in the text prompt.
To solve this task, VoxEditor, an end-to-end generative model, is proposed.
- FluentEditor: Text-based Speech Editing by Considering Acoustic and Prosody Consistency [44.7425844190807] (arXiv, 2023-09-21)
Text-based speech editing (TSE) techniques are designed to enable users to edit the output audio by modifying the input text transcript instead of the audio itself.
We propose a fluent speech editing model, termed FluentEditor, by incorporating a fluency-aware training criterion into TSE training.
Subjective and objective experimental results on VCTK demonstrate that FluentEditor outperforms all advanced baselines in terms of naturalness and fluency.
- Text-only Domain Adaptation using Unified Speech-Text Representation in Transducer [12.417314740402587] (arXiv, 2023-06-07)
We present a method to learn a Unified Speech-Text Representation in Conformer Transducer (USTR-CT) to enable fast domain adaptation using a text-only corpus.
Experiments on adapting LibriSpeech to SPGISpeech show the proposed method reduces the word error rate (WER) by 44% relative on the target domain.
- CampNet: Context-Aware Mask Prediction for End-to-End Text-Based Speech Editing [67.96138567288197] (arXiv, 2022-02-21)
This paper proposes a novel end-to-end text-based speech editing method called the context-aware mask prediction network (CampNet).
The model simulates the text-based speech editing process by randomly masking part of the speech and then predicting the masked region from the surrounding speech context.
It can solve unnatural prosody in the edited region and synthesize the speech corresponding to the unseen words in the transcript.
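As a generic illustration of this masked-prediction training signal (a sketch under assumed shapes, not CampNet's published recipe), one can hide a random contiguous span of mel frames and train the model to reconstruct it from the remaining context:

```python
import torch

def mask_random_span(mel, ratio=0.15):
    """Mask a random contiguous span of mel frames so a model must predict it
    from the surrounding speech context and the text (illustrative only).

    mel: (T, D) mel-spectrogram. Returns the masked input and a boolean mask
    marking the frames to predict.
    """
    T = mel.shape[0]
    span = max(1, int(T * ratio))
    start = torch.randint(0, T - span + 1, (1,)).item()
    mask = torch.zeros(T, dtype=torch.bool)
    mask[start:start + span] = True
    masked = mel.clone()
    masked[mask] = 0.0                             # zero out the span to predict
    return masked, mask
```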
- EdiTTS: Score-based Editing for Controllable Text-to-Speech [9.34612743192798] (arXiv, 2021-10-06)
EdiTTS is an off-the-shelf speech editing methodology based on score-based generative modeling for text-to-speech synthesis.
We apply coarse yet deliberate perturbations in the Gaussian prior space to induce desired behavior from the diffusion model.
Listening tests demonstrate that EdiTTS is capable of reliably generating natural-sounding audio that satisfies user-imposed requirements.
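As a rough illustration of editing through prior-space perturbation (the perturbation direction below is a placeholder; EdiTTS derives its perturbations from the editing targets), one might shift the Gaussian prior only inside the region to edit before running the reverse diffusion:

```python
import torch

def perturb_prior(z, region, direction, scale=0.5):
    """Sketch of score-based editing via prior perturbation: nudge the
    Gaussian prior sample inside the region to edit, then run the diffusion
    decoder's denoising as usual. The direction is an assumed placeholder.

    z:         (T, D) prior sample, z ~ N(0, I).
    region:    time slice covering the segment to edit.
    direction: (D,) coarse perturbation direction.
    """
    z_edit = z.clone()
    z_edit[region] = z_edit[region] + scale * direction
    return z_edit

# Usage: perturb frames 40..80 of a 200-frame prior before decoding.
z = torch.randn(200, 80)
z_edited = perturb_prior(z, slice(40, 80), direction=torch.ones(80))
```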
- Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration [62.75234183218897] (arXiv, 2021-09-12)
We propose a one-stage context-aware framework to generate natural and coherent target speech without any training data of the speaker.
We generate the mel-spectrogram of the edited speech with a transformer-based decoder.
It outperforms a recent zero-shot TTS engine by a large margin.
- Context-Aware Prosody Correction for Text-Based Speech Editing [28.459695630420832] (arXiv, 2021-02-16)
A major drawback of current systems is that edited recordings often sound unnatural because of prosody mismatches around edited regions.
We propose a new context-aware method for more natural sounding text-based editing of speech.