RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction
- URL: http://arxiv.org/abs/2403.05010v3
- Date: Mon, 07 Oct 2024 02:08:05 GMT
- Title: RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction
- Authors: Peng Liu, Dongyang Dai, Zhiyong Wu
- Abstract summary: We introduce RFWave, a cutting-edge multi-band Rectified Flow approach to reconstruct high-fidelity audio waveforms from Mel-spectrograms or discrete acoustic tokens.
RFWave uniquely generates complex spectrograms and operates at the frame level, processing all subbands simultaneously to boost efficiency.
Our empirical evaluations show that RFWave not only provides outstanding reconstruction quality but also offers vastly superior computational efficiency, enabling audio generation at speeds up to 160 times faster than real-time on a GPU.
- Abstract: Recent advancements in generative modeling have significantly enhanced the reconstruction of audio waveforms from various representations. While diffusion models are adept at this task, they are hindered by latency issues due to their operation at the individual sample point level and the need for numerous sampling steps. In this study, we introduce RFWave, a cutting-edge multi-band Rectified Flow approach designed to reconstruct high-fidelity audio waveforms from Mel-spectrograms or discrete acoustic tokens. RFWave uniquely generates complex spectrograms and operates at the frame level, processing all subbands simultaneously to boost efficiency. Leveraging Rectified Flow, which targets a straight transport trajectory, RFWave achieves reconstruction with just 10 sampling steps. Our empirical evaluations show that RFWave not only provides outstanding reconstruction quality but also offers vastly superior computational efficiency, enabling audio generation at speeds up to 160 times faster than real-time on a GPU. An online demonstration is available at: https://rfwave-demo.github.io/rfwave/.
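To make the few-step claim concrete, below is a minimal sketch of the Euler sampling loop that Rectified Flow enables; `velocity_model`, the conditioning argument, and all shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

def rectified_flow_sample(velocity_model, mel, shape, num_steps=10):
    """Integrate the learned ODE dx/dt = v(x, t, mel) from t = 0 (noise)
    to t = 1 (data) with plain Euler steps. Rectified Flow trains the
    velocity field toward straight transport paths, which is why a small
    step count (the paper reports 10) can suffice."""
    x = torch.randn(shape)                        # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((shape[0],), i * dt)       # current time, per batch item
        x = x + velocity_model(x, t, mel) * dt    # one straight Euler step
    return x                                      # e.g. a complex spectrogram
```

Because the model generates a complex spectrogram at the frame level, a single inverse STFT then recovers the waveform, so each sampling step operates on frames rather than individual sample points.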
Related papers
- Accelerating High-Fidelity Waveform Generation via Adversarial Flow Matching Optimization
This paper introduces PeriodWave-Turbo, a high-fidelity and highly efficient waveform generation model built via adversarial flow matching optimization.
It requires only 1,000 fine-tuning steps to achieve state-of-the-art performance across various objective metrics.
By scaling up the backbone of PeriodWave from 29M to 70M parameters for improved generalization, PeriodWave-Turbo achieves unprecedented performance.
arXiv Detail & Related papers (2024-08-15T08:34:00Z)
- PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation
We propose PeriodWave, a novel universal waveform generation model.
We introduce a period-aware flow matching estimator that can capture the periodic features of the waveform signal.
We also propose a single period-conditional universal estimator that supports parallel feed-forward inference via period-wise batching (see the reshaping sketch after this entry).
arXiv Detail & Related papers (2024-08-14T13:36:17Z)
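Below is a minimal sketch of that period-wise reshaping, the same fold that multi-period discriminators use in HiFi-GAN; the function name, padding choice, and period set are assumptions, since the abstract does not spell out PeriodWave's exact layout.

```python
import torch
import torch.nn.functional as F

def reshape_by_period(x: torch.Tensor, period: int) -> torch.Tensor:
    """Fold a batch of waveforms (B, T) into (B, T'/period, period) so that
    samples exactly one period apart line up along the last axis, exposing
    periodic structure to 2D operations. The tail is zero-padded so the
    length divides evenly."""
    B, T = x.shape
    pad = (-T) % period
    x = F.pad(x, (0, pad))
    return x.view(B, (T + pad) // period, period)

# One shared, period-conditioned estimator can then process every view;
# running the views as one batch is what the abstract calls period-wise
# batch inference.
wave = torch.randn(4, 16000)                      # dummy batch of 1 s audio
views = [reshape_by_period(wave, p) for p in (2, 3, 5, 7)]
```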
- WaveDH: Wavelet Sub-bands Guided ConvNet for Efficient Image Dehazing
We introduce WaveDH, a novel and compact ConvNet designed to address this efficiency gap in image dehazing.
WaveDH leverages wavelet sub-bands for guided up- and downsampling and frequency-aware feature refinement.
It outperforms many state-of-the-art methods on several image dehazing benchmarks at a significantly reduced computational cost.
arXiv Detail & Related papers (2024-04-02T02:52:05Z)
- WaveNeRF: Wavelet-based Generalizable Neural Radiance Fields
We design WaveNeRF, which integrates wavelet frequency decomposition into MVS and NeRF.
WaveNeRF achieves superior generalizable radiance field modeling when given only three input images.
arXiv Detail & Related papers (2023-08-09T09:24:56Z)
- From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion
Deep generative models can generate high-fidelity audio conditioned on various types of representations.
These models are prone to generating audible artifacts when the conditioning is flawed or imperfect.
We propose a high-fidelity multi-band diffusion-based framework that generates any type of audio modality from low-bitrate discrete representations (a band-splitting sketch follows this entry).
arXiv Detail & Related papers (2023-08-02T22:14:29Z)
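A minimal sketch of the band-splitting idea, assuming ordinary Butterworth filters; the band edges, filter order, and filter family are illustrative, not Multi-Band Diffusion's actual filter bank.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def split_bands(wave, sr, edges=(1500, 3000, 6000)):
    """Split a mono waveform into len(edges)+1 adjacent frequency bands;
    each band can then be modeled by its own diffusion process and the
    outputs summed back. The edges (Hz) are illustrative."""
    bands = []
    sos = butter(4, edges[0], btype="lowpass", fs=sr, output="sos")
    bands.append(sosfiltfilt(sos, wave))          # lowest band
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, (lo, hi), btype="bandpass", fs=sr, output="sos")
        bands.append(sosfiltfilt(sos, wave))      # middle bands
    sos = butter(4, edges[-1], btype="highpass", fs=sr, output="sos")
    bands.append(sosfiltfilt(sos, wave))          # highest band
    return np.stack(bands)  # (num_bands, num_samples); row sum ~ input

bands = split_bands(np.random.randn(24000), sr=24000)
```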
- WaveDM: Wavelet-Based Diffusion Models for Image Restoration
The Wavelet-Based Diffusion Model (WaveDM) learns the distribution of clean images in the wavelet domain, conditioned on the wavelet spectrum of degraded images.
WaveDM achieves state-of-the-art performance with efficiency comparable to traditional one-pass methods.
arXiv Detail & Related papers (2023-05-23T08:41:04Z)
- Synthetic Wave-Geometric Impulse Responses for Improved Speech Dereverberation
We show that accurately simulating the low-frequency components of Room Impulse Responses (RIRs) is important to achieving good dereverberation.
We demonstrate that speech dereverberation models trained on hybrid synthetic RIRs outperform models trained on RIRs generated by prior geometric ray tracing methods.
arXiv Detail & Related papers (2022-12-10T20:15:23Z)
- Wavelet Diffusion Models are fast and scalable Image Generators
Diffusion models are a powerful solution for high-fidelity image generation, often exceeding GANs in quality.
The recent DiffusionGAN method significantly reduces running time by cutting the number of sampling steps from thousands to several, but its speed still lags far behind that of GAN counterparts.
This paper aims to reduce the speed gap by proposing a novel wavelet-based diffusion scheme.
We extract low- and high-frequency components at both image and feature levels via wavelet decomposition and adaptively handle them for faster processing while maintaining good generation quality (a minimal decomposition sketch follows this entry).
arXiv Detail & Related papers (2022-11-29T12:25:25Z)
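A minimal sketch of the wavelet decomposition such schemes build on, using the PyWavelets library; the wavelet choice and the decision to run diffusion only on the approximation band are illustrative assumptions.

```python
import numpy as np
import pywt

# One level of 2D discrete wavelet transform splits an image into a
# low-frequency approximation (cA) and three high-frequency detail bands
# (horizontal, vertical, diagonal). Diffusion can then run on the
# quarter-size cA for speed, with the details handled more cheaply.
image = np.random.rand(256, 256)
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
print(cA.shape)  # (128, 128): 4x fewer values to denoise per step

# The inverse transform reassembles the image essentially losslessly.
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(recon, image)
```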
- FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis
This paper proposes FastDiff, a fast conditional diffusion model for high-quality speech synthesis.
FastDiff employs a stack of time-aware location-variable convolutions of diverse receptive field patterns to efficiently model long-term time dependencies.
Based on FastDiff, we design an end-to-end text-to-speech synthesizer, FastDiff-TTS, which generates high-fidelity speech waveforms.
arXiv Detail & Related papers (2022-04-21T07:49:09Z)
- WaveFill: A Wavelet-based Generation Network for Image Inpainting
WaveFill is a wavelet-based inpainting network that decomposes images into multiple frequency bands.
It decomposes images using the discrete wavelet transform (DWT), which naturally preserves spatial information.
It applies an L1 reconstruction loss to the low-frequency band and adversarial losses to the high-frequency bands, effectively mitigating inter-frequency conflicts (see the loss sketch after this entry).
arXiv Detail & Related papers (2021-07-23T04:44:40Z)
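A minimal sketch of that per-band objective, assuming a stand-in discriminator `disc`; the adversarial term is the generic non-saturating GAN loss, not necessarily WaveFill's exact formulation.

```python
import torch
import torch.nn.functional as F

def per_band_losses(pred_low, target_low, pred_high, disc):
    """Illustrative objective in the spirit of WaveFill: L1 on the
    low-frequency band, where exact reconstruction matters, and a
    non-saturating adversarial term on the high-frequency bands, where
    plausible texture matters more than exact values."""
    l1_term = F.l1_loss(pred_low, target_low)
    adv_term = F.softplus(-disc(pred_high)).mean()  # push disc toward "real"
    return l1_term + adv_term
```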
- WaveGrad: Estimating Gradients for Waveform Generation
WaveGrad is a conditional model for waveform generation which estimates gradients of the data density.
It starts from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler conditioned on the mel-spectrogram.
We find that it can generate high-fidelity audio samples in as few as six iterations (a refinement-loop sketch follows this entry).
arXiv Detail & Related papers (2020-09-02T17:44:10Z)
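A minimal sketch of such an iterative refinement loop; `score_model`, the noise schedule, and the update coefficients are illustrative assumptions in a generic ancestral-sampling form, not WaveGrad's exact update rule.

```python
import torch

def iterative_refinement(score_model, mel, length, noise_schedule):
    """Generic gradient-based refinement: start from white noise and
    repeatedly step toward the data density under mel conditioning.
    Coefficients below are made up for illustration."""
    x = torch.randn(1, length)                        # Gaussian white noise
    for t, sigma in reversed(list(enumerate(noise_schedule))):
        eps = score_model(x, mel, t)                  # predicted noise
        x = x - sigma * eps                           # step toward the data
        if t > 0:
            x = x + 0.5 * sigma * torch.randn_like(x) # re-inject noise
    return x
```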