DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling
- URL: http://arxiv.org/abs/2107.01875v1
- Date: Mon, 5 Jul 2021 09:01:46 GMT
- Title: DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling
- Authors: Lanqing Xue, Kaitao Song, Duocai Wu, Xu Tan, Nevin L. Zhang, Tao Qin,
Wei-Qiang Zhang, Tie-Yan Liu
- Abstract summary: Previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance.
In this paper, we develop DeepRapper, a Transformer-based rap generation system that can model both rhymes and rhythms.
- Score: 102.50840749005256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rap generation, which aims to produce lyrics and corresponding singing beats,
needs to model both rhymes and rhythms. Previous works for rap generation
focused on rhyming lyrics but ignored rhythmic beats, which are important for
rap performance. In this paper, we develop DeepRapper, a Transformer-based rap
generation system that can model both rhymes and rhythms. Since there is no
available rap dataset with rhythmic beats, we develop a data mining pipeline to
collect a large-scale rap dataset, which includes a large number of rap songs
with aligned lyrics and rhythmic beats. We then design a Transformer-based
autoregressive language model that carefully models rhymes and rhythms.
Specifically, we generate lyrics in reverse order, with rhyme representations
and constraints for rhyme enhancement, and insert a beat symbol into the
lyrics for rhythm/beat modeling. To our knowledge, DeepRapper is the first system to
generate rap with both rhymes and rhythms. Both objective and subjective
evaluations demonstrate that DeepRapper generates creative and high-quality
raps with rhymes and rhythms. Code will be released on GitHub.
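The two input transformations described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' released code: the `[BEAT]` token name and the helper functions are hypothetical, chosen only to show (1) reversing word order within a line so the rhyming word appears first, and (2) inserting a beat symbol into the lyric token stream.

```python
# Hypothetical sketch of the two preprocessing steps named in the abstract.
# Token name "[BEAT]" and both helpers are assumptions for illustration only.

BEAT = "[BEAT]"

def reverse_line(line: str) -> str:
    """Reverse the word order of one lyric line so the rhyming word
    (normally last) comes first in the autoregressive sequence."""
    return " ".join(reversed(line.split()))

def insert_beats(words, beat_positions):
    """Insert a [BEAT] token before each word index that carries a beat."""
    out = []
    for i, word in enumerate(words):
        if i in beat_positions:
            out.append(BEAT)
        out.append(word)
    return out

line = "keep the rhythm tight and the rhyme right"
rev = reverse_line(line)
tokens = insert_beats(rev.split(), beat_positions={0, 4})
print(rev)               # rhyming word "right" now leads the sequence
print(" ".join(tokens))  # beat symbols interleaved with the reversed lyrics
```

Generating in reverse order lets the model commit to the rhyming word first and condition the rest of the line on it, which is why the rhyme constraint becomes easy to enforce during decoding.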
Related papers
- Optimizing the Songwriting Process: Genre-Based Lyric Generation Using Deep Learning Models [2.703659575788133]
This project aims to simplify the traditional songwriting process with deep learning techniques.
We developed a unique preprocessing format using tokens to parse lyrics into individual verses.
We found that generation yielded higher recall (ROUGE) in the baseline model, but similar precision (BLEU) for both models.
arXiv Detail & Related papers (2024-09-15T21:32:46Z)
- REFFLY: Melody-Constrained Lyrics Editing Model [50.03960548399128]
We introduce REFFLY, the first revision framework designed to edit arbitrary forms of plain text draft into high-quality, full-fledged song lyrics.
Our approach ensures that the generated lyrics retain the original meaning of the draft, align with the melody, and adhere to the desired song structures.
arXiv Detail & Related papers (2024-08-30T23:22:34Z)
- Raply: A profanity-mitigated rap generator [0.0]
Raply is a fine-tuned GPT-2 model capable of producing meaningful rhyming text in the style of rap.
It was achieved through the fine-tuning of the model on a new dataset Mitislurs, a profanity-mitigated corpus.
arXiv Detail & Related papers (2024-07-09T15:18:56Z)
- Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
arXiv Detail & Related papers (2023-05-30T17:20:25Z)
- Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z)
- SongRewriter: A Chinese Song Rewriting System with Controllable Content and Rhyme Scheme [32.60994266892925]
We propose a controllable Chinese lyrics generation and editing system which assists users without prior knowledge of melody composition.
The system is trained by a randomized multi-level masking strategy which produces a unified model for generating entirely new lyrics or editing a few fragments.
arXiv Detail & Related papers (2022-11-28T03:52:05Z)
- Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation [158.54649047794794]
Re-creation of Creations (ROC) is a new paradigm for lyric-to-melody generation.
ROC achieves good lyric-melody feature alignment in lyric-to-melody generation.
arXiv Detail & Related papers (2022-08-11T08:44:47Z)
- Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z)
- Shimon the Rapper: A Real-Time System for Human-Robot Interactive Rap Battles [1.7403133838762446]
We present a system for real-time lyrical improvisation between a human and a robot in the style of hip hop.
Our system takes vocal input from a human rapper, analyzes the semantic meaning, and generates a response that is rapped back by a robot over a musical groove.
arXiv Detail & Related papers (2020-09-19T14:04:54Z)
- Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders [14.479052867589417]
We develop a method for synthesizing a rap verse based on the content of any text (e.g., a news article).
Our method, called Rapformer, is based on training a Transformer-based denoising autoencoder to reconstruct rap lyrics from content words extracted from the lyrics.
Rapformer is capable of generating technically fluent verses that offer a good trade-off between content preservation and style transfer.
arXiv Detail & Related papers (2020-04-08T12:24:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.