Raply: A profanity-mitigated rap generator
- URL: http://arxiv.org/abs/2407.06941v1
- Date: Tue, 9 Jul 2024 15:18:56 GMT
- Title: Raply: A profanity-mitigated rap generator
- Authors: Omar Manil Bendali, Samir Ferroum, Ekaterina Kozachenko, Youssef Parviz, Hanna Shcharbakova, Anna Tokareva, Shemair Williams
- Abstract summary: Raply is a fine-tuned GPT-2 model capable of producing meaningful rhyming text in the style of rap.
This was achieved by fine-tuning the model on Mitislurs, a new profanity-mitigated corpus.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of writing rap is challenging: it involves producing complex rhyme schemes while keeping the lyrics meaningful. In this work, we propose Raply, a fine-tuned GPT-2 model capable of producing meaningful rhyming text in the style of rap. In addition to its rhyming capabilities, the model is able to generate less offensive content. This was achieved by fine-tuning the model on a new dataset, Mitislurs, a profanity-mitigated corpus. We evaluate the output of the model on two criteria: 1) rhyming, based on the rhyme density metric; 2) profanity content, using a list of profanities for the English language. To our knowledge, this is the first attempt at profanity mitigation for rap lyrics generation.
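The two evaluation criteria from the abstract can be sketched in code. Note the caveats: the paper's rhyme density metric (following prior work on rap lyrics analysis) operates on phoneme sequences, while this sketch uses vowel letters as a crude proxy; the lookback window, mask token, and word list are illustrative assumptions, not details taken from the paper.

```python
import re

VOWELS = set("aeiouy")

def vowel_seq(word):
    """Reduce a word to its vowel letters (crude phoneme proxy)."""
    return "".join(ch for ch in word.lower() if ch in VOWELS)

def longest_suffix_match(a, b):
    """Length of the longest common suffix of two vowel sequences."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def rhyme_density(lyrics, window=15):
    """Average best vowel-suffix match between each word and the
    `window` words preceding it; higher means denser rhyming."""
    seqs = [vowel_seq(w) for w in re.findall(r"[a-zA-Z']+", lyrics)]
    if not seqs:
        return 0.0
    best = [
        max((longest_suffix_match(s, p) for p in seqs[max(0, i - window):i]),
            default=0)
        for i, s in enumerate(seqs)
    ]
    return sum(best) / len(best)

def profanity_rate(lyrics, profanity_list):
    """Fraction of tokens found in a profanity word list."""
    words = re.findall(r"[a-zA-Z']+", lyrics.lower())
    return sum(w in profanity_list for w in words) / len(words) if words else 0.0

def mitigate(lyrics, profanity_list, mask="****"):
    """Replace profane tokens with a mask token -- one plausible way to
    build a profanity-mitigated corpus, not necessarily the paper's."""
    return " ".join(
        mask if w.lower() in profanity_list else w for w in lyrics.split()
    )
```

A fine-tuned model's output would then be scored by `rhyme_density` (higher is better) and `profanity_rate` (lower is better) against a reference profanity list.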
Related papers
- REFFLY: Melody-Constrained Lyrics Editing Model [50.03960548399128]
We introduce REFFLY, the first revision framework designed to edit arbitrary forms of plain text draft into high-quality, full-fledged song lyrics.
Our approach ensures that the generated lyrics retain the original meaning of the draft, align with the melody, and adhere to the desired song structures.
arXiv Detail & Related papers (2024-08-30T23:22:34Z)
- Unsupervised Melody-to-Lyric Generation [91.29447272400826]
We propose a method for generating high-quality lyrics without training on any aligned melody-lyric data.
We leverage the segmentation and rhythm alignment between melody and lyrics to compile the given melody into decoding constraints.
Our model can generate high-quality lyrics that are more on-topic, singable, intelligible, and coherent than strong baselines.
arXiv Detail & Related papers (2023-05-30T17:20:25Z)
- Unsupervised Melody-Guided Lyrics Generation [84.22469652275714]
We propose to generate pleasantly listenable lyrics without training on melody-lyric aligned data.
We leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
arXiv Detail & Related papers (2023-05-12T20:57:20Z)
- SongRewriter: A Chinese Song Rewriting System with Controllable Content and Rhyme Scheme [32.60994266892925]
We propose a controllable Chinese lyrics generation and editing system which assists users without prior knowledge of melody composition.
The system is trained by a randomized multi-level masking strategy which produces a unified model for generating entirely new lyrics or editing a few fragments.
arXiv Detail & Related papers (2022-11-28T03:52:05Z)
- DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling [102.50840749005256]
Previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance.
In this paper, we develop DeepRapper, a Transformer-based rap generation system that can model both rhymes and rhythms.
arXiv Detail & Related papers (2021-07-05T09:01:46Z)
- Generate and Revise: Reinforcement Learning in Neural Poetry [17.128639251861784]
We propose a framework to generate poems that are repeatedly revisited and corrected, as humans do, in order to improve their overall quality.
Our model generates poems from scratch and it learns to progressively adjust the generated text in order to match a target criterion.
We evaluate this approach on matching a rhyming scheme, without any information on which words are responsible for creating rhymes or on how to coherently alter the poem's words.
arXiv Detail & Related papers (2021-02-08T10:35:33Z)
- SongNet: Rigid Formats Controlled Text Generation [51.428634666559724]
We propose a simple and elegant framework named SongNet to tackle this problem.
The backbone of the framework is a Transformer-based auto-regressive language model.
A pre-training and fine-tuning framework is designed to further improve the generation quality.
arXiv Detail & Related papers (2020-04-17T01:40:18Z)
- Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders [14.479052867589417]
We develop a method for synthesizing a rap verse based on the content of any text (e.g., a news article).
Our method, called Rapformer, is based on training a Transformer-based denoising autoencoder to reconstruct rap lyrics from content words extracted from the lyrics.
Rapformer is capable of generating technically fluent verses that offer a good trade-off between content preservation and style transfer.
arXiv Detail & Related papers (2020-04-08T12:24:10Z)
- Metaphoric Paraphrase Generation [58.592750281138265]
We use crowdsourcing to evaluate our results and also develop an automatic metric for evaluating metaphoric paraphrases.
We show that while the lexical replacement baseline can produce accurate paraphrases, those paraphrases often lack metaphoricity.
Our metaphor masking model excels at generating metaphoric sentences while performing nearly as well on fluency and paraphrase quality.
arXiv Detail & Related papers (2020-02-28T16:30:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.