ReMi: A Random Recurrent Neural Network Approach to Music Production
- URL: http://arxiv.org/abs/2505.17023v2
- Date: Tue, 22 Jul 2025 15:56:12 GMT
- Title: ReMi: A Random Recurrent Neural Network Approach to Music Production
- Authors: Hugo Chateau-Laurent, Tara Vanhatalo, Wei-Tung Pan, Xavier Hinaut
- Abstract summary: Generative artificial intelligence raises concerns related to energy consumption, copyright infringement and creative atrophy. In contrast to end-to-end music generation that aims to replace musicians, our approach expands their creativity while requiring no data and much less computational power.
- Score: 1.6874375111244329
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative artificial intelligence raises concerns related to energy consumption, copyright infringement and creative atrophy. We show that randomly initialized recurrent neural networks can produce arpeggios and low-frequency oscillations that are rich and configurable. In contrast to end-to-end music generation that aims to replace musicians, our approach expands their creativity while requiring no data and much less computational power. More information can be found at: https://allendia.com/
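The abstract's core claim, that an untrained, randomly initialized recurrent network can already produce musically usable patterns, can be illustrated with a minimal reservoir-style sketch. This is not the paper's ReMi implementation; all parameter values, the pentatonic scale, and the state-to-note mapping are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a randomly initialized (never trained) recurrent
# network run autonomously, with one unit's activity read out as notes.
rng = np.random.default_rng(seed=0)
n_units = 100
spectral_radius = 1.2  # >1 pushes the reservoir toward rich, sustained dynamics

W = rng.normal(0.0, 1.0, (n_units, n_units))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale recurrent weights

x = rng.normal(0.0, 1.0, n_units)   # random initial state; no input required
scale = [60, 62, 64, 67, 69]        # C major pentatonic (MIDI note numbers)

notes = []
for _ in range(16):
    x = np.tanh(W @ x)                          # autonomous reservoir update
    idx = int((x[0] + 1) / 2 * len(scale))      # map unit 0's activity to a scale degree
    notes.append(scale[min(idx, len(scale) - 1)])

print(notes)  # a 16-step arpeggio-like sequence constrained to the scale
```

Changing the spectral radius, seed, or readout mapping reconfigures the pattern, which matches the abstract's point that such networks are "rich and configurable" without any training data.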
Related papers
- Adaptive Accompaniment with ReaLchords [60.690020661819055]
We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use.
arXiv Detail & Related papers (2025-06-17T16:59:05Z) - Detecting Musical Deepfakes [0.0]
This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network.
arXiv Detail & Related papers (2025-05-03T21:45:13Z) - ReaLJam: Real-Time Human-AI Music Jamming with Reinforcement Learning-Tuned Transformers [53.63950017886757]
We introduce ReaLJam, an interface and protocol for live musical jamming sessions between a human and a Transformer-based AI agent trained with reinforcement learning. We enable real-time interactions using the concept of anticipation, where the agent continually predicts how the performance will unfold and visually conveys its plan to the user.
arXiv Detail & Related papers (2025-02-28T17:42:58Z) - Unrolled Creative Adversarial Network For Generating Novel Musical Pieces [0.0]
Generative adversarial networks (GANs) and their counterparts have been explored by very few researchers for music generation. In this paper, a classical system was employed alongside a new system to generate creative music. GANs are capable of generating novel outputs given a set of inputs to learn from and mimic their distribution.
arXiv Detail & Related papers (2024-12-31T14:07:59Z) - Interpretable Melody Generation from Lyrics with Discrete-Valued Adversarial Training [12.02541352832997]
Gumbel-Softmax is exploited to solve the non-differentiability problem of generating music attributes with Generative Adversarial Networks (GANs).
Users can listen to the generated AI song as well as recreate a new song by selecting from recommended music attributes.
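The Gumbel-Softmax trick this entry relies on can be sketched in a few lines: it turns the non-differentiable step of sampling a discrete attribute (e.g. one pitch out of several) into a smooth softmax over noisy logits. The function name and parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Differentiable soft sample from a categorical distribution over logits."""
    rng = rng or np.random.default_rng()
    # Gumbel(0,1) noise via the inverse-CDF of -log(-log(U)), U ~ Uniform(0,1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()               # soft one-hot; argmax matches a hard sample

# Example: soft sample over three hypothetical pitch-class logits
probs = gumbel_softmax(np.array([2.0, 0.5, 0.1]), temperature=0.5)
```

Lowering the temperature makes the output approach a hard one-hot vector, while keeping gradients usable for the GAN's generator.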
arXiv Detail & Related papers (2022-06-30T05:45:47Z) - Co-creation and ownership for AI radio [1.2839524529089017]
We present Artificial.fm, a proof-of-concept casual creator that blends AI-music generation, subjective ratings, and personalized recommendation.
We report on the design and development of Artificial.fm, and provide a legal analysis on the ownership of artifacts generated on the platform.
arXiv Detail & Related papers (2022-06-01T13:35:03Z) - Music Generation Using an LSTM [52.77024349608834]
Long Short-Term Memory (LSTM) network structures have proven to be very useful for making predictions for the next output in a series.
We demonstrate an approach to music generation using recurrent neural networks (RNNs).
We provide a brief synopsis of the intuition, theory, and application of LSTMs in music generation, develop and present the network we found to best achieve this goal, identify and address issues and challenges faced, and include potential future improvements for our network.
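The gating mechanism that makes LSTMs suited to next-output prediction in a sequence can be shown with a single cell step in plain NumPy. This is a generic textbook LSTM cell, not the network developed in the paper; shapes and initialization are illustrative.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: combines input x with previous state (h, c)."""
    n = h.shape[0]
    z = W @ x + U @ h + b                 # pre-activations for all four gates
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:])                  # candidate cell update
    c = f * c + i * g                     # keep or overwrite long-term memory
    h = o * np.tanh(c)                    # exposed hidden state
    return h, c

rng = np.random.default_rng(1)
n_in, n_hid = 8, 16
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for _ in range(4):                        # feed a short dummy note-embedding sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

In a music-generation setting, `x` would be an embedding of the current note and a readout layer over `h` would score candidate next notes.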
arXiv Detail & Related papers (2022-03-23T00:13:41Z) - Can GAN originate new electronic dance music genres? -- Generating novel rhythm patterns using GAN with Genre Ambiguity Loss [0.0]
This paper focuses on music generation, especially rhythm patterns of electronic dance music, and discusses if we can use deep learning to generate novel rhythms.
We extend the framework of Generative Adversarial Networks (GANs) and encourage it to diverge from the dataset's inherent distributions.
The paper shows that our proposed GAN can generate rhythm patterns that sound like music rhythms but do not belong to any genres in the training dataset.
arXiv Detail & Related papers (2020-11-25T23:22:12Z) - Melody-Conditioned Lyrics Generation with SeqGANs [81.2302502902865]
We propose an end-to-end melody-conditioned lyrics generation system based on Sequence Generative Adversarial Networks (SeqGAN).
We show that the input conditions have no negative impact on the evaluation metrics while enabling the network to produce more meaningful results.
arXiv Detail & Related papers (2020-10-28T02:35:40Z) - Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning [55.854205371307884]
We formalize the music-conditioned dance generation as a sequence-to-sequence learning problem.
We propose a novel curriculum learning strategy to alleviate error accumulation of autoregressive models in long motion sequence generation.
Our approach significantly outperforms existing state-of-the-art methods on automatic metrics and in human evaluation.
arXiv Detail & Related papers (2020-06-11T00:08:25Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.