Opening Musical Creativity? Embedded Ideologies in Generative-AI Music Systems
- URL: http://arxiv.org/abs/2508.08805v1
- Date: Tue, 12 Aug 2025 09:59:07 GMT
- Title: Opening Musical Creativity? Embedded Ideologies in Generative-AI Music Systems
- Authors: Liam Pram, Fabio Morreale
- Abstract summary: We look at four generative-AI music making systems available to the public as of mid-2025. We investigate the ideologies that are driving the early-stage development and adoption of generative-AI in music making.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI systems for music generation are increasingly common and easy to use, granting people without any musical background the ability to create music. Because of this, generative-AI has been marketed and celebrated as a means of democratizing music making. However, inclusivity often functions as marketable rhetoric rather than a genuine guiding principle in these industry settings. In this paper, we look at four generative-AI music making systems available to the public as of mid-2025 (AIVA, Stable Audio, Suno, and Udio) and track how they are rhetoricized by their developers, and received by users. Our aim is to investigate ideologies that are driving the early-stage development and adoption of generative-AI in music making, with a particular focus on democratization. A combination of autoethnography and digital ethnography is used to examine patterns and incongruities in rhetoric when positioned against product functionality. The results are then collated to develop a nuanced, contextual discussion. The shared ideology we map between producers and consumers is individualist, globalist, techno-liberal, and ethically evasive. It is a 'total ideology' which obfuscates individual responsibility, and through which the nature of music and musical practice is transfigured to suit generative outcomes.
Related papers
- MusicAIR: A Multimodal AI Music Generation Framework Powered by an Algorithm-Driven Core [0.0]
MusicAIR is an innovative AI music generation framework powered by a novel algorithm-driven symbolic music core. The framework generates a complete melodic score solely from the lyrics. GenAIM is a web tool using MusicAIR for lyric-to-song, text-to-music, and image-to-music generation.
arXiv Detail & Related papers (2025-11-21T15:43:27Z) - Music Flamingo: Scaling Music Understanding in Audio Language Models [98.94537017112704]
Music Flamingo is a novel large audio-language model designed to advance music understanding in foundational audio models. MF-Skills is a dataset labeled through a multi-stage pipeline that yields rich captions and question-answer pairs covering harmony, structure, timbre, lyrics, and cultural context. We introduce a post-training recipe: we first cold-start with MF-Think, a novel chain-of-thought dataset grounded in music theory, followed by GRPO-based reinforcement learning with custom rewards.
arXiv Detail & Related papers (2025-11-13T13:21:09Z) - Who Gets Heard? Rethinking Fairness in AI for Music Systems [27.73654834833813]
We raise concerns about cultural and genre biases in AI for music systems. These biases affect stakeholders including creators, distributors, and listeners, shaping representation in AI for music. We offer recommendations at the dataset, model, and interface levels in music-AI systems.
arXiv Detail & Related papers (2025-11-08T10:03:22Z) - The Ghost in the Keys: A Disklavier Demo for Human-AI Musical Co-Creativity [59.78509280246215]
Aria-Duet is an interactive system facilitating a real-time musical duet between a human pianist and Aria, a state-of-the-art generative model. We analyze the system's output from a musicological perspective, finding the model can maintain stylistic semantics and develop coherent phrasal ideas.
arXiv Detail & Related papers (2025-11-03T15:26:01Z) - Detecting Musical Deepfakes [0.0]
This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network.
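The pitch-shift augmentation this entry describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: it uses naive resampling (which, unlike a phase-vocoder pitch shifter, also changes duration) just to show how shifting by a given number of semitones perturbs the audio a detector must handle.

```python
import numpy as np

def pitch_shift_by_resampling(audio: np.ndarray, semitones: float) -> np.ndarray:
    """Naive pitch shift: resample by a rate factor of 2^(semitones/12).

    Unlike a proper phase-vocoder shifter, this also shortens or
    lengthens the clip; it only illustrates the augmentation idea.
    """
    rate = 2.0 ** (semitones / 12.0)
    new_len = int(len(audio) / rate)
    old_idx = np.arange(len(audio))
    new_idx = np.linspace(0, len(audio) - 1, new_len)
    return np.interp(new_idx, old_idx, audio)

def dominant_freq(x: np.ndarray, sr: int) -> float:
    """Frequency bin with the largest magnitude in the real FFT."""
    spec = np.abs(np.fft.rfft(x))
    return float(np.fft.rfftfreq(len(x), 1.0 / sr)[np.argmax(spec)])

# one second of a 440 Hz test tone
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

# shift up one octave: the dominant frequency roughly doubles
shifted = pitch_shift_by_resampling(tone, semitones=12.0)
```

A detector trained only on unmodified audio may fail on such shifted inputs, which is why the study applies these transforms before training.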
arXiv Detail & Related papers (2025-05-03T21:45:13Z) - Generating Mixcode Popular Songs with Artificial Intelligence: Concepts, Plans, and Speculations [0.0]
This paper discusses a proposed project integrating artificial intelligence and popular music.
The ultimate goal of the project is to create a powerful tool for implementing music for social transformation, education, healthcare, and emotional well-being.
arXiv Detail & Related papers (2024-11-10T10:49:13Z) - A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z) - Between the AI and Me: Analysing Listeners' Perspectives on AI- and Human-Composed Progressive Metal Music [1.2874569408514918]
We explore participants' perspectives on AI- vs human-generated progressive metal, using rock music as a control group.
We propose a mixed methods approach to assess the effects of generation type (human vs. AI), genre (progressive metal vs. rock), and curation process (random vs. cherry-picked).
Our findings validate the use of fine-tuning to achieve genre-specific specialization in AI music generation.
Despite some AI-generated excerpts receiving similar ratings to human music, listeners exhibited a preference for human compositions.
arXiv Detail & Related papers (2024-07-31T14:03:45Z) - Fairness Through Domain Awareness: Mitigating Popularity Bias For Music Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
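The individual-fairness criterion stated here (similar-sounding songs should have similar representations) can be sketched as a Lipschitz-style check. The feature vectors, embeddings, and threshold below are all hypothetical illustrations, not the paper's data or model.

```python
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity: 0 for identical directions, up to 2."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical ground-truth audio features for three songs
features = {
    "song_a": np.array([0.9, 0.1, 0.0]),
    "song_b": np.array([0.85, 0.15, 0.05]),  # sounds much like song_a
    "song_c": np.array([0.0, 0.2, 0.9]),     # sounds different
}

# hypothetical learned embeddings from a recommender
embeddings = {
    "song_a": np.array([1.0, 0.0]),
    "song_b": np.array([0.95, 0.05]),
    "song_c": np.array([0.1, 1.0]),
}

def violates_individual_fairness(x: str, y: str, L: float = 1.0) -> bool:
    """Lipschitz-style test: embedding distance should not exceed
    L times the ground-truth feature distance."""
    d_feat = cosine_dist(features[x], features[y])
    d_emb = cosine_dist(embeddings[x], embeddings[y])
    return d_emb > L * d_feat + 1e-9
```

In this toy example, songs that sound alike stay close in the embedding space, so no pair violates the constraint; a popularity-biased model that pushed an unpopular but similar-sounding song far away would fail the check.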
arXiv Detail & Related papers (2023-08-28T14:12:25Z) - MARBLE: Music Audio Representation Benchmark for Universal Evaluation [79.25065218663458]
We introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE.
It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description.
We then establish a unified protocol based on 14 tasks on 8 publicly available datasets, providing a fair and standard assessment of representations of all open-sourced pre-trained models developed on music recordings as baselines.
arXiv Detail & Related papers (2023-06-18T12:56:46Z) - A Review of Intelligent Music Generation Systems [4.287960539882345]
ChatGPT has significantly reduced the barrier to entry for non-professionals in creative endeavors.
Modern generative algorithms can extract patterns implicit in a piece of music based on rule constraints or a musical corpus.
arXiv Detail & Related papers (2022-11-16T13:43:16Z) - Co-creation and ownership for AI radio [1.2839524529089017]
We present Artificial.fm, a proof-of-concept casual creator that blends AI-music generation, subjective ratings, and personalized recommendation.
We report on the design and development of Artificial.fm, and provide a legal analysis on the ownership of artifacts generated on the platform.
arXiv Detail & Related papers (2022-06-01T13:35:03Z) - Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.