Exploring the Needs of Practising Musicians in Co-Creative AI Through Co-Design
- URL: http://arxiv.org/abs/2502.09055v1
- Date: Thu, 13 Feb 2025 08:10:07 GMT
- Title: Exploring the Needs of Practising Musicians in Co-Creative AI Through Co-Design
- Authors: Stephen James Krol, Maria Teresa Llano Rodriguez, Miguel Loor Paredes,
- Abstract summary: We present a case study that explores the needs of practising musicians through the co-design of a musical variation system.
This was achieved through two workshops and a two-week ecological evaluation, where musicians from different musical backgrounds offered valuable insights.
- Abstract: Recent advances in generative AI music have resulted in new technologies that are being framed as co-creative tools for musicians, with early work demonstrating their potential to add to music practice. While the field has seen many valuable contributions, work that involves practising musicians in the design and development of these tools is limited, with the majority of work including them only once a tool has been developed. In this paper, we present a case study that explores the needs of practising musicians through the co-design of a musical variation system, highlighting the importance of involving a diverse range of musicians throughout the design process and uncovering various design insights. This was achieved through two workshops and a two-week ecological evaluation, where musicians from different musical backgrounds offered valuable insights not only on a musical system's design but also on how a musical AI could be integrated into their musical practices.
Related papers
- Musical Agent Systems: MACAT and MACataRT [6.349140286855134]
We introduce MACAT and MACataRT, two distinct musical agent systems crafted to enhance interactive music-making between human musicians and AI.
MACAT is optimized for agent-led performance, employing real-time synthesis and self-listening to shape its output autonomously.
MACataRT provides a flexible environment for collaborative improvisation through audio mosaicing and sequence-based learning.
arXiv Detail & Related papers (2025-01-19T22:04:09Z) - Tuning Music Education: AI-Powered Personalization in Learning Music [0.2046223849354785]
We present two case studies using such advances in music technology to address challenges in music education.
In our first case study we showcase an application that uses Automatic Chord Recognition to generate personalized exercises from audio tracks.
In the second case study we prototype adaptive piano method books that use Automatic Music Transcription to generate exercises at different skill levels.
arXiv Detail & Related papers (2024-12-18T05:25:42Z) - A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z) - Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z) - ComposerX: Multi-Agent Symbolic Music Composition with LLMs [51.68908082829048]
Music composition is a complex task that requires abilities to understand and generate information with long dependency and harmony constraints.
Current LLMs easily fail in this task, generating ill-written music even when equipped with modern techniques like In-Context Learning and Chain-of-Thought.
We propose ComposerX, an agent-based symbolic music generation framework.
arXiv Detail & Related papers (2024-04-28T06:17:42Z) - Interactive Melody Generation System for Enhancing the Creativity of Musicians [0.0]
This study proposes a system designed to emulate the process of collaborative composition among humans.
By integrating multiple Recurrent Neural Network (RNN) models, the system provides an experience akin to collaborating with several composers.
arXiv Detail & Related papers (2024-03-06T01:33:48Z) - MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models [54.55063772090821]
MusicAgent integrates numerous music-related tools and an autonomous workflow to address user requirements.
The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect.
arXiv Detail & Related papers (2023-10-18T13:31:10Z) - The pop song generator: designing an online course to teach collaborative, creative AI [1.2891210250935146]
This article describes and evaluates a new online AI-creativity course.
The course is based around three near-state-of-the-art AI models combined into a pop song generating system.
A fine-tuned GPT-2 model writes lyrics, MusicVAE composes musical scores and instrumentation, and DiffSinger synthesises a singing voice.
arXiv Detail & Related papers (2023-06-15T18:17:28Z) - Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
Those working in this space could mitigate the potential negative impacts on the practice, consumption and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z) - Proceedings of the 2nd International Workshop on Reading Music Systems [84.56633924613456]
The workshop aims to connect researchers who develop systems for reading music with other researchers and practitioners who could benefit from such systems.
The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition.
These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
arXiv Detail & Related papers (2022-12-01T09:19:16Z) - Music Composition with Deep Learning: A Review [1.7188280334580197]
We analyze the ability of current Deep Learning models to generate music with creativity.
We compare these models to the music composition process from a theoretical point of view.
arXiv Detail & Related papers (2021-08-27T13:53:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.