Interactive Melody Generation System for Enhancing the Creativity of
Musicians
- URL: http://arxiv.org/abs/2403.03395v1
- Date: Wed, 6 Mar 2024 01:33:48 GMT
- Title: Interactive Melody Generation System for Enhancing the Creativity of
Musicians
- Authors: So Hirawata and Noriko Otani
- Abstract summary: This study proposes a system designed to emulate the process of collaborative composition among humans.
By integrating multiple Recurrent Neural Network (RNN) models, the system provides an experience akin to collaborating with several composers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a system designed to emulate the process of
collaborative composition among humans, using automatic music composition
technology. By integrating multiple Recurrent Neural Network (RNN) models, the
system provides an experience akin to collaborating with several composers,
thereby fostering diverse creativity. Through dynamic adaptation to the user's
creative intentions, based on feedback, the system enhances its capability to
generate melodies that align with user preferences and creative needs. The
system's effectiveness was evaluated through experiments with composers of
varying backgrounds, revealing its potential to facilitate musical creativity
and suggesting avenues for further refinement. The study underscores the
importance of interaction between the composer and AI, aiming to make music
composition more accessible and personalized. This system represents a step
towards integrating AI into the creative process, offering a new tool for
composition support and collaborative artistic exploration.
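As a rough illustration of the interaction loop the abstract describes, here is a minimal, hypothetical Python sketch (not the authors' code): several RNN "composers" propose melodies, and user feedback reweights how often each composer is consulted. All class names, the GRU architecture, and the feedback rule are assumptions made for illustration; the abstract does not specify the adaptation mechanism.

```python
# Hypothetical sketch of a multi-RNN melody generator with feedback-driven
# selection, loosely following the interaction loop described in the abstract.
import random

import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitch range


class MelodyRNN(nn.Module):
    """One 'composer': a GRU that predicts the next note given the last one."""

    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    @torch.no_grad()
    def generate(self, seed, length=16):
        notes = list(seed)
        h = None
        for _ in range(length):
            x = torch.tensor([[notes[-1]]])          # shape (1, 1)
            out, h = self.gru(self.embed(x), h)      # carry hidden state
            probs = torch.softmax(self.head(out[:, -1]), dim=-1)
            notes.append(torch.multinomial(probs, 1).item())
        return notes[len(seed):]


class InteractiveEnsemble:
    """Samples a 'composer' in proportion to accumulated user feedback."""

    def __init__(self, n_models=3):
        self.models = [MelodyRNN() for _ in range(n_models)]
        self.weights = [1.0] * n_models  # uniform prior over composers

    def propose(self, seed):
        idx = random.choices(range(len(self.models)), self.weights)[0]
        return idx, self.models[idx].generate(seed)

    def feedback(self, idx, rating):
        # rating in [0, 1]; higher ratings make that composer more likely.
        self.weights[idx] += rating


ensemble = InteractiveEnsemble()
idx, melody = ensemble.propose(seed=[60])  # start from middle C
ensemble.feedback(idx, rating=0.8)         # user liked this proposal
```

The additive, bandit-style weight update is a stand-in for whatever preference-adaptation scheme the paper actually uses; the point of the sketch is only the structure of the loop, in which generation, user rating, and reweighting alternate.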
Related papers
- A Survey of Foundation Models for Music Understanding
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music-comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Creativity and Visual Communication from Machine to Musician: Sharing a Score through a Robotic Camera
This paper explores the integration of visual communication and musical interaction by implementing a robotic camera within a "Guided Harmony" musical game.
The robotic system interprets and responds to nonverbal cues from musicians, creating a collaborative and adaptive musical experience.
arXiv Detail & Related papers (2024-09-09T16:34:36Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- ComposerX: Multi-Agent Symbolic Music Composition with LLMs
Music composition is a complex task that requires abilities to understand and generate information with long dependency and harmony constraints.
Current LLMs readily fail at this task, generating poorly written music even when equipped with modern techniques such as in-context learning and chain-of-thought prompting.
We propose ComposerX, an agent-based symbolic music generation framework.
arXiv Detail & Related papers (2024-04-28T06:17:42Z)
- ByteComposer: a Human-like Melody Composition Method based on Language Model Agent
Large Language Models (LLMs) have shown encouraging progress in multimodal understanding and generation tasks.
We propose ByteComposer, an agent framework emulating a human's creative pipeline in four separate steps.
We conduct extensive experiments on GPT-4 and several open-source large language models, which substantiate our framework's effectiveness.
arXiv Detail & Related papers (2024-02-24T04:35:07Z)
- CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build CreativeSynth, a unified framework based on a diffusion model that can coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z)
- MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models
MusicAgent integrates numerous music-related tools and an autonomous workflow to address user requirements.
The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect.
arXiv Detail & Related papers (2023-10-18T13:31:10Z)
- Expressive Communication: A Common Framework for Evaluating Developments in Generative Models and Steering Interfaces
This study investigates how developments in both models and user interfaces are important for empowering co-creation.
In an evaluation study with 26 composers creating 100+ pieces of music and listeners providing 1000+ head-to-head comparisons, we find that more expressive models and more steerable interfaces are important.
arXiv Detail & Related papers (2021-11-29T20:57:55Z)
- Music Composition with Deep Learning: A Review
We analyze the ability of current Deep Learning models to generate music with creativity.
We compare these models to the music composition process from a theoretical point of view.
arXiv Detail & Related papers (2021-08-27T13:53:53Z)
- Generating Music and Generative Art from Brain activity
This work introduces a computational system for creating generative art using a Brain-Computer Interface (BCI).
The generated artwork uses brain signals together with concepts of geometry, color, and spatial location to add complexity to the autonomous construction.
arXiv Detail & Related papers (2021-08-09T19:33:45Z)
- RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z)
This list is automatically generated from the titles and abstracts of papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.