Musical composition and 2D cellular automata based on music intervals
- URL: http://arxiv.org/abs/2411.19844v1
- Date: Fri, 29 Nov 2024 17:03:41 GMT
- Title: Musical composition and 2D cellular automata based on music intervals
- Authors: Igor Lugo, Martha G. Alatriste-Contreras
- Abstract summary: The aim of this study was to explore alternative uses of a cellular automaton in a musical context.
We used the complex systems and humanities approaches as a framework for capturing the essence of creating music.
- Abstract: This study is a theoretical approach for exploring the applicability of a 2D cellular automaton based on melodic and harmonic intervals in random arrays of musical notes. The aim of this study was to explore alternative uses of a cellular automaton in a musical context in order to better understand musical creativity. We used the complex systems and humanities approaches as a framework for capturing the essence of creating music based on rules of music theory. Findings suggested that such rules matter for generating large-scale patterns of organized notes. Therefore, our formulation provides a novel approach for understanding and replicating aspects of musical creativity.
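As a rough illustration of the kind of mechanism the abstract describes, a 2D cellular automaton can hold a pitch class (0-11) in each cell and update it according to the intervals it forms with its neighbours. The paper's actual update rules are not reproduced here; the majority-consonance rule below is a hypothetical sketch.

```python
import random

# Intervals (in semitones, mod 12) commonly treated as consonant:
# unison, minor/major third, perfect fourth/fifth, minor/major sixth.
CONSONANT = {0, 3, 4, 5, 7, 8, 9}

def step(grid):
    """One synchronous update of a toroidal 2D CA over pitch classes 0-11.

    Hypothetical rule: a note keeps its pitch if at least half of its four
    von Neumann neighbours form a consonant interval with it; otherwise it
    is transposed up a perfect fifth (7 semitones, mod 12).
    """
    n, m = len(grid), len(grid[0])
    new = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            note = grid[i][j]
            neighbours = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                          grid[i][(j - 1) % m], grid[i][(j + 1) % m]]
            consonant = sum(1 for nb in neighbours
                            if (nb - note) % 12 in CONSONANT)
            new[i][j] = note if consonant >= 2 else (note + 7) % 12
    return new

# Start from a random array of notes, as in the abstract, and iterate.
random.seed(0)
grid = [[random.randrange(12) for _ in range(8)] for _ in range(8)]
for _ in range(10):
    grid = step(grid)
```

Under such a rule, repeated iteration tends to let locally consonant configurations persist while dissonant cells keep drifting, which is one plausible way interval-based rules could produce the large-scale patterns of organized notes the study reports.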
Related papers
- Foundation Models for Music: A Survey [77.77088584651268]
Foundation models (FMs) have profoundly impacted diverse sectors, including music.
This comprehensive review examines state-of-the-art (SOTA) pre-trained models and foundation models in music.
arXiv Detail & Related papers (2024-08-26T15:13:14Z) - ComposerX: Multi-Agent Symbolic Music Composition with LLMs [51.68908082829048]
Music composition is a complex task that requires abilities to understand and generate information with long dependency and harmony constraints.
Current LLMs easily fail in this task, generating ill-written music even when equipped with modern techniques like In-Context-Learning and Chain-of-Thoughts.
We propose ComposerX, an agent-based symbolic music generation framework.
arXiv Detail & Related papers (2024-04-28T06:17:42Z) - Structuring Concept Space with the Musical Circle of Fifths by Utilizing Music Grammar Based Activations [0.0]
We explore the intriguing similarities between the structure of a discrete neural network, such as a spiking network, and the composition of a piano piece.
We propose a novel approach that leverages musical grammar to regulate activations in a spiking neural network.
We show that the map of concepts in our model is structured by the musical circle of fifths, highlighting the potential for leveraging music theory principles in deep learning algorithms.
arXiv Detail & Related papers (2024-02-22T03:28:25Z) - MeloForm: Generating Melody with Musical Form based on Expert Systems and Neural Networks [146.59245563763065]
MeloForm is a system that generates melody with musical form using expert systems and neural networks.
It can support various kinds of forms, such as verse and chorus form, rondo form, variational form, sonata form, etc.
arXiv Detail & Related papers (2022-08-30T15:44:15Z) - Music Composition with Deep Learning: A Review [1.7188280334580197]
We analyze the ability of current Deep Learning models to generate music with creativity.
We compare these models to the music composition process from a theoretical point of view.
arXiv Detail & Related papers (2021-08-27T13:53:53Z) - Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One objective encodes the rules and conventions of music theory; together with the other two objectives, the ratings of music experts and of ordinary listeners, it drives the evolutionary cycle toward an optimal response.
The results show that the proposed method can generate demanding yet pleasant pieces of the desired styles and lengths, with harmonic sounds that follow the grammar while engaging the listener.
arXiv Detail & Related papers (2021-02-16T05:05:54Z) - A framework to compare music generative models using automatic evaluation metrics extended to rhythm [69.2737664640826]
This paper builds on a framework proposed in previous research that did not consider rhythm, revisits its design decisions, and then adds rhythm support to evaluate the performance of two RNN memory cells in the creation of monophonic music.
The model handles music transposition, and the framework evaluates the quality of the generated pieces using automatic, geometry-based quantitative metrics that likewise include rhythm support.
arXiv Detail & Related papers (2021-01-19T15:04:46Z) - Sequence Generation using Deep Recurrent Networks and Embeddings: A study case in music [69.2737664640826]
This paper evaluates different types of memory mechanisms (memory cells) and analyses their performance in the field of music composition.
A set of quantitative metrics is presented to evaluate the performance of the proposed architecture automatically.
arXiv Detail & Related papers (2020-12-02T14:19:19Z) - Exploring Inherent Properties of the Monophonic Melody of Songs [10.055143995729415]
We propose a set of interpretable features on monophonic melody for computational purposes.
These features are defined not only in mathematical form, but also with some consideration of composers' intuition.
They are used universally across many genres of songs, even in atonal composition practices.
arXiv Detail & Related papers (2020-03-20T14:13:16Z) - RL-Duet: Online Music Accompaniment Generation Using Deep Reinforcement Learning [69.20460466735852]
This paper presents a deep reinforcement learning algorithm for online accompaniment generation.
The proposed algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part.
arXiv Detail & Related papers (2020-02-08T03:53:52Z) - Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions [14.786601824794369]
We present a model for composing melodies given a user specified symbolic scenario combined with a previous music context.
Our model is capable of generating long melodies by treating 8-beat note sequences as basic units, and shares a consistent rhythm-pattern structure with another specified song.
Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns.
arXiv Detail & Related papers (2020-02-05T06:23:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.