Generating Mixcode Popular Songs with Artificial Intelligence: Concepts, Plans, and Speculations
- URL: http://arxiv.org/abs/2411.06420v1
- Date: Sun, 10 Nov 2024 10:49:13 GMT
- Title: Generating Mixcode Popular Songs with Artificial Intelligence: Concepts, Plans, and Speculations
- Authors: Abhishek Kaushik, Kayla Rush
- Abstract summary: This paper discusses a proposed project integrating artificial intelligence and popular music.
The ultimate goal of the project is to create a powerful tool for implementing music for social transformation, education, healthcare, and emotional well-being.
- Score: 0.0
- License:
- Abstract: Music is a potent form of expression that can communicate, accentuate or even create the emotions of an individual or a collective. Both historically and in contemporary experiences, musical expression was and is commonly instrumentalized for social, political and/or economic purposes. Generative artificial intelligence provides a wealth of both opportunities and challenges with regard to music and its role in society. This paper discusses a proposed project integrating artificial intelligence and popular music, with the ultimate goal of creating a powerful tool for implementing music for social transformation, education, healthcare, and emotional well-being. Given that it is being presented at the outset of a collaboration between a computer scientist/data analyst and an ethnomusicologist/social anthropologist, it is mainly conceptual and somewhat speculative in nature.
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z) - MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z) - Are Words Enough? On the semantic conditioning of affective music generation [1.534667887016089]
This scoping review aims to analyze and discuss the possibilities of music generation conditioned by emotions.
In detail, we review two main paradigms adopted in automatic music generation: rules-based and machine-learning models.
We conclude that overcoming the limitation and ambiguity of language to express emotions through music has the potential to impact the creative industries.
arXiv Detail & Related papers (2023-11-07T00:19:09Z) - Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
People working in this space could help reduce the potential negative impacts on the practice, consumption and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z) - A Survey on Artificial Intelligence for Music Generation: Agents, Domains and Perspectives [10.349825060515181]
We describe how humans compose music and how new AI systems could imitate such process.
To understand how AI models and algorithms generate music, we explore, analyze and describe the agents that take part in the music generation process.
arXiv Detail & Related papers (2022-10-25T11:54:30Z) - Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
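As a purely illustrative sketch of what testing such effects on listener affect might look like, the snippet below fits an ordinary least squares model on synthetic data. The feature names, effect sizes, and modelling choice are invented for illustration and are not taken from the paper, whose actual methodology and feature set are richer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # synthetic "listener comments"; the real study draws on over 403M

# Invented per-comment features standing in for musical, lyrical,
# contextual, and demographic factors (hypothetical, not the paper's).
tempo      = rng.normal(120, 20, n)   # musical: beats per minute
lyric_neg  = rng.uniform(0, 1, n)     # lyrical: negativity score
late_night = rng.integers(0, 2, n)    # contextual: listening after midnight
age        = rng.normal(28, 8, n)     # demographic: listener age

# Synthetic affect score generated from arbitrary "true" effects plus noise.
affect = (0.01 * tempo - 1.5 * lyric_neg + 0.4 * late_night - 0.02 * age
          + rng.normal(0, 1, n))

# Ordinary least squares: affect ~ intercept + features.
X = np.column_stack([np.ones(n), tempo, lyric_neg, late_night, age])
coef, *_ = np.linalg.lstsq(X, affect, rcond=None)

for name, c in zip(["intercept", "tempo", "lyric_neg", "late_night", "age"], coef):
    print(f"{name:>10s}: {c:+.3f}")
```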
arXiv Detail & Related papers (2022-10-17T19:57:46Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - Music Harmony Generation, through Deep Learning and Using a Multi-Objective Evolutionary Algorithm [0.0]
This paper introduces a genetic multi-objective evolutionary optimization algorithm for the generation of polyphonic music.
One objective encodes the rules and grammar of music; together with the other two objectives, the scores given by music experts and by ordinary listeners, it drives the evolutionary cycle toward an optimal result.
The results show that the proposed method can generate difficult and pleasant pieces of the desired style and length, with harmonic sounds that follow the grammar of music while also appealing to the listener.
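To make the general idea concrete, here is a minimal, hypothetical sketch of an evolutionary loop with three fitness objectives (a rule-based grammar score plus stand-ins for expert and listener ratings). The chord vocabulary, scoring rules, and parameters are invented for illustration; the paper's actual system is deep-learning based, truly multi-objective, and uses human judgments rather than the simple heuristics below.

```python
import random

# Hypothetical chord vocabulary (triads in C major).
CHORDS = ["C", "Dm", "Em", "F", "G", "Am", "Bdim"]

def grammar_score(prog):
    """Objective 1: reward progressions obeying simple harmonic 'rules'
    (start/end on the tonic, avoid immediate repetition). Invented rules."""
    score = (prog[0] == "C") + (prog[-1] == "C")
    score += sum(1 for a, b in zip(prog, prog[1:]) if a != b)
    return float(score)

def expert_score(prog):
    """Objective 2: stand-in for music experts' ratings (here: reward a
    dominant-to-tonic cadence). In the paper this comes from human judges."""
    return 1.0 if ("G", "C") in zip(prog, prog[1:]) else 0.0

def listener_score(prog):
    """Objective 3: stand-in for ordinary listeners' ratings (here: reward
    chord variety). In the paper this also comes from human feedback."""
    return len(set(prog)) / len(prog)

def fitness(prog, weights=(1.0, 1.0, 1.0)):
    """Scalarize the three objectives as a weighted sum; a real
    multi-objective EA would keep them separate (e.g. Pareto ranking)."""
    w1, w2, w3 = weights
    return w1 * grammar_score(prog) + w2 * expert_score(prog) + w3 * listener_score(prog)

def evolve(pop_size=50, length=8, generations=100, mutation_rate=0.2):
    population = [[random.choice(CHORDS) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(length):                    # point mutation
                if random.random() < mutation_rate:
                    child[i] = random.choice(CHORDS)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(" -> ".join(best), "| fitness =", round(fitness(best), 2))
```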
arXiv Detail & Related papers (2021-02-16T05:05:54Z) - Adaptive music: Automated music composition and distribution [0.0]
We present Melomics: an algorithmic composition method based on evolutionary search.
The system has exhibited high creative power and versatility in producing music of different types.
It has also enabled the emergence of a set of completely novel applications.
arXiv Detail & Related papers (2020-07-25T09:38:06Z) - Artificial Musical Intelligence: A Survey [51.477064918121336]
Music has become an increasingly prevalent domain of machine learning and artificial intelligence research.
This article provides a definition of musical intelligence, introduces a taxonomy of its constituent components, and surveys the wide range of AI methods that can be, and have been, brought to bear in its pursuit.
arXiv Detail & Related papers (2020-06-17T04:46:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.