Music-triggered fashion design: from songs to the metaverse
- URL: http://arxiv.org/abs/2410.04921v1
- Date: Mon, 7 Oct 2024 11:09:45 GMT
- Title: Music-triggered fashion design: from songs to the metaverse
- Authors: Martina Delgado, Marta Llopart, Eva Sarabia, Sandra Taboada, Pol Vierge, Fernando Vilariño, Joan Moya Kohler, Julieta Grimberg Golijov, Matías Bilkis
- Abstract summary: We analyze how virtual realities can help broaden the opportunities for musicians to connect with their audiences.
We present our first steps towards re-defining musical experiences in the metaverse.
- Score: 32.73124984242397
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The advent of ever-growing virtual realities poses unprecedented opportunities and challenges to different societies. Artistic collectives are no exception, and we here pay special attention to musicians. Compositions, lyrics and even show advertisements are constituents of a message that artists transmit about their reality. As such, artistic creations are ultimately linked to feelings and emotions, with aesthetics playing a crucial role in transmitting artists' intentions. In this context, we analyze how virtual realities can help broaden the opportunities for musicians to connect with their audiences, by devising a dynamical fashion-design recommendation system inspired by sound stimuli. We present our first steps towards re-defining musical experiences in the metaverse, opening up alternative opportunities for artists to connect with both real and virtual audiences (e.g. machine-learning agents operating in the metaverse) in potentially broader ways.
Related papers
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models with respect to their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Bridging Paintings and Music -- Exploring Emotion based Music Generation through Paintings [10.302353984541497]
This research develops a model capable of generating music that resonates with the emotions depicted in visual arts.
Addressing the scarcity of aligned art and music data, we curated the Emotion Painting Music dataset.
Our dual-stage framework converts images to text descriptions of emotional content and then transforms these descriptions into music, facilitating efficient learning with minimal data.
arXiv Detail & Related papers (2024-09-12T08:19:25Z)
- Emotion Manipulation Through Music -- A Deep Learning Interactive Visual Approach [0.0]
We introduce a novel way to manipulate the emotional content of a song using AI tools.
Our goal is to achieve the desired emotion while leaving the original melody as intact as possible.
This research may contribute to on-demand custom music generation, the automated remixing of existing work, and music playlists tuned for emotional progression.
arXiv Detail & Related papers (2024-06-12T20:12:29Z)
- MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models [57.47799823804519]
We are inspired by how musicians compose music not just from a movie script, but also through visualizations.
We propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music.
Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music.
arXiv Detail & Related papers (2024-06-07T06:38:59Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- Beyond Reality: The Pivotal Role of Generative AI in the Metaverse [98.1561456565877]
This paper offers a comprehensive exploration of how generative AI technologies are shaping the Metaverse.
We delve into the applications of text generation models like ChatGPT and GPT-3, which are enhancing conversational interfaces with AI-generated characters.
We also examine the potential of 3D model generation technologies like Point-E and Lumirithmic in creating realistic virtual objects.
arXiv Detail & Related papers (2023-07-28T05:44:20Z)
- Flat latent manifolds for music improvisation between human and machine [9.571383193449648]
We consider a music-generating algorithm as a counterpart to a human musician, in a setting where reciprocal improvisation is to lead to new experiences.
In the learned model, we generate novel musical sequences by quantification in latent space.
We provide empirical evidence for our method via a set of experiments on music and we deploy our model for an interactive jam session with a professional drummer.
arXiv Detail & Related papers (2022-02-23T09:00:17Z)
- Artificial Intelligence for the Metaverse: A Survey [66.57225253532748]
We first provide preliminaries on AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse.
We then present a comprehensive investigation of AI-based methods concerning six technical aspects that hold potential for the metaverse.
Several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds.
arXiv Detail & Related papers (2022-02-15T03:34:56Z)
- When Creators Meet the Metaverse: A Survey on Computational Arts [19.409136374448675]
This article conducts a comprehensive survey on computational arts, describing novel artworks in blended virtual-physical realities.
The survey highlights several remarkable types of novel creations within the expanded horizons of metaverse cyberspace.
We propose several research agendas, including democratising computational arts, digital privacy and safety for metaverse artists, ownership recognition for digital artworks, and technological challenges.
arXiv Detail & Related papers (2021-11-26T13:24:37Z)
- A Human-Computer Duet System for Music Performance [7.777761975348974]
We create a virtual violinist who can collaborate with a human pianist to perform chamber music automatically without any intervention.
The system incorporates the techniques from various fields, including real-time music tracking, pose estimation, and body movement generation.
The proposed system has been validated in public concerts.
arXiv Detail & Related papers (2020-09-16T17:19:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.