Who Gets Heard? Rethinking Fairness in AI for Music Systems
- URL: http://arxiv.org/abs/2511.05953v1
- Date: Sat, 08 Nov 2025 10:03:22 GMT
- Title: Who Gets Heard? Rethinking Fairness in AI for Music Systems
- Authors: Atharva Mehta, Shivam Chauhan, Megha Sharma, Gus Xia, Kaustuv Kanti Ganguli, Nishanth Chandran, Zeerak Talat, Monojit Choudhury
- Abstract summary: We raise concerns about cultural and genre biases in AI for music systems. These biases affect stakeholders, including creators, distributors, and listeners, shaping representation in AI for music. We offer recommendations at the dataset, model, and interface levels in music-AI systems.
- Score: 27.73654834833813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the music research community has examined the risks of AI models for music, with generative AI models in particular raising concerns about copyright, deepfakes, and transparency. In our work, we raise concerns about cultural and genre biases in AI for music systems (music-AI systems), which affect stakeholders, including creators, distributors, and listeners, and shape representation in AI for music. These biases can misrepresent marginalized traditions, especially from the Global South, producing inauthentic outputs (e.g., distorted ragas) that reduce creators' trust in these systems. Such harms risk reinforcing biases, limiting creativity, and contributing to cultural erasure. To address this, we offer recommendations at the dataset, model, and interface levels in music-AI systems.
Related papers
- Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis [57.68073583427415]
We study whether media coverage has the potential to push AI creators into the production of safe products. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
arXiv Detail & Related papers (2025-09-02T12:13:34Z) - Opening Musical Creativity? Embedded Ideologies in Generative-AI Music Systems [2.532202013576547]
We look at four generative-AI music making systems available to the public as of mid-2025. We investigate the ideologies that are driving the early-stage development and adoption of generative AI in music making.
arXiv Detail & Related papers (2025-08-12T09:59:07Z) - Detecting Musical Deepfakes [0.0]
This study investigates the detection of AI-generated songs using the FakeMusicCaps dataset. To simulate real-world adversarial conditions, tempo stretching and pitch shifting were applied to the dataset. Mel spectrograms were generated from the modified audio, then used to train and evaluate a convolutional neural network.
arXiv Detail & Related papers (2025-05-03T21:45:13Z) - Reducing Barriers to the Use of Marginalised Music Genres in AI [7.140590440016289]
This project aims to explore the eXplainable AI (XAI) challenges and opportunities associated with reducing barriers to using marginalised genres of music with AI models.
Identified XAI opportunities included improving transparency and control of AI models, explaining the ethics and bias of AI models, fine-tuning large models with small datasets to reduce bias, and explaining style-transfer opportunities with AI models.
We are now building on this project to bring together a global International Responsible AI Music community and invite people to join our network.
arXiv Detail & Related papers (2024-07-18T12:10:04Z) - Computational Copyright: Towards A Royalty Model for Music Generative AI [8.131016672512835]
Generative AI has given rise to pressing copyright challenges, especially within the music industry.
This paper focuses on the economic aspects of these challenges, emphasizing that the economic impact constitutes a central issue in the copyright arena.
We propose viable royalty models for revenue sharing on AI music generation platforms.
arXiv Detail & Related papers (2023-12-11T18:57:20Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
People working in this space could decrease the possible negative impacts on the practice, consumption and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Proteção intelectual de obras produzidas por sistemas baseados em inteligência artificial: uma visão tecnicista sobre o tema [Intellectual protection of works produced by AI-based systems: a technicist view on the topic] [0.0]
The pervasiveness of Artificial Intelligence (AI) is unquestionable in our society. Even in the arts, AI is present.
This essay aims to contribute a technicist view to the discussion of the applicability of copyright to works produced by AI.
arXiv Detail & Related papers (2022-05-11T12:07:47Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.