Aspirational Affordances of AI
- URL: http://arxiv.org/abs/2504.15469v1
- Date: Mon, 21 Apr 2025 22:37:49 GMT
- Title: Aspirational Affordances of AI
- Authors: Sina Fazelpour, Meica Magnani
- Abstract summary: There are growing concerns about how artificial intelligence systems may confine individuals and groups to static or restricted narratives about who or what they could be. We introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition. We show how this concept can ground productive evaluations of the risks of AI-enabled representations and narratives.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As artificial intelligence systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition, and in particular exercises of practical imagination. We show how this concept can ground productive evaluations of the risks of AI-enabled representations and narratives. Second, we provide three reasons for scrutinizing AI's influence on aspirational affordances: AI's influence is potentially more potent, but less public, than traditional sources; AI's influence is not simply incremental, but ecological, transforming the entire landscape of cultural and epistemic practices that traditionally shaped aspirational affordances; and AI's influence is highly concentrated, with a few corporate-controlled systems mediating a growing portion of aspirational possibilities. Third, to advance such scrutiny, we introduce the concept of aspirational harm, which, in the context of AI systems, arises when AI-enabled aspirational affordances distort or diminish available interpretive resources in ways that undermine individuals' ability to imagine relevant practical possibilities and alternative futures. Through three case studies, we illustrate how aspirational harms extend the existing discourse on AI-inflicted harms beyond representational and allocative harms, warranting separate attention. Through these conceptual resources and analyses, this paper advances understanding of the psychological and societal stakes of AI's role in shaping individual and collective aspirations.
Related papers
- A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
arXiv Detail & Related papers (2025-04-14T01:29:30Z)
- We Are All Creators: Generative AI, Collective Knowledge, and the Path Towards Human-AI Synergy [1.2499537119440245]
Generative AI presents a profound challenge to traditional notions of human uniqueness. Fueled by neural network-based foundation models, these systems demonstrate remarkable content generation capabilities. This paper argues that generative AI represents an alternative form of intelligence and creativity.
arXiv Detail & Related papers (2025-04-10T17:50:17Z)
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Toward an Artist-Centred AI [0.0]
This paper contextualizes the notions of suitability and desirability of principles, practices, and tools related to the use of AI in the arts.
It does so by examining the challenges that AI poses to art production, distribution, consumption, and monetization.
arXiv Detail & Related papers (2024-04-13T09:43:23Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.