AI-rays: Exploring Bias in the Gaze of AI Through a Multimodal Interactive Installation
- URL: http://arxiv.org/abs/2410.03786v1
- Date: Thu, 3 Oct 2024 18:44:05 GMT
- Title: AI-rays: Exploring Bias in the Gaze of AI Through a Multimodal Interactive Installation
- Authors: Ziyao Gao, Yiwen Zhang, Ling Li, Theodoros Papatheodorou, Wei Zeng
- Abstract summary: We introduce AI-rays, an interactive installation where AI generates speculative identities from participants' appearance.
It uses speculative X-ray visions to contrast reality with AI-generated assumptions, metaphorically highlighting AI's scrutiny and biases.
- Score: 7.939652622988465
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data surveillance has become more covert and pervasive with AI algorithms, which can result in biased social classifications. Appearance offers intuitive identity signals, but what does it mean to let AI observe and speculate on them? We introduce AI-rays, an interactive installation where AI generates speculative identities from participants' appearance, which are expressed through synthesized personal items placed in participants' bags. It uses speculative X-ray visions to contrast reality with AI-generated assumptions, metaphorically highlighting AI's scrutiny and biases. AI-rays promotes discussions on modern surveillance and the future of human-machine reality through a playful, immersive experience exploring AI biases.
Related papers
- Navigating AI Fallibility: Examining People's Reactions and Perceptions of AI after Encountering Personality Misrepresentations [7.256711790264119]
Hyper-personalized AI systems profile people's characteristics to provide personalized recommendations.
These systems are not immune to errors when making inferences about people's most personal traits.
We present two studies to examine how people react and perceive AI after encountering personality misrepresentations.
arXiv Detail & Related papers (2024-05-25T21:27:15Z)
- AIGCs Confuse AI Too: Investigating and Explaining Synthetic Image-induced Hallucinations in Large Vision-Language Models [37.04195231708092]
We highlight the exacerbated hallucination phenomena in Large Vision-Language Models (LVLMs) caused by AI-synthetic images.
Remarkably, our findings shed light on a consistent AIGC hallucination bias: object hallucinations induced by synthetic images occur in greater quantity.
Our investigations on Q-former and Linear projector reveal that synthetic images may present token deviations after visual projection, thereby amplifying the hallucination bias.
arXiv Detail & Related papers (2024-03-13T13:56:34Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research [8.315174426992087]
The article provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum.
It shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces.
The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system.
arXiv Detail & Related papers (2021-03-27T22:12:06Z)
- A Survey of Embodied AI: From Simulators to Research Tasks [13.923234397344487]
An emerging paradigm shift is underway from the era of "internet AI" to "embodied AI".
This paper comprehensively surveys state-of-the-art embodied AI simulators and research.
arXiv Detail & Related papers (2021-03-08T17:31:19Z)
- Player-AI Interaction: What Neural Network Games Reveal About AI as Play [14.63311356668699]
This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI.
Through a systematic survey of neural network games, we identified the dominant interaction metaphors and AI interaction patterns.
Our work suggests that game and UX designers should consider flow to structure the learning curve of human-AI interaction.
arXiv Detail & Related papers (2021-01-15T17:07:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.