Feeling Guilty Being a c(ai)borg: Navigating the Tensions Between Guilt and Empowerment in AI Use
- URL: http://arxiv.org/abs/2506.00094v1
- Date: Fri, 30 May 2025 10:33:04 GMT
- Title: Feeling Guilty Being a c(ai)borg: Navigating the Tensions Between Guilt and Empowerment in AI Use
- Authors: Konstantin Aal, Tanja Aal, Vasil Navumau, David Unbehaun, Claudia Müller, Volker Wulf, Sarah Rüller
- Abstract summary: This paper explores the concept of feeling guilty as a 'c(ai)borg' - a human augmented by AI. The c(ai)borg vision advocates for a future where AI is openly embraced as a collaborative partner.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the emotional, ethical and practical dimensions of integrating Artificial Intelligence (AI) into personal and professional workflows, focusing on the concept of feeling guilty as a 'c(ai)borg' - a human augmented by AI. Inspired by Donna Haraway's Cyborg Manifesto, the study explores how AI challenges traditional notions of creativity, originality and intellectual labour. Using an autoethnographic approach, the authors reflect on their year-long experiences with AI tools, revealing a transition from initial guilt and reluctance to empowerment through skill-building and transparency. Key findings highlight the importance of basic academic skills, advanced AI literacy and honest engagement with AI results. The c(ai)borg vision advocates for a future where AI is openly embraced as a collaborative partner, fostering innovation and equity while addressing issues of access and agency. By reframing guilt as growth, the paper calls for a thoughtful and inclusive approach to AI integration.
Related papers
- A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing [0.0]
This work explores how Generative Artificial Intelligence (GenAI) serves as both a trigger and amplifier of cognitive dissonance (CD). We introduce a hypothetical construct of GenAI-induced CD, illustrating the tension between AI-driven efficiency and the principles of originality, effort, and intellectual ownership. We discuss strategies to mitigate this dissonance, including reflective pedagogy, AI literacy programs, transparency in GenAI use, and discipline-specific task redesigns.
arXiv Detail & Related papers (2025-02-08T21:31:04Z)
- Augmenting Minds or Automating Skills: The Differential Role of Human Capital in Generative AI's Impact on Creative Tasks [4.39919134458872]
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries and societal implications. This study challenges prevailing assumptions by exploring how generative AI interacts with diverse forms of human capital in creative tasks. While AI democratizes access to creative tools, it simultaneously amplifies cognitive inequalities.
arXiv Detail & Related papers (2024-12-05T08:27:14Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Untangling Critical Interaction with AI in Students' Written Assessment [2.8078480738404]
A key challenge lies in ensuring that humans are equipped with the required critical thinking and AI literacy skills.
This paper provides a first step toward conceptualizing the notion of critical learner interaction with AI.
Using both theoretical models and empirical data, our preliminary findings suggest a general lack of Deep interaction with AI during the writing process.
arXiv Detail & Related papers (2024-04-10T12:12:50Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- Agency and legibility for artists through Experiential AI [12.941266914933454]
Experiential AI is an emerging research field that addresses the challenge of making AI tangible and explicit.
We report on an empirical case study of an experiential AI system designed for creative data exploration.
We discuss how experiential AI can increase legibility and agency for artists.
arXiv Detail & Related papers (2023-06-04T11:00:07Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.