The Workflow as Medium: A Framework for Navigating Human-AI Co-Creation
- URL: http://arxiv.org/abs/2511.18182v1
- Date: Sat, 22 Nov 2025 20:36:13 GMT
- Title: The Workflow as Medium: A Framework for Navigating Human-AI Co-Creation
- Authors: Lee Ackerman
- Abstract summary: The Creative Intelligence Loop (CIL) is a novel socio-technical framework for responsible human-AI co-creation. The CIL proposes a disciplined structure for dynamic human-AI collaboration, guiding the strategic integration of diverse AI teammates. The CIL was empirically demonstrated through the practice-led creation of two graphic novellas.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces the Creative Intelligence Loop (CIL), a novel socio-technical framework for responsible human-AI co-creation. Rooted in the 'Workflow as Medium' paradigm, the CIL proposes a disciplined structure for dynamic human-AI collaboration, guiding the strategic integration of diverse AI teammates who function as collaborators while the human remains the final arbiter for ethical alignment and creative integrity. The CIL was empirically demonstrated through the practice-led creation of two graphic novellas, investigating how AI could serve as an effective creative colleague within a subjective medium lacking objective metrics. The process required navigating multifaceted challenges including AI's 'jagged frontier' of capabilities, sycophancy, and attention-scarce feedback environments. This prompted iterative refinement of teaming practices, yielding emergent strategies: a multi-faceted critique system integrating adversarial AI roles to counter sycophancy, and prioritizing 'feedback-ready' concrete artifacts to elicit essential human critique. The resulting graphic novellas analyze distinct socio-technical governance failures: 'The Steward' examines benevolent AI paternalism in smart cities, illustrating how algorithmic hubris can erode freedom; 'Fork the Vote' probes democratic legitimacy by comparing centralized AI opacity with emergent collusion in federated networks. This work contributes a self-improving framework for responsible human-AI co-creation and two graphic novellas designed to foster AI literacy and dialogue through accessible narrative analysis of AI's societal implications.
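To make the abstract's anti-sycophancy critique system concrete, here is a minimal Python sketch of a multi-role critique round: the same draft is reviewed by several AI personas, including a deliberately adversarial one, and the critiques are only collected, never auto-applied, so the human stays the final arbiter. The role names, prompts, and `ask_model` interface are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a multi-faceted critique round in the spirit of the
# CIL's adversarial roles; all names and prompts are invented for illustration.
from dataclasses import dataclass

@dataclass
class Critique:
    role: str
    feedback: str

ROLES = {
    "supportive_editor": "Highlight what works in this draft and why.",
    "adversarial_critic": "Attack the weakest points; do not soften your judgment.",
    "audience_skeptic": "React as a reader who doubts the premise entirely.",
}

def critique_round(draft: str, ask_model) -> list[Critique]:
    """ask_model(system_prompt, draft) -> str; any LLM client can be plugged in."""
    return [Critique(role, ask_model(prompt, draft)) for role, prompt in ROLES.items()]

if __name__ == "__main__":
    # Stub model so the sketch runs without an API; swap in a real client.
    fake_model = lambda prompt, draft: f"({prompt}) on {len(draft)} chars"
    for critique in critique_round("Page 1 script: ...", fake_model):
        print(f"{critique.role}: {critique.feedback}")
```

Collecting all three critiques side by side is what counters sycophancy: an agreeable reviewer can no longer be the only voice the human sees.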
Related papers
- Humanizing AI Grading: Student-Centered Insights on Fairness, Trust, Consistency and Transparency [0.6138671548064355]
This study investigates students' perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course. Findings reveal concerns about AI's lack of contextual understanding and personalization. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.
arXiv Detail & Related papers (2026-02-08T01:18:10Z) - From Passive Tool to Socio-cognitive Teammate: A Conceptual Framework for Agentic AI in Human-AI Collaborative Learning [0.0]
We present a novel conceptual framework that charts the transition from AI as a tool to AI as a collaborative partner. We examine whether an AI, lacking genuine consciousness or shared intentionality, can be considered a true collaborator. This distinction has significant implications for pedagogy, instructional design, and the future research agenda for AI in education.
arXiv Detail & Related papers (2025-08-20T16:17:32Z) - Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking [0.0]
This study introduces a novel conceptual framework distinguishing problem-seeking from problem-solving to clarify the unique features of human intelligence in contrast to AI. The framework emphasizes that while AI excels at efficiency and optimization, it lacks the orientation derived from grounding and the embodied flexibility intrinsic to human cognition.
arXiv Detail & Related papers (2025-05-29T18:24:34Z) - A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
arXiv Detail & Related papers (2025-04-14T01:29:30Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
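As a generic illustration of what "adversarial tampering" against a learned control system can look like (this is a standard FGSM-style probe, not the paper's adversarial-explanations method), the sketch below perturbs a toy policy's observation so as to lower the probability of its preferred action; the network, sizes, and epsilon are all assumptions.

```python
# Generic FGSM-style tampering probe for an RL policy's observation input;
# the toy policy and epsilon are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))  # toy policy

def tampered_observation(obs: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Nudge obs in the direction that lowers the chosen action's log-probability."""
    obs = obs.clone().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax(dim=-1, keepdim=True)       # originally preferred action
    logp = F.log_softmax(logits, dim=-1).gather(-1, action).sum()
    logp.backward()                                    # gradient w.r.t. the observation
    return (obs - epsilon * obs.grad.sign()).detach()  # step against that gradient

obs = torch.randn(1, 4)
print("original action:", policy(obs).argmax(-1).item(),
      "| tampered action:", policy(tampered_observation(obs)).argmax(-1).item())
```

A robust controller, in the sense the summary claims, would keep choosing the same action under such small perturbations.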
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
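For readers unfamiliar with the next-token objective this summary builds on, here is a minimal, generic sketch of it (standard practice, not code from the paper): the targets are the input tokens shifted one position left, scored with cross-entropy over the vocabulary.

```python
# Minimal, generic next-token prediction loss; shapes are illustrative.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq, vocab) model outputs; tokens: (batch, seq) ids."""
    pred = logits[:, :-1, :]   # predictions at positions 0..n-2
    target = tokens[:, 1:]     # each position is trained to predict the NEXT token
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

logits = torch.randn(2, 8, 100)          # toy model output
tokens = torch.randint(0, 100, (2, 8))   # toy token ids
print(next_token_loss(logits, tokens))
```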
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Responsible AI Implementation: A Human-centered Framework for Accelerating the Innovation Process [0.8481798330936974]
This paper proposes a theoretical framework for responsible artificial intelligence (AI) implementation.
The proposed framework emphasizes a synergistic business technology approach for the agile co-creation process.
The framework emphasizes establishing and maintaining trust throughout the human-centered design and agile development of AI.
arXiv Detail & Related papers (2022-09-15T06:24:01Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can support this effort by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)