Amplifying Human Creativity and Problem Solving with AI Through Generative Collective Intelligence
- URL: http://arxiv.org/abs/2505.19167v2
- Date: Wed, 04 Jun 2025 18:36:56 GMT
- Title: Amplifying Human Creativity and Problem Solving with AI Through Generative Collective Intelligence
- Authors: Thomas P. Kehler, Scott E. Page, Alex Pentland, Martin Reeves, John Seely Brown
- Abstract summary: We propose a general framework for human-AI collaboration that amplifies the capabilities of both types of intelligence. We refer to this as Generative Collective Intelligence (GCI). GCI employs AI in dual roles: as interactive agents and as technology that accumulates, organizes, and leverages knowledge.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a general framework for human-AI collaboration that amplifies the distinct capabilities of both types of intelligence. We refer to this as Generative Collective Intelligence (GCI). GCI employs AI in dual roles: as interactive agents and as technology that accumulates, organizes, and leverages knowledge. In this second role, AI creates a cognitive bridge between human reasoning and AI models. The AI functions as a social and cultural technology that enables groups to solve complex problems through structured collaboration that transcends traditional communication barriers. We argue that GCI can overcome limitations of purely algorithmic approaches to problem-solving and decision-making. We describe the mathematical foundations of GCI, based on the law of comparative judgment and minimum regret principles, and briefly illustrate its applications across various domains, including climate adaptation, healthcare transformation, and civic participation. By combining human creativity with AI's computational capabilities, GCI offers a promising approach to addressing complex societal challenges that neither humans nor machines can solve alone.
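The abstract grounds GCI's mathematics in the law of comparative judgment. As an illustrative sketch only (the paper does not publish its implementation; the function name and inputs below are our own), Thurstone's Case V scaling converts pairwise preference fractions into latent quality scores, which is the classic route from group-level pairwise judgments to a shared ranking:

```python
from statistics import NormalDist
from itertools import permutations

def thurstone_case_v(win_frac):
    """Estimate latent quality scores from pairwise win fractions
    (Thurstone Case V: equal, uncorrelated discriminal dispersions)."""
    nd = NormalDist()
    items = sorted({i for pair in win_frac for i in pair})
    n = len(items)
    scores = {i: 0.0 for i in items}
    for i, j in permutations(items, 2):
        # P(i preferred over j); filled in by symmetry if only one
        # direction of the pair was recorded
        p = win_frac.get((i, j))
        if p is None:
            p = 1.0 - win_frac[(j, i)]
        # clamp away from 0 and 1 so the inverse normal CDF stays finite
        p = min(max(p, 1e-6), 1.0 - 1e-6)
        # z_ij = Phi^{-1}(p) estimates s_i - s_j; averaging each item's
        # row of z-values yields its scale score
        scores[i] += nd.inv_cdf(p) / n
    return scores

# e.g. 76% of judges prefer A to B, 90% prefer A to C, 70% prefer B to C
scores = thurstone_case_v({("A", "B"): 0.76, ("A", "C"): 0.9, ("B", "C"): 0.7})
```

By construction the scores are centered near zero and only their differences are meaningful; here they would order A above B above C, matching the pairwise majorities.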
Related papers
- AI Flow: Perspectives, Scenarios, and Approaches [51.38621621775711]
We introduce AI Flow, a framework that integrates cutting-edge IT and CT advancements. First, a device-edge-cloud framework serves as the foundation, integrating end devices, edge servers, and cloud clusters. Second, we introduce the concept of familial models, which refers to a series of different-sized models with aligned hidden features. Third, connectivity- and interaction-based intelligence emergence is a novel paradigm of AI Flow.
arXiv Detail & Related papers (2025-06-14T12:43:07Z)
- Unraveling Human-AI Teaming: A Review and Outlook [2.3396455015352258]
Artificial Intelligence (AI) is advancing at an unprecedented pace, with clear potential to enhance decision-making and productivity. Yet, the collaborative decision-making process between humans and AI remains underdeveloped, often falling short of its transformative possibilities. This paper explores the evolution of AI agents from passive tools to active collaborators in human-AI teams, emphasizing their ability to learn, adapt, and operate autonomously in complex environments.
arXiv Detail & Related papers (2025-04-08T07:37:25Z)
- Augmenting Minds or Automating Skills: The Differential Role of Human Capital in Generative AI's Impact on Creative Tasks [4.39919134458872]
Generative AI is rapidly reshaping creative work, raising critical questions about its beneficiaries and societal implications. This study challenges prevailing assumptions by exploring how generative AI interacts with diverse forms of human capital in creative tasks. While AI democratizes access to creative tools, it simultaneously amplifies cognitive inequalities.
arXiv Detail & Related papers (2024-12-05T08:27:14Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAICo2, a novel human-AI co-construction framework. We take first steps towards a formalization of HAICo2 and discuss the difficult open research problems that it faces.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Incentive Compatibility for AI Alignment in Sociotechnical Systems: Positions and Prospects [11.086872298007835]
Existing methodologies primarily focus on technical facets, often neglecting the intricate sociotechnical nature of AI systems.
We posit a new problem worth exploring: the Incentive Compatibility Sociotechnical Alignment Problem (ICSAP).
We discuss three classical game-theoretic problems for achieving IC (mechanism design, contract theory, and Bayesian persuasion) and examine their perspectives, potential, and challenges for solving ICSAP.
arXiv Detail & Related papers (2024-02-20T10:52:57Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may bear on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.