The Ramon Llull's Thinking Machine for Automated Ideation
- URL: http://arxiv.org/abs/2508.19200v3
- Date: Wed, 03 Sep 2025 16:14:12 GMT
- Title: The Ramon Llull's Thinking Machine for Automated Ideation
- Authors: Xinran Zhao, Boyuan Zheng, Chenglei Si, Haofei Yu, Ken Liu, Runlong Zhou, Ruochen Li, Tong Chen, Xiang Li, Yiming Zhang, Tongshuang Wu
- Abstract summary: Our approach defines three compositional axes: Theme, Domain, and Method. We show that prompting LLMs with curated combinations produces research ideas that are diverse, relevant, and grounded in current literature. This modern thinking machine offers a lightweight, interpretable tool for augmenting scientific creativity and suggests a path toward collaborative ideation between humans and AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper revisits Ramon Llull's Ars combinatoria - a medieval framework for generating knowledge through symbolic recombination - as a conceptual foundation for building a modern Llull's thinking machine for research ideation. Our approach defines three compositional axes: Theme (e.g., efficiency, adaptivity), Domain (e.g., question answering, machine translation), and Method (e.g., adversarial training, linear attention). These elements represent high-level abstractions common in scientific work - motivations, problem settings, and technical approaches - and serve as building blocks for LLM-driven exploration. We mine elements from human experts or conference papers and show that prompting LLMs with curated combinations produces research ideas that are diverse, relevant, and grounded in current literature. This modern thinking machine offers a lightweight, interpretable tool for augmenting scientific creativity and suggests a path toward collaborative ideation between humans and AI.
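The abstract's core mechanism, enumerating Theme × Domain × Method combinations as seeds for LLM prompting, can be sketched in a few lines. This is a minimal illustration only: the axis elements below are the examples named in the abstract, while the prompt template and function name are invented for this sketch and do not come from the paper.

```python
from itertools import product

# Axis elements taken from the abstract's examples; the paper mines
# larger sets from human experts or conference papers.
themes = ["efficiency", "adaptivity"]
domains = ["question answering", "machine translation"]
methods = ["adversarial training", "linear attention"]

def build_prompts(themes, domains, methods):
    """Enumerate Theme x Domain x Method combinations as ideation prompts.

    Each combination becomes one prompt an LLM could expand into a
    full research idea (template is illustrative, not the paper's).
    """
    return [
        f"Propose a research idea that improves {theme} "
        f"in {domain} using {method}."
        for theme, domain, method in product(themes, domains, methods)
    ]

prompts = build_prompts(themes, domains, methods)
print(len(prompts))   # 2 x 2 x 2 = 8 combinations
print(prompts[0])
```

In practice the paper curates which combinations to prompt with rather than exhaustively expanding the full Cartesian product, which grows multiplicatively with each axis.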
Related papers
- Alien Science: Sampling Coherent but Cognitively Unavailable Research Directions from Idea Atoms [53.907293349123506]
Large language models often fail at producing ideas that are both coherent and non-obvious to the current community. We formalize this gap through cognitive availability, the likelihood that a research direction would be naturally proposed by a typical researcher. We learn two complementary models: a coherence model that scores whether a set of atoms constitutes a viable direction, and an availability model that scores how likely that direction is to be generated.
arXiv Detail & Related papers (2026-03-01T13:05:19Z)
- Deep Ideation: Designing LLM Agents to Generate Novel Research Ideas on Scientific Concept Network [9.317340414316446]
We propose a framework that integrates a scientific concept network capturing keyword co-occurrence and contextual relationships. A critic engine, trained on real-world reviewer feedback, guides the process by providing continuous feedback on the novelty and feasibility of ideas. Our approach improves the quality of generated ideas by 10.67% compared to other methods, with ideas surpassing top conference acceptance levels.
arXiv Detail & Related papers (2025-11-04T04:00:20Z)
- The Budget AI Researcher and the Power of RAG Chains [4.797627592793464]
Current approaches to supporting research idea generation often rely on generic large language models (LLMs). Our framework, The Budget AI Researcher, uses retrieval-augmented generation chains, vector databases, and topic-guided pairing to recombine concepts from hundreds of machine learning papers. The system ingests papers from nine major AI conferences, which collectively span the vast subfields of machine learning, and organizes them into a hierarchical topic tree.
arXiv Detail & Related papers (2025-06-14T02:40:35Z)
- Probing and Inducing Combinational Creativity in Vision-Language Models [52.76981145923602]
Recent advances in Vision-Language Models (VLMs) have sparked debate about whether their outputs reflect combinational creativity. We propose the Identification-Explanation-Implication (IEI) framework, which decomposes creative processes into three levels. To validate this framework, we curate CreativeMashup, a high-quality dataset of 666 artist-generated visual mashups annotated according to the IEI framework.
arXiv Detail & Related papers (2025-04-17T17:38:18Z)
- LLMs can Realize Combinatorial Creativity: Generating Creative Ideas via LLMs for Scientific Research [5.564972490390789]
We present a framework that explicitly implements creativity theory using Large Language Models (LLMs). The framework features a generalization-level retrieval system for cross-domain knowledge discovery and a structured process for idea generation. Experiments on the OAG-Bench dataset demonstrate our framework's effectiveness, consistently outperforming baseline approaches in generating ideas that align with real research developments.
arXiv Detail & Related papers (2024-12-18T18:41:14Z)
- Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents [64.64280477958283]
An exponential increase in scientific literature makes it challenging for researchers to stay current with recent advances and identify meaningful research directions.
Recent developments in large language models (LLMs) suggest a promising avenue for automating the generation of novel research ideas.
We propose a Chain-of-Ideas (CoI) agent, an LLM-based agent that organizes relevant literature in a chain structure to effectively mirror the progressive development in a research domain.
arXiv Detail & Related papers (2024-10-17T03:26:37Z)
- Many Heads Are Better Than One: Improved Scientific Idea Generation by A LLM-Based Multi-Agent System [62.832818186789545]
Virtual Scientists (VirSci) is a multi-agent system designed to mimic the teamwork inherent in scientific research. VirSci organizes a team of agents to collaboratively generate, evaluate, and refine research ideas. We show that this multi-agent approach outperforms the state-of-the-art method in producing novel scientific ideas.
arXiv Detail & Related papers (2024-10-12T07:16:22Z)
- Scideator: Human-LLM Scientific Idea Generation Grounded in Research-Paper Facet Recombination [23.48126633604684]
Scideator is a mixed-initiative ideation tool based on large language models. It allows users to explore the idea space by interactively recombining facets from scientific papers. It also helps users gauge idea originality by searching the literature for overlaps and assessing idea novelty.
arXiv Detail & Related papers (2024-09-23T00:09:34Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains under consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- Beyond Words: A Mathematical Framework for Interpreting Large Language Models [8.534513717370434]
Large language models (LLMs) are powerful AI tools that can generate and comprehend natural language text and other complex information.
We propose Hex, a framework that clarifies key terms and concepts in LLM research, such as hallucinations, alignment, self-verification, and chain-of-thought reasoning.
We argue that our formal definitions and results are crucial for advancing the discussion on how to build generative AI systems.
arXiv Detail & Related papers (2023-11-06T11:13:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.