A computational model and tool for generating more novel opportunities in professional innovation processes
- URL: http://arxiv.org/abs/2510.20402v1
- Date: Thu, 23 Oct 2025 10:09:57 GMT
- Title: A computational model and tool for generating more novel opportunities in professional innovation processes
- Authors: Neil Maiden, Konstantinos Zachos, James Lockerbie, Kostas Petrianakis, Amanda Brown,
- Abstract summary: This paper presents a new computational model of creative outcomes, informed by creativity theories and techniques. The model was evaluated using opportunities generated for an innovation project in the hospitality sector.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a new computational model of creative outcomes, informed by creativity theories and techniques, which was implemented to generate more novel opportunities for innovation projects. The model implemented five functions developed to generate innovation opportunities with higher novelty without loss of usefulness. The model was evaluated using opportunities generated for an innovation project in the hospitality sector. The evaluation revealed that the computational model generated outcomes that were more novel and/or useful than outcomes from Notebook LM and ChatGPT4o. However, not all model functions contributed to the generation of more novel opportunities, leading to new directions for further model development.
Related papers
- Cooking Up Creativity: Enhancing LLM Creativity through Structured Recombination [46.79423188943526]
We introduce a novel approach that enhances the creativity of Large Language Models (LLMs). We apply LLMs to translate between natural language and structured representations, and to perform the core creative leap. We demonstrate our approach in the culinary domain with DishCOVER, a model that generates creative recipes.
arXiv Detail & Related papers (2025-04-29T11:13:06Z)
- Unlocking the Potential of Past Research: Using Generative AI to Reconstruct Healthcare Simulation Models [0.0]
This study explores the feasibility of using generative artificial intelligence (AI) to recreate published models using Free and Open Source Software (FOSS). We successfully generated, tested, and internally reproduced two discrete-event simulation (DES) models, including user interfaces. The reported results were replicated for one model, but not the other, likely due to missing information on distributions.
arXiv Detail & Related papers (2025-03-27T16:10:02Z)
- Generative Models in Decision Making: A Survey [63.68746774576147]
Generative models can be incorporated into decision-making systems by generating trajectories that guide agents toward high-reward state-action regions or intermediate sub-goals. This paper presents a comprehensive review of the application of generative models in decision-making tasks.
arXiv Detail & Related papers (2025-02-24T12:31:28Z)
- GAI: Generative Agents for Innovation [3.176387928678296]
This study examines whether collective reasoning among generative agents can facilitate novel and coherent thinking that leads to innovation. It proposes GAI, a new LLM-empowered framework designed for reflection and interaction among multiple generative agents.
arXiv Detail & Related papers (2024-12-25T13:20:10Z)
- Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning [0.6144680854063939]
We argue that the Self-Optimization (SO) model satisfies the necessary and sufficient conditions of a creative process. We show that learning is needed to find creative outcomes above chance probability.
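The SO model's core loop is commonly described as repeatedly relaxing a Hopfield network from random initial states and reinforcing whichever attractor it settles into via Hebbian learning. A minimal NumPy sketch of that loop, assuming a random symmetric constraint matrix and an illustrative learning rate (a toy, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                        # number of units (illustrative)
W = rng.normal(size=(n, n))   # random symmetric constraint weights
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def relax(weights, s, steps=200):
    """Asynchronous Hopfield updates from state s."""
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(weights, s):
    return -0.5 * s @ weights @ s

# Self-optimization: relax from random states, then reinforce the attractor
# found with a small Hebbian update applied to a copy of the weights.
W_learned = W.copy()
alpha = 0.01                  # illustrative learning rate
energies = []
for _ in range(50):
    s = rng.choice([-1, 1], size=n)
    s = relax(W_learned, s)
    energies.append(energy(W, s))        # score against the ORIGINAL constraints
    W_learned += alpha * np.outer(s, s)  # Hebbian reinforcement of the attractor
    np.fill_diagonal(W_learned, 0.0)
```

The key design point the abstract alludes to: without the Hebbian step (`alpha = 0`), the attractors found are only as good as chance restarts allow; learning is what biases later relaxations toward better-than-chance outcomes.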
arXiv Detail & Related papers (2024-12-10T11:58:39Z)
- Can AI Be as Creative as Humans? [84.43873277557852]
We prove theoretically that AI can be as creative as humans, under the condition that it can properly fit the data generated by human creators.
The debate on AI's creativity thus reduces to the question of its ability to fit a sufficient amount of data.
arXiv Detail & Related papers (2024-01-03T08:49:12Z)
- SciMON: Scientific Inspiration Machines Optimized for Novelty [68.46036589035539]
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models use background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
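The "inspiration" retrieval step can be illustrated with a deliberately simplified bag-of-words retriever; the mini-corpus, vocabulary handling, and cosine-similarity measure below are all stand-ins for SciMON's actual retrieval components:

```python
import numpy as np
from collections import Counter

# Hypothetical mini-corpus of past-paper snippets to retrieve "inspirations" from.
corpus = [
    "graph neural networks for molecule property prediction",
    "contrastive learning of sentence embeddings",
    "retrieval augmented generation for question answering",
]

vocab = sorted({w for doc in corpus for w in doc.split()})

def bow(text):
    """Bag-of-words vector over the corpus vocabulary (OOV words are dropped)."""
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

doc_vecs = np.stack([bow(d) for d in corpus])

def retrieve_inspiration(background, k=1):
    """Return the k past-paper snippets most similar to the background context."""
    q = bow(background)
    sims = doc_vecs @ q / (
        np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9) + 1e-9
    )
    return [corpus[i] for i in np.argsort(-sims)[:k]]

print(retrieve_inspiration("question answering over retrieved documents"))
```

In SciMON itself, retrieval is followed by iterative novelty optimization against prior work; this sketch covers only the retrieval idea.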
arXiv Detail & Related papers (2023-05-23T17:12:08Z)
- Challenges in creative generative models for music: a divergence maximization perspective [3.655021726150369]
The development of generative machine learning models for creative practices is attracting growing interest among artists, practitioners, and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
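Subset scanning searches over groups of activations for the subset that is most anomalous under a scan statistic. A toy, single-threshold version over p-values can convey the idea; the binomial likelihood-ratio score and the simple thresholding used here are assumptions, not the paper's actual statistic or grouping strategy:

```python
import numpy as np

def subset_scan_score(pvalues):
    """Scan thresholds alpha; score the subset of p-values <= alpha by how much
    it exceeds its expected size under the null (uniform p-values)."""
    p = np.asarray(pvalues, dtype=float)
    n = len(p)
    best_score, best_subset = 0.0, np.array([], dtype=int)
    for a in np.unique(p):
        subset = np.where(p <= a)[0]
        n_a = len(subset)
        q = n_a / n
        if n_a <= a * n or q >= 1.0 or a >= 1.0:
            continue  # not more "significant" units than expected by chance
        # Binomial log-likelihood ratio for observing n_a of n below alpha.
        score = n_a * np.log(q / a) + (n - n_a) * np.log((1 - q) / (1 - a))
        if score > best_score:
            best_score, best_subset = score, subset
    return best_score, best_subset

# A cluster of unusually small p-values forms the most anomalous subset.
score, subset = subset_scan_score([0.01, 0.02, 0.03, 0.5, 0.6, 0.7, 0.8, 0.9])
print(subset)
```

The "creative samples generate larger subsets of anomalies" finding corresponds, in this toy, to creative inputs yielding larger high-scoring subsets than normal inputs.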
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
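The core mechanic can be sketched as a hedged NumPy toy: keep a pre-trained source model frozen, pad target-domain inputs into the source input space, learn only an additive input perturbation, and map source classes onto target classes. All shapes, the label mapping, and the learning rate below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "source" model: a fixed random ReLU network with 10 output classes.
W1 = rng.normal(size=(8, 32)) / np.sqrt(8)
W2 = rng.normal(size=(32, 10)) / np.sqrt(32)

# Target task: 2 classes on 4-dim inputs (synthetic data for illustration).
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(int)
label_map = np.array([0] * 5 + [1] * 5)  # source classes 0-4 -> 0, 5-9 -> 1

delta = np.zeros(8)                      # the ONLY trainable parameters
lr = 0.05
for _ in range(200):
    Xp = np.pad(X, ((0, 0), (0, 4))) + delta   # embed target inputs + program
    h = np.maximum(Xp @ W1, 0.0)               # frozen source model, forward
    logits = h @ W2
    # Aggregate source-class logits into target-class logits via the label map.
    tgt = np.stack([logits[:, label_map == c].sum(1) for c in (0, 1)], axis=1)
    probs = np.exp(tgt - tgt.max(1, keepdims=True))
    probs /= probs.sum(1, keepdims=True)
    grad_tgt = probs - np.eye(2)[y]            # cross-entropy gradient
    grad_logits = np.zeros_like(logits)
    for c in (0, 1):
        grad_logits[:, label_map == c] = grad_tgt[:, [c]]
    # Backpropagate through the frozen network to the input perturbation only.
    grad_x = ((grad_logits @ W2.T) * (h > 0)) @ W1.T
    delta -= lr * grad_x.mean(0)
```

Note the resource-efficiency argument: the source network's weights are never touched; only the small input "program" `delta` (and a fixed output label mapping) adapt the model to the target domain.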
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
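Pruning at that sparsity level can be sketched with simple magnitude pruning; this illustrates only the masking step on a weight matrix, not the paper's method for pruning continuous-depth models or its accuracy-preservation result:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping a (1 - sparsity) fraction."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W_pruned = magnitude_prune(W, 0.98)   # keep only the ~2% largest weights
print((W_pruned == 0).mean())         # fraction of zeroed weights, ~0.98
```

In practice such a mask would be applied inside the ODE dynamics function and the surviving weights fine-tuned; the sketch shows only how "98% fewer parameters" is operationalized as a sparsity mask.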
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Towards creativity characterization of generative models via group-based subset scanning [51.84144826134919]
We propose group-based subset scanning to quantify, detect, and characterize creative processes.
Creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2021-04-01T14:07:49Z)
- Modeling, Visualization, and Analysis of African Innovation Performance [0.0]
We discuss the concepts and emergence of Innovation Performance, and how to quantify it, primarily working with data from the Global Innovation Index.
We briefly review existing literature on using machine learning to model innovation performance, and use simple machine learning techniques to analyze and predict the "Mobile App Creation Indicator" from the Global Innovation Index.
arXiv Detail & Related papers (2020-08-18T12:16:10Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that provides a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.