Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning
- URL: http://arxiv.org/abs/2501.04007v2
- Date: Fri, 14 Mar 2025 05:04:53 GMT
- Title: Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning
- Authors: Natalya Weber, Christian Guckelsberger, Tom Froese
- Abstract summary: We argue that the Self-Optimization (SO) model satisfies the necessary and sufficient conditions of a creative process. We show that learning is needed to find creative outcomes above chance probability.
- Score: 0.6144680854063939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Self-Optimization (SO) model can be considered the third operational mode of the classical Hopfield Network, leveraging the power of associative memory to enhance optimization performance. It has also been argued to express characteristics of minimal agency, which renders it useful for the study of artificial life. In this article, we draw attention to another facet of the SO model: its capacity for creativity. Drawing on creativity studies, we argue that the model satisfies the necessary and sufficient conditions of a creative process. Moreover, we show that learning is needed to find creative outcomes above chance probability. Furthermore, we demonstrate that modifying the learning parameters in the SO model gives rise to four different regimes that can account for both creative products and inconclusive outcomes, thus providing a framework for studying and understanding the emergence of creative behaviors in artificial systems that learn.
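The following Python snippet is a minimal sketch of the self-optimization loop the abstract describes: a Hopfield network is repeatedly relaxed from random initial states, and Hebbian learning is applied to each attractor it reaches, so the learned weights gradually reshape the dynamics. It follows the general self-modeling scheme associated with the SO model, not the paper's exact experimental setup; all parameter names and values (n_units, n_resets, alpha, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_units  = 100            # network size N (illustrative)
n_resets = 300            # number of random restarts (relaxations)
n_steps  = 10 * n_units   # asynchronous updates per relaxation
alpha    = 5e-4           # Hebbian learning rate; alpha = 0 disables learning

# Fixed, symmetric "problem" weights defining the energy landscape
# to be optimized; the diagonal is zero.
W = rng.choice([-1.0, 1.0], size=(n_units, n_units))
W = np.triu(W, 1)
W = W + W.T

# Learned weights acting as an associative memory of past attractors.
L = np.zeros_like(W)

def energy(s, W):
    # Classical Hopfield energy E(s) = -1/2 * s^T W s, always measured
    # on the original problem weights, not the learned ones.
    return -0.5 * s @ W @ s

for reset in range(n_resets):
    s = rng.choice([-1.0, 1.0], size=n_units)  # random initial state
    for _ in range(n_steps):
        i = rng.integers(n_units)
        # Asynchronous update on the combined weights: the learned
        # component biases dynamics toward previously visited attractors.
        s[i] = 1.0 if (W + L)[i] @ s >= 0.0 else -1.0
    # Hebbian reinforcement of the attractor reached in this relaxation.
    L += alpha * np.outer(s, s)
    np.fill_diagonal(L, 0.0)
    if (reset + 1) % 50 == 0:
        print(f"reset {reset + 1:4d}: attractor energy {energy(s, W):8.1f}")
```

Setting alpha to 0 reduces the loop to plain random-restart relaxation, which provides a chance-level baseline against which the paper's claim can be read: learning is needed to reach low-energy ("creative") configurations above chance probability.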
Related papers
- Cooking Up Creativity: A Cognitively-Inspired Approach for Enhancing LLM Creativity through Structured Representations [53.950760059792614]
Large Language Models (LLMs) excel at countless tasks, yet struggle with creativity.
We introduce a novel approach that couples LLMs with structured representations and cognitively inspired manipulations to generate more creative and diverse ideas.
We demonstrate our approach in the culinary domain with DishCOVER, a model that generates creative recipes.
arXiv Detail & Related papers (2025-04-29T11:13:06Z)
- Probing and Inducing Combinational Creativity in Vision-Language Models [52.76981145923602]
Recent advances in Vision-Language Models (VLMs) have sparked debate about whether their outputs reflect combinational creativity.
We propose the Identification-Explanation-Implication (IEI) framework, which decomposes creative processes into three levels.
To validate this framework, we curate CreativeMashup, a high-quality dataset of 666 artist-generated visual mashups annotated according to the IEI framework.
arXiv Detail & Related papers (2025-04-17T17:38:18Z)
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- On the Modeling Capabilities of Large Language Models for Sequential Decision Making [52.128546842746246]
Large pretrained models are showing increasingly better performance in reasoning and planning tasks.
We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly, by first generating reward models.
In environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities.
arXiv Detail & Related papers (2024-10-08T03:12:57Z)
- Benchmarking Language Model Creativity: A Case Study on Code Generation [39.546827184857754]
In this work, we introduce a framework for quantifying LLM creativity.
We define NEOGAUGE, a metric that quantifies both convergent and divergent thinking in the generated creative responses.
We test the proposed framework on Codeforces problems, which serve as both a natural dataset for coding tasks and a collection of prior human solutions.
arXiv Detail & Related papers (2024-07-12T05:55:22Z)
- Creativity Has Left the Chat: The Price of Debiasing Language Models [1.223779595809275]
We investigate the unintended consequences of Reinforcement Learning from Human Feedback (RLHF) on the creativity of Large Language Models (LLMs).
Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation.
arXiv Detail & Related papers (2024-06-08T22:14:51Z)
- Creativity and Markov Decision Processes [0.20482269513546453]
We identify formal mappings between Boden's process theory of creativity and Markov Decision Processes (MDPs).
We study three out of eleven mappings in detail to understand which types of creative processes, opportunities for aberrations, and threats to creativity (uninspiration) could be observed in an MDP.
We conclude by discussing quality criteria for the selection of such mappings for future work and applications.
arXiv Detail & Related papers (2024-05-23T18:16:42Z)
- Can AI Be as Creative as Humans? [84.43873277557852]
We prove in theory that AI can be as creative as humans under the condition that it can properly fit the data generated by human creators.
The debate on AI's creativity is thus reduced to the question of its ability to fit a sufficient amount of data.
arXiv Detail & Related papers (2024-01-03T08:49:12Z)
- The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff [0.4873362301533825]
We explore the optimal balance between novelty and usefulness in generative Artificial Intelligence (AI) systems.
Overemphasizing either aspect can lead to limitations such as hallucinations and memorization.
arXiv Detail & Related papers (2023-06-06T11:44:57Z)
- UniDiff: Advancing Vision-Language Models with Generative and Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Challenges in creative generative models for music: a divergence maximization perspective [3.655021726150369]
The development of generative Machine Learning models for creative practices is attracting growing interest among artists, practitioners, and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.