Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning
- URL: http://arxiv.org/abs/2501.04007v1
- Date: Tue, 10 Dec 2024 11:58:39 GMT
- Title: Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning
- Authors: Natalya Weber, Christian Guckelsberger, Tom Froese
- Abstract summary: We argue that the Self-Optimization (SO) model satisfies the necessary and sufficient conditions of a creative process.
We conclude that the SO model allows for simulating and understanding the emergence of creative behaviors in artificial systems that learn.
- Score: 0.6144680854063939
- License:
- Abstract: The Self-Optimization (SO) model can be considered the third operational mode of the classical Hopfield Network (HN), leveraging the power of associative memory to enhance optimization performance. Moreover, it has been argued to express characteristics of minimal agency which, together with its biological plausibility, renders it useful for the study of artificial life. In this article, we draw attention to another facet of the SO model: its capacity for creativity. Drawing on the creativity studies literature, we argue that the model satisfies the necessary and sufficient conditions of a creative process. Moreover, we explore the dependency of different creative outcomes on learning parameters, specifically the learning and reset rates. We conclude that the SO model allows for simulating and understanding the emergence of creative behaviors in artificial systems that learn.
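The SO model described in the abstract can be summarized as: repeatedly reset a Hopfield network to a random state, let it relax to an attractor, and apply Hebbian learning on that attractor so the learned associative memory gradually reshapes the energy landscape. The following is a minimal sketch of that loop, not the authors' implementation; the network size, weight initialization, learning rate `alpha`, and the number of resets and update steps are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50            # number of Hopfield units (illustrative)
alpha = 1e-4      # Hebbian learning rate (illustrative)
n_resets = 200    # reset/convergence cycles (the "reset rate" regime)
n_steps = 10 * N  # asynchronous updates per convergence run

# Fixed symmetric weights define the optimization problem (random example).
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
W_learn = np.zeros((N, N))  # Hebbian modifications accumulated across resets

def converge(s, weights, steps):
    """Asynchronous Hopfield updates, approximating relaxation to an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

energies = []
for _ in range(n_resets):
    s = rng.choice([-1, 1], size=N)        # reset to a random state
    s = converge(s, W + W_learn, n_steps)  # relax on original + learned weights
    W_learn += alpha * np.outer(s, s)      # Hebbian learning on the attractor
    np.fill_diagonal(W_learn, 0)
    energies.append(-0.5 * s @ W @ s)      # track energy under the ORIGINAL weights
```

Tracking the energy under the original weights (rather than the modified ones) is what reveals the self-optimization effect: over many resets, the attractors the network falls into tend toward lower-energy solutions of the original problem, with the learning and reset rates controlling how the landscape is reshaped.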
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - On the Modeling Capabilities of Large Language Models for Sequential Decision Making [52.128546842746246]
Large pretrained models are showing increasingly better performance in reasoning and planning tasks.
We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly.
In environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities.
arXiv Detail & Related papers (2024-10-08T03:12:57Z) - Creativity Has Left the Chat: The Price of Debiasing Language Models [1.223779595809275]
We investigate the unintended consequences of Reinforcement Learning from Human Feedback on the creativity of Large Language Models (LLMs).
Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation.
arXiv Detail & Related papers (2024-06-08T22:14:51Z) - Verbalized Probabilistic Graphical Modeling with Large Language Models [8.961720262676195]
This work introduces a novel Bayesian prompting approach that facilitates training-free Bayesian inference with large language models.
Our results indicate that the model effectively enhances confidence elicitation and text generation quality, demonstrating its potential to improve AI language understanding systems.
arXiv Detail & Related papers (2024-06-08T16:35:31Z) - Creativity and Markov Decision Processes [0.20482269513546453]
We identify formal mappings between Boden's process theory of creativity and Markov Decision Processes (MDPs).
We study three out of eleven mappings in detail to understand which types of creative processes, opportunities for aberrations, and threats to creativity (uninspiration) could be observed in an MDP.
We conclude by discussing quality criteria for the selection of such mappings for future work and applications.
arXiv Detail & Related papers (2024-05-23T18:16:42Z) - Can AI Be as Creative as Humans? [84.43873277557852]
We prove in theory that AI can be as creative as humans under the condition that it can properly fit the data generated by human creators.
The debate on AI's creativity thus reduces to the question of its ability to fit a sufficient amount of data.
arXiv Detail & Related papers (2024-01-03T08:49:12Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture, which casts the Common Model of Cognition in this framework.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff [0.4873362301533825]
We explore the optimal balance between novelty and usefulness in generative Artificial Intelligence (AI) systems.
Overemphasizing either aspect can lead to limitations such as hallucinations and memorization.
arXiv Detail & Related papers (2023-06-06T11:44:57Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Challenges in creative generative models for music: a divergence maximization perspective [3.655021726150369]
The development of generative Machine Learning models for creative practices is attracting growing interest among artists, practitioners and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.