The Creative Frontier of Generative AI: Managing the Novelty-Usefulness
Tradeoff
- URL: http://arxiv.org/abs/2306.03601v1
- Date: Tue, 6 Jun 2023 11:44:57 GMT
- Authors: Anirban Mukherjee and Hannah Chang
- Abstract summary: We explore the optimal balance between novelty and usefulness in generative Artificial Intelligence (AI) systems.
Overemphasizing either aspect can lead to limitations such as hallucinations and memorization.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, drawing inspiration from the human creativity literature, we
explore the optimal balance between novelty and usefulness in generative
Artificial Intelligence (AI) systems. We posit that overemphasizing either
aspect can lead to limitations such as hallucinations and memorization.
Hallucinations, characterized by AI responses containing random inaccuracies or
falsehoods, emerge when models prioritize novelty over usefulness.
Memorization, where AI models reproduce content from their training data,
results from an excessive focus on usefulness, potentially limiting creativity.
To address these challenges, we propose a framework that includes
domain-specific analysis, data and transfer learning, user preferences and
customization, custom evaluation metrics, and collaboration mechanisms. Our
approach aims to generate content that is both novel and useful within specific
domains, while considering the unique requirements of various contexts.
Related papers
- Untapped Potential in Self-Optimization of Hopfield Networks: The Creativity of Unsupervised Learning (2024-12-10)
  We argue that the Self-Optimization (SO) model satisfies the necessary and sufficient conditions of a creative process. We conclude that the SO model allows for simulating and understanding the emergence of creative behaviors in artificial systems that learn.
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice (2024-12-09)
  Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model. Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs. Both of these goals, the targeted removal of information from a model and the targeted suppression of information from a model's outputs, present various technical and substantive challenges.
- Alien Recombination: Exploring Concept Blends Beyond Human Cognitive Availability in Visual Art (2024-11-18)
  We show how AI can transcend human cognitive limitations in visual art creation. Our research hypothesizes that visual art contains a vast unexplored space of conceptual combinations. We present the Alien Recombination method to identify and generate concept combinations that lie beyond human cognitive availability.
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents (2024-07-11)
  The article explores the convergence of connectionist and symbolic artificial intelligence (AI). Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic. Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
- Finetuning an LLM on Contextual Knowledge of Classics for Q&A (2023-12-13)
  This project is an attempt to merge the knowledge of Classics with the capabilities of artificial intelligence. The goal of this project is to develop an LLM that not only reproduces contextual knowledge accurately but also exhibits a consistent "personality".
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model (2021-10-27)
  We develop a novel foundation model pre-trained on huge multimodal (visual and textual) data. We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
- Counterfactual Explanations as Interventions in Latent Space (2021-06-14)
  Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions needed to achieve the proposed explanations. We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations.
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation (2021-01-18)
  We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP). By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently. Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
- Neurosymbolic AI for Situated Language Understanding (2020-12-05)
  We argue that computational situated grounding provides a solution to some of these learning challenges. Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence. We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges.
- Creativity of Deep Learning: Conceptualization and Assessment (2020-12-03)
  We use insights from computational creativity to conceptualize and assess current applications of generative deep learning in creative domains. We highlight parallels between current systems and different models of human creativity, as well as their shortcomings.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.