The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff
- URL: http://arxiv.org/abs/2306.03601v1
- Date: Tue, 6 Jun 2023 11:44:57 GMT
- Title: The Creative Frontier of Generative AI: Managing the Novelty-Usefulness Tradeoff
- Authors: Anirban Mukherjee and Hannah Chang
- Abstract summary: We explore the optimal balance between novelty and usefulness in generative Artificial Intelligence (AI) systems.
Overemphasizing either aspect can lead to limitations such as hallucinations and memorization.
- Score: 0.4873362301533825
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, drawing inspiration from the human creativity literature, we
explore the optimal balance between novelty and usefulness in generative
Artificial Intelligence (AI) systems. We posit that overemphasizing either
aspect can lead to limitations such as hallucinations and memorization.
Hallucinations, characterized by AI responses containing random inaccuracies or
falsehoods, emerge when models prioritize novelty over usefulness.
Memorization, where AI models reproduce content from their training data,
results from an excessive focus on usefulness, potentially limiting creativity.
To address these challenges, we propose a framework that includes
domain-specific analysis, data and transfer learning, user preferences and
customization, custom evaluation metrics, and collaboration mechanisms. Our
approach aims to generate content that is both novel and useful within specific
domains, while considering the unique requirements of various contexts.
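As a rough illustration of the "custom evaluation metrics" component mentioned above, the sketch below scores a generated candidate by blending a novelty term (distance to the nearest training example in an embedding space, where near-zero distance suggests memorization) with a usefulness term supplied by some domain-specific evaluator. This is a hypothetical sketch, not the authors' method; the embeddings, the usefulness scorer, and the weight `alpha` are illustrative assumptions.

```python
# Hypothetical sketch (not from the paper): one way a custom evaluation metric
# could trade off novelty against usefulness when ranking generated candidates.
# Embeddings, the usefulness score, and `alpha` are illustrative assumptions.
import numpy as np

def novelty_score(candidate_emb: np.ndarray, training_embs: np.ndarray) -> float:
    """Novelty as distance to the nearest training example (higher = more novel).
    A score near zero suggests the candidate memorizes training content."""
    dists = np.linalg.norm(training_embs - candidate_emb, axis=1)
    return float(dists.min())

def combined_score(candidate_emb: np.ndarray,
                   training_embs: np.ndarray,
                   usefulness: float,
                   alpha: float = 0.5) -> float:
    """Blend novelty and usefulness; alpha tunes the tradeoff per domain.
    `usefulness` is assumed to come from a domain-specific evaluator in [0, 1]."""
    nov = novelty_score(candidate_emb, training_embs)
    nov = nov / (1.0 + nov)  # squash to [0, 1) so the two terms are comparable
    return alpha * nov + (1.0 - alpha) * usefulness

# Toy usage: three training embeddings and one candidate.
train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
cand = np.array([0.9, 0.8])
print(combined_score(cand, train, usefulness=0.7, alpha=0.4))
```

Raising `alpha` rewards candidates farther from the training data (risking inaccuracy), while lowering it rewards fidelity to known content (risking memorization), which mirrors the tradeoff the paper describes.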
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Finetuning an LLM on Contextual Knowledge of Classics for Q&A [0.0]
This project is an attempt to merge the knowledge of Classics with the capabilities of artificial intelligence.
The goal of this project is to develop an LLM that not only reproduces contextual knowledge accurately but also exhibits a consistent "personality".
arXiv Detail & Related papers (2023-12-13T02:32:01Z)
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction [104.29108668347727]
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- Challenges in creative generative models for music: a divergence maximization perspective [3.655021726150369]
The development of generative machine learning models for creative practices is attracting growing interest among artists, practitioners, and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Neurosymbolic AI for Situated Language Understanding [13.249453757295083]
We argue that computational situated grounding provides a solution to some of these learning challenges.
Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence.
We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges.
arXiv Detail & Related papers (2020-12-05T05:03:28Z)
- Creativity of Deep Learning: Conceptualization and Assessment [1.5738019181349994]
We use insights from computational creativity to conceptualize and assess current applications of generative deep learning in creative domains.
We highlight parallels between current systems and different models of human creativity as well as their shortcomings.
arXiv Detail & Related papers (2020-12-03T21:44:07Z)