Towards creativity characterization of generative models via group-based
subset scanning
- URL: http://arxiv.org/abs/2104.00479v1
- Date: Thu, 1 Apr 2021 14:07:49 GMT
- Title: Towards creativity characterization of generative models via group-based
subset scanning
- Authors: Celia Cintas, Payel Das, Brian Quanz, Skyler Speakman, Victor
Akinwande, Pin-Yu Chen
- Abstract summary: We propose group-based subset scanning to quantify, detect, and characterize creative processes.
Creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
- Score: 51.84144826134919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models, such as Variational Autoencoders (VAEs), have been
employed widely in computational creativity research. However, such models
discourage out-of-distribution generation to avoid spurious sample generation,
limiting their creativity. Thus, incorporating research on human creativity
into generative deep learning techniques presents an opportunity to make their
outputs more compelling and human-like. As we see the emergence of generative
models directed to creativity research, a need for machine learning-based
surrogate metrics to characterize creative output from these models is
imperative. We propose group-based subset scanning to quantify, detect, and
characterize creative processes by detecting a subset of anomalous
node-activations in the hidden layers of generative models. Our experiments on
original, typically decoded, and "creatively decoded" (Das et al., 2020) image
datasets reveal that the distribution of the proposed subset scores is more
useful for detecting creative processes in the activation space than in the pixel
space. Further, we found that creative samples generate larger subsets of
anomalies than normal or non-creative samples across datasets. The node
activations highlighted during the creative decoding process are different from
those responsible for normal sample generation.
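The scanning idea in the abstract can be illustrated with a simplified sketch: compute an empirical p-value for each hidden-node activation against a background of activations from normal samples, then score subsets of low-p-value nodes with a nonparametric scan statistic (Berk-Jones here). By the linear-time subset scanning (LTSS) property, the highest-scoring subset at each significance threshold is exactly the set of nodes whose p-values fall at or below that threshold, so only a few thresholds need to be checked. The function names, thresholds, and choice of statistic below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def empirical_pvalues(sample, background):
    """One-sided empirical p-value per node: fraction of background
    activations at least as extreme as the observed activation."""
    n_bg = background.shape[0]
    return (1 + (background >= sample).sum(axis=0)) / (n_bg + 1)

def berk_jones(n_alpha, n_total, alpha):
    """Berk-Jones statistic: KL divergence between the observed fraction
    of significant p-values and the expected fraction alpha."""
    p = n_alpha / n_total
    if p <= alpha:
        return 0.0          # no excess of small p-values
    if p >= 1.0:
        return n_total * np.log(1.0 / alpha)
    return n_total * (p * np.log(p / alpha)
                      + (1 - p) * np.log((1 - p) / (1 - alpha)))

def scan_subset(pvals, alphas=(0.01, 0.05, 0.1)):
    """For each threshold alpha, the optimal subset is the set of nodes
    with p-value <= alpha (LTSS property); return the best over alphas."""
    best_score, best_alpha, best_subset = 0.0, None, None
    n_total = pvals.size
    for alpha in alphas:
        subset = pvals <= alpha
        score = berk_jones(int(subset.sum()), n_total, alpha)
        if score > best_score:
            best_score, best_alpha, best_subset = score, alpha, subset
    return best_score, best_alpha, best_subset
```

In this toy setting, a "creative" sample whose activations shift a group of nodes away from the background yields a larger anomalous subset and a higher scan score than a normal sample, mirroring the paper's finding that creative samples produce larger subsets of anomalies.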
Related papers
- Creativity Has Left the Chat: The Price of Debiasing Language Models [1.223779595809275]
We investigate the unintended consequences of Reinforcement Learning from Human Feedback (RLHF) on the creativity of Large Language Models (LLMs).
Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation.
arXiv Detail & Related papers (2024-06-08T22:14:51Z)
- Creative Beam Search: LLM-as-a-Judge For Improving Response Generation [2.4555276449137042]
We propose a method called Creative Beam Search that uses Diverse Beam Search and LLM-as-a-Judge to perform response generation and response validation.
The results of a qualitative experiment show how our approach can provide better output than standard sampling techniques.
arXiv Detail & Related papers (2024-04-30T18:00:02Z)
- Generative Active Learning for Image Synthesis Personalization [57.01364199734464]
This paper explores the application of active learning, traditionally studied in the context of discriminative models, to generative models.
The primary challenge in conducting active learning on generative models lies in the open-ended nature of querying.
We introduce the concept of anchor directions to transform the querying process into a semi-open problem.
arXiv Detail & Related papers (2024-03-22T06:45:45Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- Regularising Inverse Problems with Generative Machine Learning Models [9.971351129098336]
We consider the use of generative models in a variational regularisation approach to inverse problems.
The success of generative regularisers depends on the quality of the generative model.
We show that the success of solutions restricted to lie exactly in the range of the generator depends strongly on the capabilities of the generative model.
arXiv Detail & Related papers (2021-07-22T15:47:36Z)
- Active Divergence with Generative Deep Learning -- A Survey and Taxonomy [0.6435984242701043]
We present a taxonomy and comprehensive survey of the state of the art of active divergence techniques.
We highlight the potential for computational creativity researchers to advance these methods and use deep generative models in truly creative systems.
arXiv Detail & Related papers (2021-07-12T17:29:28Z)
- Open Set Recognition with Conditional Probabilistic Generative Models [51.40872765917125]
We propose Conditional Probabilistic Generative Models (CPGM) for open set recognition.
CPGM can not only detect unknown samples but also classify known classes by forcing different latent features to approximate conditional Gaussian distributions.
Experiment results on multiple benchmark datasets reveal that the proposed method significantly outperforms the baselines.
arXiv Detail & Related papers (2020-08-12T06:23:49Z)
- Reverse Engineering Configurations of Neural Text Generation Models [86.9479386959155]
The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area.
We conduct an extensive suite of diagnostic tests to observe whether modeling choices leave detectable artifacts in the text they generate.
Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone.
arXiv Detail & Related papers (2020-04-13T21:02:44Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.