Towards Creativity Characterization of Generative Models via Group-based
Subset Scanning
- URL: http://arxiv.org/abs/2203.00523v2
- Date: Thu, 3 Mar 2022 08:38:47 GMT
- Title: Towards Creativity Characterization of Generative Models via Group-based
Subset Scanning
- Authors: Celia Cintas, Payel Das, Brian Quanz, Girmaw Abebe Tadesse, Skyler
Speakman, Pin-Yu Chen
- Abstract summary: We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
- Score: 64.6217849133164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models, such as Variational Autoencoders (VAEs) and
Generative Adversarial Networks (GANs), have been employed widely in
computational creativity research. However, such models discourage
out-of-distribution generation to avoid spurious sample generation, thereby
limiting their creativity. Thus, incorporating research on human creativity
into generative deep learning techniques presents an opportunity to make their
outputs more compelling and human-like. As we see the emergence of generative
models directed toward creativity research, a need for machine learning-based
surrogate metrics to characterize creative output from these models is
imperative. We propose group-based subset scanning to identify, quantify, and
characterize creative processes by detecting a subset of anomalous
node-activations in the hidden layers of the generative models. Our experiments
on the standard image benchmarks, and their "creatively generated" variants,
reveal that the distribution of the proposed subset scores is more useful for
detecting creative processes in the activation space than in the pixel
space. Further, we find that creative samples generate larger subsets of
anomalies than normal or non-creative samples across datasets. The node
activations highlighted during the creative decoding process are different from
those responsible for normal sample generation. Lastly, we assess whether
images from the subsets selected by our method were also found creative by
human evaluators, presenting a link between creativity perception in humans and
node activations within deep neural nets.
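The scanning idea in the abstract can be illustrated with a minimal, single-sample sketch: compute a per-node empirical p-value for each hidden activation against activations from normal (background) samples, then score the most anomalous subset of nodes with a nonparametric scan statistic (Berk-Jones here, which the subset-scanning literature builds on). This is an illustrative toy under stated assumptions, not the paper's implementation; all function names and the synthetic data are hypothetical.

```python
import numpy as np

def empirical_pvalues(background, test):
    """Per-node empirical p-values: how extreme each activation of `test` is
    relative to activations of the same node on background (normal) samples.
    background: (M, D) array of node activations; test: (D,) activations."""
    exceed = np.sum(background >= test, axis=0)
    return (1.0 + exceed) / (1.0 + background.shape[0])

def berk_jones_score(pvalues):
    """Score the most anomalous subset of nodes with the Berk-Jones scan
    statistic. The linear-time subset scanning (LTSS) property means it
    suffices to check thresholds equal to the observed p-values."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    n_total = len(p)
    best = 0.0
    for alpha in np.unique(p):
        if alpha >= 1.0:
            continue  # threshold 1 carries no evidence of anomaly
        n_sig = int(np.searchsorted(p, alpha, side="right"))
        q = n_sig / n_total
        if q <= alpha:
            continue  # no excess of small p-values at this threshold
        if q < 1.0:
            kl = q * np.log(q / alpha) + (1 - q) * np.log((1 - q) / (1 - alpha))
        else:
            kl = np.log(1.0 / alpha)
        best = max(best, n_total * kl)
    return best

# Toy demo: 50 hidden nodes, 200 background samples; the "creative" sample
# inflates 20 of its node activations, mimicking an anomalous subset.
rng = np.random.default_rng(0)
background = rng.normal(size=(200, 50))
normal_sample = rng.normal(size=50)
creative_sample = rng.normal(size=50)
creative_sample[:20] += 3.0

score_normal = berk_jones_score(empirical_pvalues(background, normal_sample))
score_creative = berk_jones_score(empirical_pvalues(background, creative_sample))
```

In this sketch the sample with the inflated activation subset receives a markedly larger subset score than the normal sample, mirroring the paper's finding that creative samples yield larger anomalous subsets; the group-based method extends this by scanning jointly over groups of samples and nodes.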
Related papers
- Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z)
- Creativity Has Left the Chat: The Price of Debiasing Language Models [1.223779595809275]
We investigate the unintended consequences of Reinforcement Learning from Human Feedback on the creativity of Large Language Models (LLMs).
Our findings have significant implications for marketers who rely on LLMs for creative tasks such as copywriting, ad creation, and customer persona generation.
arXiv Detail & Related papers (2024-06-08T22:14:51Z)
- Generative Active Learning for Image Synthesis Personalization [57.01364199734464]
This paper explores the application of active learning, traditionally studied in the context of discriminative models, to generative models.
The primary challenge in conducting active learning on generative models lies in the open-ended nature of querying.
We introduce the concept of anchor directions to transform the querying process into a semi-open problem.
arXiv Detail & Related papers (2024-03-22T06:45:45Z)
- Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z)
- Prototype Generation: Robust Feature Visualisation for Data Independent Interpretability [1.223779595809275]
Prototype Generation is a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models.
We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to the unnatural internal activations.
arXiv Detail & Related papers (2023-09-29T11:16:06Z)
- An Unsupervised Way to Understand Artifact Generating Internal Units in Generative Neural Networks [19.250873974729817]
We propose the concept of local activation to detect artifact generations without additional supervision.
We empirically verify that our approach can detect and correct artifact generations from GANs with various datasets.
arXiv Detail & Related papers (2021-12-16T11:59:26Z)
- Active Divergence with Generative Deep Learning -- A Survey and Taxonomy [0.6435984242701043]
We present a taxonomy and comprehensive survey of the state of the art of active divergence techniques.
We highlight the potential for computational creativity researchers to advance these methods and use deep generative models in truly creative systems.
arXiv Detail & Related papers (2021-07-12T17:29:28Z)
- Towards creativity characterization of generative models via group-based subset scanning [51.84144826134919]
We propose group-based subset scanning to quantify, detect, and characterize creative processes.
Creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2021-04-01T14:07:49Z)
- Reverse Engineering Configurations of Neural Text Generation Models [86.9479386959155]
The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area.
We conduct an extensive suite of diagnostic tests to observe whether modeling choices leave detectable artifacts in the text they generate.
Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone.
arXiv Detail & Related papers (2020-04-13T21:02:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.