Active Divergence with Generative Deep Learning -- A Survey and Taxonomy
- URL: http://arxiv.org/abs/2107.05599v1
- Date: Mon, 12 Jul 2021 17:29:28 GMT
- Title: Active Divergence with Generative Deep Learning -- A Survey and Taxonomy
- Authors: Terence Broad, Sebastian Berns, Simon Colton, Mick Grierson
- Abstract summary: We present a taxonomy and comprehensive survey of the state of the art of active divergence techniques.
We highlight the potential for computational creativity researchers to advance these methods and use deep generative models in truly creative systems.
- Score: 0.6435984242701043
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Generative deep learning systems offer powerful tools for artefact
generation, given their ability to model distributions of data and generate
high-fidelity results. In the context of computational creativity, however, a
major shortcoming is that they are unable to explicitly diverge from the
training data in creative ways and are limited to fitting the target data
distribution. To address these limitations, there have been a growing number of
approaches for optimising, hacking and rewriting these models in order to
actively diverge from the training data. We present a taxonomy and
comprehensive survey of the state of the art of active divergence techniques,
highlighting the potential for computational creativity researchers to advance
these methods and use deep generative models in truly creative systems.
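To make the idea concrete, below is a minimal, hypothetical sketch of one generic active divergence recipe: fine-tuning a pretrained generator with an extra loss term that repels its outputs from the training data while a discriminator keeps them plausible. G, D, feat, and the weight lam are illustrative assumptions, not a specific method from the survey.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: one generic recipe for active divergence.
# G (generator), D (discriminator), and feat (a feature extractor) are
# assumed to be pretrained PyTorch modules; lam is an invented weight.
def active_divergence_loss(G, D, feat, z, real_batch, lam=0.1):
    fake = G(z)
    # Non-saturating realness term keeps samples plausible to D.
    realness = F.softplus(-D(fake)).mean()
    # Repulsion term: reward distance to the nearest training example
    # in feature space, pushing generations away from the data.
    f_fake = feat(fake).flatten(1)
    f_real = feat(real_batch).flatten(1)
    nearest = torch.cdist(f_fake, f_real).min(dim=1).values
    return realness - lam * nearest.mean()
```

Minimising this loss trades plausibility against novelty; the survey's taxonomy covers many variations on how that trade-off is set up.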
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Transfer Learning with Foundational Models for Time Series Forecasting using Low-Rank Adaptations [0.0]
This study proposes LLIAM, the Llama LoRA-Integrated Autoregressive Model.
Low-Rank Adaptations (LoRA) are used to adapt the model to diverse time series datasets during the fine-tuning phase.
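As a hedged illustration of the LoRA mechanism this model builds on (a generic sketch, not LLIAM's actual code; the rank and alpha values are invented):

```python
import torch
import torch.nn as nn

# Generic LoRA adapter over a frozen linear layer: the pretrained weight
# stays fixed while a trainable low-rank update B @ A is learned.
class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained projection
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank  # B starts at zero, so the update starts at zero

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```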
arXiv Detail & Related papers (2024-10-15T12:14:01Z)
- SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
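The pipeline plausibly reduces to a generate-critique-filter loop; the sketch below is an assumption about its shape, with llm_generate and critic_score as hypothetical placeholder callables rather than the paper's API.

```python
from typing import Callable

# Hypothetical sketch of a code-based critic gating question-code pairs.
# The caller supplies the generator and critic; the threshold is invented.
def build_question_code_data(
    questions: list[str],
    llm_generate: Callable[[str], str],
    critic_score: Callable[[str, str], float],
    threshold: float = 0.8,
) -> list[tuple[str, str]]:
    dataset = []
    for q in questions:
        solution = llm_generate(q)                  # candidate code solution
        if critic_score(q, solution) >= threshold:  # quality-control gate
            dataset.append((q, solution))
    return dataset
```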
arXiv Detail & Related papers (2024-08-28T06:33:03Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
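As a hedged sketch of the core mechanism, assuming a standard epsilon-prediction DDPM over flattened weight vectors (the MLP denoiser and noise schedule below are illustrative, not the paper's actual D2NWG architecture):

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

# Toy denoiser over flattened weight vectors, conditioned on the timestep.
class Denoiser(nn.Module):
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, w_t, t):
        t_feat = t.float().unsqueeze(-1) / T  # normalised timestep feature
        return self.net(torch.cat([w_t, t_feat], dim=-1))

def diffusion_loss(model, w0):
    # Standard epsilon-prediction objective on noised weight vectors w0.
    b = w0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(w0)
    ab = alpha_bar[t].unsqueeze(-1)
    w_t = ab.sqrt() * w0 + (1 - ab).sqrt() * eps
    return ((model(w_t, t) - eps) ** 2).mean()
```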
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Generative Learning of Continuous Data by Tensor Networks [45.49160369119449]
We introduce a new family of tensor network generative models for continuous data.
We benchmark the performance of this model on several synthetic and real-world datasets.
Our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.
arXiv Detail & Related papers (2023-10-31T14:37:37Z)
- Reinforcement Learning for Generative AI: A Survey [40.21640713844257]
This survey provides a high-level review that spans a range of application areas.
We present a rigorous taxonomy for this area and cover a broad variety of models and applications.
We conclude by outlining potential directions that might tackle the limitations of current models and expand the frontiers of generative AI.
arXiv Detail & Related papers (2023-08-28T06:15:14Z)
- Creative divergent synthesis with generative models [3.655021726150369]
Machine learning approaches now achieve impressive generation capabilities in numerous domains such as image, audio, and video.
We propose various perspectives on how the more challenging goal of creative divergence from the training data could be achieved, and provide preliminary results on our novel training objective, called Bounded Adversarial Divergence (BAD).
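The abstract does not give BAD's formulation, so the following is only a guess at the general shape of a "bounded divergence" objective: hinge penalties that keep the discriminator's score of generated samples inside a band between "fake" and "real", so outputs diverge from the data without collapsing into noise.

```python
import torch

# Purely illustrative; NOT the paper's actual BAD objective.
# lo and hi are invented bounds on the discriminator score.
def bounded_divergence_loss(D, fake, lo=-1.0, hi=1.0):
    s = D(fake)
    # Hinge penalties activate only when the score leaves the [lo, hi] band.
    return (torch.relu(lo - s) + torch.relu(s - hi)).mean()
```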
arXiv Detail & Related papers (2022-11-16T12:12:31Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
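A hedged sketch of the underlying machinery, assuming non-parametric scanning with the Berk-Jones statistic over activation p-values (the node grouping and background set from the paper are simplified away):

```python
import numpy as np

# Score how anomalous a sample's activation p-values are via the
# Berk-Jones scan statistic: n * KL(p || alpha) for Bernoulli rates.
def berk_jones(n_alpha: int, n: int, alpha: float) -> float:
    p = n_alpha / n
    if p <= alpha:
        return 0.0  # no excess of small p-values at this threshold
    if p >= 1.0:
        return n * np.log(1.0 / alpha)  # limit of the KL term as p -> 1
    return n * (p * np.log(p / alpha) + (1 - p) * np.log((1 - p) / (1 - alpha)))

def scan(pvalues, alphas=np.linspace(0.01, 0.5, 50)) -> float:
    # By the linear-time subset scanning property, the best subset at a
    # threshold alpha is simply the set of all p-values <= alpha.
    pv = np.sort(np.asarray(pvalues))
    n = len(pv)
    return max(berk_jones(int(np.searchsorted(pv, a, side="right")), n, a)
               for a in alphas)  # larger score => more anomalous sample
```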
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- Towards creativity characterization of generative models via group-based subset scanning [51.84144826134919]
We propose group-based subset scanning to quantify, detect, and characterize creative processes.
Creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2021-04-01T14:07:49Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models poses unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish plausible attacks on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
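The recipe plausibly amounts to a latent-space search, sketched below as a single-objective simplification (the paper uses multi-objective optimization; G, f, latent_dim, and the weights here are assumptions):

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: search a GAN's latent space for a plausible input
# that the audited classifier f assigns to a chosen target class.
# G.latent_dim is an invented attribute of the assumed generator.
def plausible_counterfactual(G, f, target: int, steps=200, lr=0.05):
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = G(z)
        # Push the classifier toward the target class...
        cls_loss = F.cross_entropy(f(x), torch.tensor([target]))
        # ...while keeping z near the prior so x stays plausible.
        loss = cls_loss + 0.01 * z.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```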
arXiv Detail & Related papers (2020-03-25T11:08:56Z)