Rewriting a Deep Generative Model
- URL: http://arxiv.org/abs/2007.15646v1
- Date: Thu, 30 Jul 2020 17:58:16 GMT
- Title: Rewriting a Deep Generative Model
- Authors: David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba
- Abstract summary: We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
- Score: 56.91974064348137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A deep generative model such as a GAN learns to model a rich set of semantic
and physical rules about the target distribution, but up to now, it has been
obscure how such rules are encoded in the network, or how a rule could be
changed. In this paper, we introduce a new problem setting: manipulation of
specific rules encoded by a deep generative model. To address the problem, we
propose a formulation in which the desired rule is changed by manipulating a
layer of a deep network as a linear associative memory. We derive an algorithm
for modifying one entry of the associative memory, and we demonstrate that
several interesting structural rules can be located and modified within the
layers of state-of-the-art generative models. We present a user interface to
enable users to interactively change the rules of a generative model to achieve
desired effects, and we show several proof-of-concept applications. Finally,
results on multiple datasets demonstrate the advantage of our method against
standard fine-tuning methods and edit transfer algorithms.
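To make the associative-memory formulation concrete, here is a minimal NumPy sketch of a rank-one edit of a layer weight W viewed as a memory that maps keys to values (v ≈ W k). The function name, the optional key-covariance argument C, and the unconstrained least-change update are illustrative assumptions; the paper's actual algorithm constrains the edit using the layer's key statistics and handles real (nonlinear) generator layers.

```python
import numpy as np

def rewrite_memory_entry(W, k_star, v_star, C=None):
    """Sketch of editing one entry of a linear associative memory.

    The layer weight W is viewed as a memory with values = W @ keys.
    Returns a rank-one update W_new with W_new @ k_star == v_star, while
    leaving W unchanged along directions unrelated to the update direction.
    Hypothetical simplification, not the authors' full algorithm.
    """
    # Update direction: the raw key, or the key whitened by the key
    # covariance C (one way to respect the key statistics).
    d = k_star if C is None else np.linalg.solve(C, k_star)
    # How far the current memory is from the desired value for k_star.
    residual = v_star - W @ k_star
    # Rank-one correction that exactly fixes the response to k_star.
    return W + np.outer(residual, d) / (k_star @ d)

# Toy usage: store a new value for one key in a random 8x16 "memory".
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
k_star, v_star = rng.standard_normal(16), rng.standard_normal(8)
W_new = rewrite_memory_entry(W, k_star, v_star)
assert np.allclose(W_new @ k_star, v_star)
```

The key design point is that the edit is a rank-one update: it changes the memory's response to the chosen key while minimally disturbing what is stored for other keys.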
Related papers
- A Scalable Matrix Visualization for Understanding Tree Ensemble Classifiers [20.416696003269674]
This paper introduces a scalable visual analysis method to explain tree ensemble classifiers that contain tens of thousands of rules.
We develop an anomaly-biased model reduction method to prioritize these rules at each hierarchical level.
Our method fosters a deeper understanding of both common and anomalous rules, thereby enhancing interpretability without sacrificing comprehensiveness.
arXiv Detail & Related papers (2024-09-05T01:48:11Z) - From pixels to planning: scale-free active inference [42.04471916762639]
This paper describes a discrete state-space model -- and accompanying methods -- for generative modelling.
We consider deep or hierarchical forms using the renormalisation group.
This technical note illustrates the automatic discovery, learning and deployment of RGMs using a series of applications.
arXiv Detail & Related papers (2024-07-27T14:20:48Z) - Rewriting Geometric Rules of a GAN [32.22250082294461]
Current machine learning approaches miss a key element of the creative process -- the ability to synthesize things that go far beyond the data distribution and everyday experience.
We enable a user to "warp" a given model by editing just a handful of original model outputs with desired geometric changes.
Our method allows a user to create a model that synthesizes endless objects with defined geometric changes, enabling the creation of a new generative model without the burden of curating a large-scale dataset.
arXiv Detail & Related papers (2022-07-28T17:59:36Z) - FROTE: Feedback Rule-Driven Oversampling for Editing Models [14.112993602274457]
We focus on user-provided feedback rules as a way to expedite the process of updating ML models.
We introduce the problem of pre-processing training data to edit an ML model in response to feedback rules.
To solve this problem, we propose a novel data augmentation method, the Feedback Rule-Based Oversampling Technique.
arXiv Detail & Related papers (2022-01-04T10:16:13Z) - Editing a classifier by rewriting its prediction rules [133.5026383860842]
We present a methodology for modifying the behavior of a classifier by directly rewriting its prediction rules.
Our approach requires virtually no additional data collection and can be applied to a variety of settings, including adapting a model to new environments.
arXiv Detail & Related papers (2021-12-02T06:40:37Z) - Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
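As a rough illustration of what decomposing pre-trained weights can look like, the sketch below returns candidate latent directions as the top eigenvectors of AᵀA for a layer weight A that projects the latent code into the generator. The function name and the focus on a single affine layer are assumptions made for illustration, not a reproduction of that paper's exact procedure.

```python
import numpy as np

def top_latent_directions(A, k=5):
    """Candidate semantic directions from a pretrained weight A (out_dim x latent_dim).

    One closed-form recipe: the directions that A amplifies most, i.e. the
    eigenvectors of A^T A with the largest eigenvalues. Illustrative sketch only.
    """
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # A^T A is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]            # sort by decreasing eigenvalue
    return eigvecs[:, order[:k]].T               # each row is a unit-norm latent direction
```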
arXiv Detail & Related papers (2020-07-13T18:05:36Z) - Evaluating the Disentanglement of Deep Generative Models through
Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z) - Posterior Control of Blackbox Generation [126.33511630879713]
We consider augmenting neural generation models with discrete control states learned through a structured latent-variable approach.
We find that this method improves over standard benchmarks, while also providing fine-grained control.
arXiv Detail & Related papers (2020-05-10T03:22:45Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed through several examples showing how it can be used in practice to improve the interpretability of RF models.
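To illustrate the rules-by-features metaphor, here is a small hypothetical sketch; the rule contents and the pandas layout are made up for illustration and are not taken from the ExMatrix paper.

```python
import pandas as pd

# Rows are rules, columns are features, and each cell holds that rule's
# predicate on that feature (empty if the rule does not test the feature).
rules = [
    {"petal_length": "> 2.45", "petal_width": "<= 1.75"},
    {"petal_length": "<= 2.45"},
    {"sepal_width": "> 3.0", "petal_width": "> 1.75"},
]
features = sorted({f for rule in rules for f in rule})
matrix = pd.DataFrame(
    [[rule.get(f, "") for f in features] for rule in rules],
    index=[f"rule {i}" for i in range(len(rules))],
    columns=features,
)
print(matrix)
```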
arXiv Detail & Related papers (2020-05-08T21:03:48Z) - Generation of Consistent Sets of Multi-Label Classification Rules with a
Multi-Objective Evolutionary Algorithm [11.25469393912791]
We propose a multi-objective evolutionary algorithm that generates multiple rule-based multi-label classification models.
Our algorithm generates models based on sets (unordered collections) of rules, increasing interpretability.
Also, by employing a conflict-avoidance algorithm during rule creation, every rule within a given model is guaranteed to be consistent with every other rule in the same model.
arXiv Detail & Related papers (2020-03-27T16:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.