Taming Normalizing Flows
- URL: http://arxiv.org/abs/2211.16488v2
- Date: Mon, 3 Apr 2023 17:58:21 GMT
- Title: Taming Normalizing Flows
- Authors: Shimon Malnick, Shai Avidan, Ohad Fried
- Abstract summary: We propose an algorithm for taming Normalizing Flow models.
We focus on Normalizing Flows because they can calculate the exact likelihood of generating a given image.
Taming is achieved with a fast fine-tuning process without retraining the model from scratch, achieving the goal in a matter of minutes.
- Score: 22.15640952962115
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose an algorithm for taming Normalizing Flow models - changing the
probability that the model will produce a specific image or image category. We
focus on Normalizing Flows because they can calculate the exact likelihood of
generating a given image. We demonstrate taming using models
that generate human faces, a subdomain with many interesting privacy and bias
considerations. Our method can be used in the context of privacy, e.g.,
removing a specific person from the output of a model, and also in the context
of debiasing by forcing a model to output specific image categories according
to a given target distribution. Taming is achieved with a fast fine-tuning
process without retraining the model from scratch, achieving the goal in a
matter of minutes. We evaluate our method qualitatively and quantitatively,
showing that the generation quality remains intact, while the desired changes
are applied.
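To make the mechanism in the abstract concrete, below is a minimal, hypothetical sketch of the taming idea in PyTorch. It assumes a pretrained flow module with a `log_prob(x)` method returning the exact log-likelihood that invertible flows provide via the change-of-variables formula; the loss (suppressing the likelihood of a "forget" image while anchoring reference likelihoods to a frozen copy of the original model) is an illustrative reconstruction, not the authors' published objective.

```python
import copy
import torch

# Hypothetical pretrained normalizing flow (an nn.Module). We assume only that
# it exposes log_prob(x), the exact log-likelihood
#   log p(x) = log p_z(f(x)) + log|det J_f(x)|
# given by the change-of-variables formula for invertible flows.
def taming_loss(flow, frozen_flow, x_forget, x_keep, lam=1.0):
    ll_forget = flow.log_prob(x_forget).mean()   # likelihood we want to suppress
    ll_keep = flow.log_prob(x_keep).mean()       # likelihood we want to preserve
    with torch.no_grad():                        # original model as an anchor
        ll_keep_orig = frozen_flow.log_prob(x_keep).mean()
    # Push down p(x_forget); keep p(x_keep) close to the original model's value.
    return ll_forget + lam * (ll_keep - ll_keep_orig).pow(2)

def tame(flow, x_forget, x_keep, steps=200, lr=1e-4, lam=1.0):
    frozen_flow = copy.deepcopy(flow).eval()     # frozen reference copy
    for p in frozen_flow.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(steps):                       # fast fine-tuning, no retraining
        opt.zero_grad()
        loss = taming_loss(flow, frozen_flow, x_forget, x_keep, lam)
        loss.backward()
        opt.step()
    return flow
```

Because a flow's likelihood is exact rather than a lower bound (as in VAEs), an objective like this can target the probability of a specific image directly, which is what makes a minutes-scale fine-tuning process plausible.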
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z)
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent-inversion-based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish images generated by the inspected model from other images with high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- Alteration-free and Model-agnostic Origin Attribution of Generated Images [28.34437698362946]
Concerns have emerged regarding potential misuse of image generation models.
It is necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model.
arXiv Detail & Related papers (2023-05-29T01:35:37Z)
- Decision-based iterative fragile watermarking for model integrity verification [33.42076236847454]
Foundation models are typically hosted on cloud servers to meet the high demand for their services.
This exposes them to security risks, as attackers can modify the models after they are uploaded to the cloud or transferred from a local system.
We propose an iterative decision-based fragile watermarking algorithm that transforms normal training samples into fragile samples that are sensitive to model changes.
arXiv Detail & Related papers (2023-05-13T10:36:11Z)
- Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models [60.63556257324894]
A key desired property of image generative models is the ability to disentangle different attributes.
We propose a simple, lightweight image-editing algorithm in which the mixing weights of two text embeddings are optimized for style matching and content preservation.
Experiments show that the proposed method can modify a wide range of attributes, outperforming diffusion-model-based image-editing algorithms.
arXiv Detail & Related papers (2022-12-16T19:58:52Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- One-Shot Domain Adaptation For Face Generation [34.882820002799626]
We propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example.
We develop an iterative optimization scheme that rapidly adapts the model's weights to shift the high-level distribution of its outputs toward that of the target.
To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model.
arXiv Detail & Related papers (2020-03-28T18:50:13Z)
- Regularized Autoencoders via Relaxed Injective Probability Flow [35.39933775720789]
Invertible flow-based generative models are an effective method for learning to generate samples, while allowing for tractable likelihood computation and inference.
We propose a generative model based on probability flows that does away with the bijectivity requirement on the model and only assumes injectivity (a toy numerical illustration follows this list).
arXiv Detail & Related papers (2020-02-20T18:22:46Z)
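As a toy illustration of the injective relaxation described in the last entry, the sketch below uses a linear injective map g(z) = Wz + b from R^k into R^d (k < d). On the image of g, the pushforward of a standard Gaussian has the exact density log p(x) = log N(z; 0, I) - (1/2) log det(WᵀW), with z recovered via the pseudo-inverse. This is our own minimal instance of the injective change-of-variables formula, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 2, 5
W = rng.normal(size=(d, k))           # full column rank => g is injective
b = rng.normal(size=d)

def sample(n):
    z = rng.normal(size=(n, k))       # base samples z ~ N(0, I_k)
    return z @ W.T + b                # x = g(z), lies on a k-dim affine subspace

def log_prob(x):
    # Recover z with the pseudo-inverse (exact for points in the image of g).
    z = (x - b) @ np.linalg.pinv(W).T
    # Injective change of variables: the volume factor is sqrt(det(W^T W)).
    log_base = -0.5 * (z ** 2).sum(axis=-1) - 0.5 * k * np.log(2 * np.pi)
    _, logdet = np.linalg.slogdet(W.T @ W)
    return log_base - 0.5 * logdet

x = sample(3)
print(log_prob(x))                    # exact densities on the learned manifold
```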
This list is automatically generated from the titles and abstracts of the papers in this site.