Regularising Inverse Problems with Generative Machine Learning Models
- URL: http://arxiv.org/abs/2107.11191v1
- Date: Thu, 22 Jul 2021 15:47:36 GMT
- Title: Regularising Inverse Problems with Generative Machine Learning Models
- Authors: Margaret Duff, Neill D. F. Campbell, Matthias J. Ehrhardt
- Abstract summary: We consider the use of generative models in a variational regularisation approach to inverse problems.
The success of generative regularisers depends on the quality of the generative model.
We show that the success of solutions restricted to lie exactly in the range of the generator depends heavily on the quality of the generative model.
- Score: 9.971351129098336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network approaches to inverse imaging problems have produced
impressive results in the last few years. In this paper, we consider the use of
generative models in a variational regularisation approach to inverse problems.
The considered regularisers penalise images that are far from the range of a
generative model that has learned to produce images similar to a training
dataset. We name this family "generative regularisers". The success of
generative regularisers depends on the quality of the generative model and so
we propose a set of desired criteria to assess models and guide future
research. In our numerical experiments, we evaluate three common generative
models, autoencoders, variational autoencoders and generative adversarial
networks, against our desired criteria. We also test three different generative
regularisers on the inverse problems of deblurring, deconvolution, and
tomography. We show that the success of solutions restricted to lie exactly in
the range of the generator is highly dependent on the ability of the generative
model but that allowing small deviations from the range of the generator
produces more consistent results.
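To make the formulation concrete, the following is a minimal sketch (not the authors' code) of the hard-constraint generative regulariser min_z ||A G(z) - y||^2 + mu ||z||^2, in which the reconstruction is forced to lie exactly in the range of a fixed generator G. The decoder, forward operator A, dimensions, and hyperparameters are illustrative assumptions; in practice G would be a trained (variational) autoencoder decoder or GAN generator.

```python
import torch

torch.manual_seed(0)

latent_dim, image_dim = 8, 64

# Stand-in decoder G: untrained, for illustration only. In the paper's setting
# this would be a trained autoencoder/VAE decoder or GAN generator.
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, image_dim),
)
for p in G.parameters():
    p.requires_grad_(False)  # the generator stays fixed; we optimise only z

# Toy linear forward operator A (standing in for, e.g., a blur or tomography
# operator) and simulated noisy measurements y = A x + noise.
A = torch.randn(image_dim // 2, image_dim) / image_dim ** 0.5
x_true = G(torch.randn(latent_dim))          # ground truth in the generator's range
y = A @ x_true + 0.01 * torch.randn(image_dim // 2)

# Gradient descent on the latent variable z.
mu = 1e-3                                    # small latent penalty (illustrative)
z = torch.zeros(latent_dim, requires_grad=True)
optimiser = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    optimiser.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum() + mu * (z ** 2).sum()
    loss.backward()
    optimiser.step()

x_hat = G(z.detach())                        # reconstruction, by construction in range(G)
print(f"data misfit ||A x_hat - y||^2: {((A @ x_hat - y) ** 2).sum().item():.4f}")
```

The "small deviations" variant referred to in the abstract would instead optimise jointly over an image x and a latent z, relaxing the hard constraint x = G(z) to a penalty such as lambda ||x - G(z)||^2.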
Related papers
- Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens [53.99177152562075]
Scaling up autoregressive models in vision has not proven as beneficial as in large language models.
We focus on two critical factors: whether models use discrete or continuous tokens, and whether tokens are generated in a random or fixed order using BERT- or GPT-like transformer architectures.
Our results show that while all models scale effectively in terms of validation loss, their evaluation performance -- measured by FID, GenEval score, and visual quality -- follows different trends.
arXiv Detail & Related papers (2024-10-17T17:59:59Z)
- Avoiding Generative Model Writer's Block With Embedding Nudging [8.3196702956302]
We focus on latent diffusion image generative models and how one can prevent them from generating particular images while still generating similar images, with limited overhead.
Our method successfully prevents the generation of memorized training images while maintaining comparable image quality and relevance to the unmodified model.
arXiv Detail & Related papers (2024-08-28T00:07:51Z)
- Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z)
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish images generated by the inspected model from other images with high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z)
- Non-autoregressive Generative Models for Reranking Recommendation [9.854541524740549]
In a recommendation system, reranking plays a crucial role by modeling the intra-list correlations among items.
We propose a Non-AutoRegressive generative model for reranking Recommendation (NAR4Rec) designed to enhance efficiency and effectiveness.
NAR4Rec has been fully deployed in Kuaishou, a popular video app with over 300 million daily active users.
arXiv Detail & Related papers (2024-02-10T03:21:13Z)
- Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z)
- Controlling the Output of a Generative Model by Latent Feature Vector Shifting [0.0]
We present a novel latent vector shifting method for controlled modification of generated images.
In our approach we use a pre-trained StyleGAN3 model that generates images of realistic human faces.
Our latent feature shifter is a neural network trained to shift the latent vectors of a generative model in a specified feature direction (see the sketch after this entry).
arXiv Detail & Related papers (2023-11-15T10:42:06Z)
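As a hypothetical illustration of the latent-shifting idea in the entry above (not the paper's code), the snippet below nudges a latent code along a learned feature direction before decoding; the shifter network, latent dimensionality, and shift strength are assumptions for the sketch.

```python
import torch

torch.manual_seed(0)

latent_dim = 512  # e.g., a StyleGAN-style latent size (assumption)

# Stand-in "latent feature shifter": the paper trains such a network to move
# latents toward a target feature; an untrained linear layer shows the interface.
shifter = torch.nn.Linear(latent_dim, latent_dim)

z = torch.randn(latent_dim)             # latent code of some generated image
alpha = 0.8                             # shift strength (illustrative)
with torch.no_grad():
    z_shifted = z + alpha * shifter(z)  # nudge z in the feature direction
# image = pretrained_generator(z_shifted)  # decode with the fixed generator (not defined here)
```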
- Alteration-free and Model-agnostic Origin Attribution of Generated Images [28.34437698362946]
Concerns have emerged regarding potential misuse of image generation models.
It is necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model.
arXiv Detail & Related papers (2023-05-29T01:35:37Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.