Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example
- URL: http://arxiv.org/abs/2401.04362v1
- Date: Tue, 9 Jan 2024 05:22:15 GMT
- Title: Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example
- Authors: Kwan Yun, Youngseo Kim, Kwanggyoon Seo, Chang Wook Seo, Junyong Noh
- Abstract summary: We introduce DiffSketch, a method for generating a variety of stylized sketches from images.
Our approach focuses on selecting representative features from the rich semantics of deep features within a pretrained diffusion model.
- Score: 6.520083224801834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DiffSketch, a method for generating a variety of stylized
sketches from images. Our approach focuses on selecting representative features
from the rich semantics of deep features within a pretrained diffusion model.
This novel sketch generation method can be trained with one manual drawing.
Furthermore, efficient sketch extraction is ensured by distilling a trained
generator into a streamlined extractor. We select denoising diffusion features
through analysis and integrate these selected features with VAE features to
produce sketches. Additionally, we propose a sampling scheme for training
models using a conditional generative approach. Through a series of
comparisons, we verify that distilled DiffSketch not only outperforms existing
state-of-the-art sketch extraction methods but also surpasses diffusion-based
stylization methods in the task of extracting sketches.
Related papers
- Training-free Diffusion Model Alignment with Sampling Demons [15.400553977713914]
We propose an optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining.
Our approach works by controlling noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through optimization.
To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models.
arXiv Detail & Related papers (2024-10-08T07:33:49Z)
- Semi-supervised reference-based sketch extraction using a contrastive learning framework [6.20476217797034]
We propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch using unpaired training data.
Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2024-07-19T04:51:34Z)
- Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens.
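The core idea above, an independent noise level per token rather than one shared level for the whole sequence, can be illustrated with a minimal corruption step. This is a hedged sketch using a standard variance-preserving parameterization; the denoiser network and training loop are omitted, and all array shapes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sequence": 6 tokens, each a 4-dimensional vector.
tokens = rng.standard_normal((6, 4))

# Diffusion-Forcing-style corruption: draw an INDEPENDENT signal level
# alpha_i in (0, 1] for every token, instead of one shared level for
# the whole sequence (variance-preserving parameterization).
alphas = rng.uniform(0.01, 1.0, size=(6, 1))
noise = rng.standard_normal((6, 4))
noised_tokens = np.sqrt(alphas) * tokens + np.sqrt(1.0 - alphas) * noise

# A denoiser trained on such inputs can, at sampling time, keep past
# tokens nearly clean while future tokens remain heavily noised.
```

A model trained on this corruption sees every mixture of clean and noisy positions, which is what lets it be used as a causal next-token generator.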
arXiv Detail & Related papers (2024-07-01T15:43:25Z)
- Rethinking Score Distillation as a Bridge Between Image Distributions [97.27476302077545]
We show that our method seeks to transport corrupted images (source) to the natural image distribution (target).
Our method can be easily applied across many domains, matching or beating the performance of specialized methods.
We demonstrate its utility in text-to-2D, text-based NeRF optimization, translating paintings to real images, optical illusion generation, and 3D sketch-to-real.
arXiv Detail & Related papers (2024-06-13T17:59:58Z)
- Multistep Distillation of Diffusion Models via Moment Matching [29.235113968156433]
We present a new method for making diffusion models faster to sample.
The method distills many-step diffusion models into few-step models by matching conditional expectations of the clean data.
We obtain new state-of-the-art results on the ImageNet dataset.
arXiv Detail & Related papers (2024-06-06T14:20:21Z)
- Plug-and-Play Diffusion Distillation [14.359953671470242]
We propose a new distillation approach for guided diffusion models.
An external lightweight guide model is trained while the original text-to-image model remains frozen.
We show that our method reduces the inference cost of classifier-free guided latent-space diffusion models by almost half.
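For context on why the saving is roughly half: standard classifier-free guidance evaluates the diffusion model twice per denoising step, once with and once without the condition, then combines the two predictions. The sketch below shows only that standard combination rule, with toy arrays standing in for the two forward passes; it is not the paper's guide-model architecture.

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, w):
    """Standard classifier-free guidance: extrapolate the conditional
    noise prediction away from the unconditional one by scale w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy noise predictions standing in for two U-Net forward passes
# (the doubled per-step cost that a distilled guided model avoids).
eps_cond = np.array([1.0, 2.0])
eps_uncond = np.array([0.5, 1.0])

guided = classifier_free_guidance(eps_uncond, eps_cond, w=7.5)
# w = 1 recovers the plain conditional prediction.
```

A model distilled to emit the guided prediction directly needs only one forward pass per step, hence the near-halving of inference cost.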
arXiv Detail & Related papers (2024-06-04T04:22:47Z)
- ReNoise: Real Image Inversion Through Iterative Noising [62.96073631599749]
We introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations.
We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models.
arXiv Detail & Related papers (2024-03-21T17:52:08Z)
- DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models [33.6615688030998]
DiffSketcher is an innovative algorithm that creates vectorized free-hand sketches from natural language input.
Our experiments show that DiffSketcher achieves greater quality than prior work.
arXiv Detail & Related papers (2023-06-26T13:30:38Z)
- Sketch-Guided Text-to-Image Diffusion Models [57.12095262189362]
We introduce a universal approach to guide a pretrained text-to-image diffusion model.
Our method does not require training a dedicated model or a specialized encoder for the task.
We take a particular focus on the sketch-to-image translation task, revealing a robust and expressive way to generate images.
arXiv Detail & Related papers (2022-11-24T18:45:32Z)
- DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
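The classical Wiener deconvolution that the network builds on has a closed form in the Fourier domain: X_hat = conj(K) * Y / (|K|^2 + 1/SNR). Below is a minimal pixel-space NumPy sketch of that classical step only, assuming circular boundary conditions and a known blur kernel; the paper's contribution of applying it in a learned deep-feature space is not reproduced here.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr):
    """Classical Wiener deconvolution in the Fourier domain:
    X_hat = conj(K) / (|K|^2 + 1/snr) * Y."""
    K = np.fft.fft2(kernel, s=blurred.shape)  # zero-pad kernel to image size
    Y = np.fft.fft2(blurred)
    X_hat = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr) * Y
    return np.real(np.fft.ifft2(X_hat))

# Toy example: circularly blur a random image with a 3x3 box kernel,
# then invert the blur with a high assumed SNR.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image)
                               * np.fft.fft2(kernel, s=image.shape)))
restored = wiener_deconvolve(blurred, kernel, snr=1e8)
```

The 1/SNR term regularizes frequencies where the kernel response is near zero; DWDN replaces the pixel inputs with learned deep features so that this inversion is less sensitive to noise and model mismatch.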
arXiv Detail & Related papers (2021-03-18T00:38:11Z)
- Distributed Sketching Methods for Privacy Preserving Regression [54.51566432934556]
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches.
We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
arXiv Detail & Related papers (2020-02-16T08:35:48Z)
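As a generic illustration of the core idea in the entry above, randomized sketching compresses a tall least-squares problem before solving it. The snippet below is a minimal single-machine sketch-and-solve with a Gaussian sketch matrix; the paper's distributed averaging, privacy guarantees, and straggler-resilience mechanisms are not modeled here, and all problem sizes are toy assumptions.

```python
import numpy as np

def sketch_and_solve(A, b, sketch_rows, rng):
    """Compress an n-row least-squares problem to `sketch_rows` rows
    with a Gaussian sketch S, then solve min ||S A x - S b||_2."""
    n = A.shape[0]
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_hat

rng = np.random.default_rng(0)
n, d = 2000, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Each worker in a distributed setting could solve its own sketched
# problem from a private random sketch; averaging those per-worker
# estimates is the setting the paper analyzes.
x_sketch = sketch_and_solve(A, b, sketch_rows=200, rng=rng)
```

Because each worker only ever sees S A and S b rather than the raw data, the sketch itself doubles as the privacy mechanism, which is why the approximation guarantees for sketched solutions matter.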
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.