Robust Sound-Guided Image Manipulation
- URL: http://arxiv.org/abs/2208.14114v3
- Date: Tue, 25 Apr 2023 01:31:20 GMT
- Title: Robust Sound-Guided Image Manipulation
- Authors: Seung Hyun Lee, Gyeongrok Oh, Wonmin Byeon, Sang Ho Yoon, Jinkyu Kim,
Sangpil Kim
- Abstract summary: We propose a novel approach that first extends the image-text joint embedding space with sound.
Our experiments show that our sound-guided image manipulation approach produces semantically and visually more plausible manipulation results.
- Score: 17.672008998994816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent successes suggest that an image can be manipulated by a text prompt,
e.g., a landscape scene on a sunny day is manipulated into the same scene on a
rainy day driven by a text input "raining". These approaches often utilize a
StyleCLIP-based image generator, which leverages a multi-modal (text and image)
embedding space. However, we observe that such text inputs often fall short of
providing and synthesizing rich semantic cues, e.g., differentiating heavy rain
from rain with thunderstorms. To address this issue,
we advocate leveraging an additional modality, sound, which has notable
advantages in image manipulation as it can convey more diverse semantic cues
(vivid emotions or dynamic expressions of the natural world) than texts. In
this paper, we propose a novel approach that first extends the image-text joint
embedding space with sound and applies a direct latent optimization method to
manipulate a given image based on audio input, e.g., the sound of rain. Our
extensive experiments show that our sound-guided image manipulation approach
produces semantically and visually more plausible manipulation results than the
state-of-the-art text and sound-guided image manipulation methods, which are
further confirmed by our human evaluations. Our downstream task evaluations
also show that our learned image-text-sound joint embedding space effectively
encodes sound inputs.
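As a concrete illustration of the direct latent optimization described above, the following is a minimal sketch of how a StyleGAN latent code could be driven toward a sound embedding in a CLIP-like joint space. The generator and the two encoders (generator, clip_image_encoder, audio_encoder) are hypothetical stand-ins rather than the authors' released code; the loss follows the standard StyleCLIP-style recipe of a cosine similarity term plus an L2 regularizer toward the source latent.

```python
import torch
import torch.nn.functional as F

def sound_guided_manipulation(generator, clip_image_encoder, audio_encoder,
                              w_src, audio, steps=200, lr=0.05, lambda_l2=0.008):
    """Optimize a StyleGAN latent code so the generated image matches a sound.

    generator          : maps a W+ latent code to an image (hypothetical)
    clip_image_encoder : maps images into the joint embedding space (hypothetical)
    audio_encoder      : maps audio into the same joint space (hypothetical)
    w_src              : latent code of the source image, e.g. shape [1, 18, 512]
    audio              : input waveform or spectrogram tensor
    """
    with torch.no_grad():
        target = F.normalize(audio_encoder(audio), dim=-1)  # sound embedding

    w = w_src.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        image = generator(w)
        img_emb = F.normalize(clip_image_encoder(image), dim=-1)

        # Pull the image embedding toward the sound embedding ...
        sim_loss = 1.0 - (img_emb * target).sum(dim=-1).mean()
        # ... while keeping the latent close to the source to preserve content.
        reg_loss = lambda_l2 * ((w - w_src) ** 2).mean()

        loss = sim_loss + reg_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return w.detach()
```

The regularizer plays the role of StyleCLIP's latent-distance term: without it, the optimization is free to drift to an unrelated image that happens to match the sound, rather than editing the given one.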
Related papers
- Seek for Incantations: Towards Accurate Text-to-Image Diffusion Synthesis through Prompt Engineering [118.53208190209517]
We propose a framework to learn the proper textual descriptions for diffusion models through prompt learning.
Our method can effectively learn the prompts to improve the matches between the input text and the generated images.
arXiv Detail & Related papers (2024-01-12T03:46:29Z)
- Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
With fewer text instances, our produced text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z)
- Align, Adapt and Inject: Sound-guided Unified Image Generation [50.34667929051005]
We propose a unified framework 'Align, Adapt, and Inject' (AAI) for sound-guided image generation, editing, and stylization.
Our method adapts input sound into a sound token, like an ordinary word, which can plug and play with existing Text-to-Image (T2I) models.
Our proposed AAI outperforms other text and sound-guided state-of-the-art methods.
arXiv Detail & Related papers (2023-06-20T12:50:49Z)
- Unified Multi-Modal Latent Diffusion for Joint Subject and Text Conditional Image Generation [63.061871048769596]
We present a novel Unified Multi-Modal Latent Diffusion (UMM-Diffusion) which takes joint texts and images containing specified subjects as input sequences.
To be more specific, both input texts and images are encoded into one unified multi-modal latent space.
Our method is able to generate high-quality images with complex semantics from both aspects of input texts and images.
arXiv Detail & Related papers (2023-03-16T13:50:20Z)
- Robot Synesthesia: A Sound and Emotion Guided AI Painter [13.2441524021269]
We propose an approach for using sound and speech to guide a robotic painting process, known here as robot synesthesia.
For general sound, we encode the simulated paintings and input sounds into the same latent space. For speech, we decouple it into transcribed text and tone: the text controls the content of the painting, while emotions estimated from the tone guide its mood.
arXiv Detail & Related papers (2023-02-09T18:53:44Z)
- CLIP-PAE: Projection-Augmentation Embedding to Extract Relevant Features for a Disentangled, Interpretable, and Controllable Text-Guided Face Manipulation [4.078926358349661]
Contrastive Language-Image Pre-Training (CLIP) bridges images and text by embedding them into a joint latent space.
Due to the discrepancy between image and text embeddings in the joint space, using text embeddings as the optimization target often introduces undesired artifacts in the resulting images.
We introduce CLIP Projection-Augmentation Embedding (PAE) as an optimization target to improve the performance of text-guided image manipulation.
arXiv Detail & Related papers (2022-10-08T05:12:25Z)
- Sound-Guided Semantic Image Manipulation [19.01823634838526]
We propose a framework that directly encodes sound into the multi-modal (image-text) embedding space and manipulates an image from the space.
Our method can mix different modalities, i.e., text and audio, which enriches the variety of image modifications.
Experiments on zero-shot audio classification and semantic-level image classification show that the proposed model outperforms other text and sound-guided state-of-the-art methods (see the classification sketch after this list).
arXiv Detail & Related papers (2021-11-30T13:30:12Z)
- StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery [71.1862388442953]
We develop a text-based interface for StyleGAN image manipulation.
We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt.
Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation.
arXiv Detail & Related papers (2021-03-31T17:51:25Z)
- Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions [66.82547612097194]
We propose a novel algorithm, named Open-Edit, which is the first attempt at open-domain image manipulation with open-vocabulary instructions.
Our approach takes advantage of the unified visual-semantic embedding space pretrained on a general image-caption dataset.
We show promising results in manipulating open-vocabulary color, texture, and high-level attributes for various scenarios of open-domain images.
arXiv Detail & Related papers (2020-08-04T14:15:40Z)
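The zero-shot audio classification evaluation mentioned in the abstract (and in the Sound-Guided Semantic Image Manipulation entry above) reduces to nearest-neighbor retrieval in the learned joint space: embed the audio clip, embed each candidate class name as text, and pick the closest label. The sketch below assumes hypothetical audio_encoder and clip_text_encoder functions that share that embedding space, and the prompt template is purely illustrative.

```python
import torch
import torch.nn.functional as F

def zero_shot_audio_classification(audio, class_names, audio_encoder, clip_text_encoder):
    """Classify a sound by cosine similarity to text labels in the joint embedding space."""
    audio_emb = F.normalize(audio_encoder(audio), dim=-1)          # shape [1, d]
    # Each text embedding is assumed to be a 1-D vector of size d.
    text_embs = F.normalize(
        torch.stack([clip_text_encoder(f"the sound of {name}") for name in class_names]),
        dim=-1,
    )                                                              # shape [k, d]
    scores = audio_emb @ text_embs.T                               # shape [1, k]
    return class_names[scores.argmax().item()]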