Global Image Sentiment Transfer
- URL: http://arxiv.org/abs/2006.11989v1
- Date: Mon, 22 Jun 2020 03:22:25 GMT
- Title: Global Image Sentiment Transfer
- Authors: Jie An, Tianlang Chen, Songyang Zhang, and Jiebo Luo
- Abstract summary: The proposed framework consists of a reference image retrieval step and a global sentiment transfer step.
The retrieved reference images are more content-related than those retrieved by an algorithm based on the perceptual loss.
The proposed sentiment transfer algorithm transfers the sentiment of images while keeping the content structure of the input image intact.
- Score: 90.26415735432576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferring the sentiment of an image is an unexplored research topic in the
area of computer vision. This work proposes a novel framework consisting of a
reference image retrieval step and a global sentiment transfer step to transfer
the sentiment of images according to a given sentiment tag. The proposed image
retrieval algorithm is based on the SSIM index. The reference images retrieved
by the proposed algorithm are more content-related than those retrieved by an
algorithm based on the perceptual loss, which leads to better image sentiment
transfer results. In addition, we propose a global sentiment transfer step,
which employs an optimization algorithm to iteratively transfer the sentiment of
images based on feature maps produced by the DenseNet121 architecture. The
proposed sentiment transfer algorithm can transfer the sentiment of images while
keeping the content structure of the input image intact. Qualitative and
quantitative experiments demonstrate that the proposed sentiment transfer
framework outperforms existing artistic and photorealistic style transfer
algorithms, producing reliable sentiment transfer results with rich and precise
details.
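
The retrieval step ranks candidates by the SSIM index against the input image. Below is a minimal sketch of that ranking, assuming scikit-image is available and that `candidate_paths` points to a pool of images already tagged with the target sentiment; the function and variable names are illustrative, not from the paper.

```python
# Hedged sketch of SSIM-based reference retrieval (names are illustrative).
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.metrics import structural_similarity

def retrieve_reference(input_path, candidate_paths, size=(256, 256)):
    """Return the candidate image most structurally similar to the input."""
    query = resize(rgb2gray(imread(input_path)), size)
    best_path, best_score = None, -1.0
    for path in candidate_paths:
        candidate = resize(rgb2gray(imread(path)), size)
        # data_range=1.0 because rgb2gray/resize return floats in [0, 1].
        score = structural_similarity(query, candidate, data_range=1.0)
        if score > best_score:
            best_path, best_score = path, score
    return best_path
```

Ranking by SSIM favors candidates whose luminance, contrast, and structure match the input, consistent with the claim that the retrieved references are more content-related than those found via perceptual loss.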
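
The transfer step is described only as iterative optimization over DenseNet121 feature maps; the abstract does not specify the losses. The sketch below therefore borrows the standard Gatys-style formulation as a stand-in: Gram-matrix statistics of the reference drive the sentiment term while deep features of the input preserve content. Layer indices, weights, and step counts are assumptions; inputs are ImageNet-normalized (1, 3, H, W) tensors.

```python
# Hedged sketch of optimization-based transfer over DenseNet121 features.
import torch
import torch.nn.functional as F
from torchvision.models import densenet121, DenseNet121_Weights

features = densenet121(weights=DenseNet121_Weights.DEFAULT).features.eval()
for p in features.parameters():
    p.requires_grad_(False)

def feature_maps(x, layers=(4, 6, 8, 10)):
    """Collect activations after each dense block of the feature stack."""
    maps = []
    for i, layer in enumerate(features):
        x = layer(x)
        if i in layers:
            maps.append(x)
    return maps

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer(content_img, reference_img, steps=200, style_weight=1e4):
    """Iteratively nudge a copy of the input toward the reference's
    feature statistics while pinning its deep content features."""
    output = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([output], lr=0.01)
    with torch.no_grad():
        content_feats = feature_maps(content_img)
        ref_grams = [gram(f) for f in feature_maps(reference_img)]
    for _ in range(steps):
        optimizer.zero_grad()
        out_feats = feature_maps(output)
        content_loss = F.mse_loss(out_feats[-1], content_feats[-1])
        style_loss = sum(F.mse_loss(gram(f), g)
                         for f, g in zip(out_feats, ref_grams))
        (content_loss + style_weight * style_loss).backward()
        optimizer.step()
    return output.detach()
```

Anchoring the content loss to a deep layer while matching shallower feature statistics is one common way to change global appearance without disturbing structure, which matches the paper's stated goal.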
Related papers
- Perceptual Image Compression with Cooperative Cross-Modal Side Information [53.356714177243745]
We propose a novel deep image compression method with text-guided side information to achieve a better rate-perception-distortion tradeoff.
Specifically, we employ the CLIP text encoder and an effective Semantic-Spatial Aware block to fuse the text and image features.
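A minimal sketch of extracting per-token text features with the CLIP text encoder, assuming the Hugging Face transformers implementation; the paper's Semantic-Spatial Aware fusion block is not reproduced here, and the checkpoint name is an assumption.

```python
# Hedged sketch: CLIP text features as cross-modal side information.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

def text_side_info(caption):
    """Per-token CLIP embeddings that a fusion block could attend over."""
    tokens = tokenizer(caption, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = text_encoder(**tokens)
    return out.last_hidden_state  # shape (1, seq_len, 512)
```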
arXiv Detail & Related papers (2023-11-23T08:31:11Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
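One common way to realize such a high-frequency branch is a Gaussian high-pass filter (the image minus its blurred version); the paper's exact filter is not stated in this summary, so the sketch below is an assumption.

```python
# Hedged sketch: Gaussian high-pass extraction of high-frequency components.
import torch
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    k = torch.outer(g, g)
    return (k / k.sum()).expand(3, 1, size, size)  # depthwise, one per channel

def high_frequency(img, size=5, sigma=1.0):
    """img: (B, 3, H, W) tensor. Returns the image minus its blur."""
    kernel = gaussian_kernel(size, sigma).to(img)
    low = F.conv2d(img, kernel, padding=size // 2, groups=3)
    return img - low
```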
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Towards Semantic Communications: Deep Learning-Based Image Semantic Coding [42.453963827153856]
We conceive semantic communications for image data, which is much richer in semantics and more bandwidth-sensitive.
We propose a reinforcement learning-based adaptive semantic coding (RL-ASC) approach that encodes images beyond the pixel level.
Experimental results demonstrate that the proposed RL-ASC is noise-robust and can reconstruct visually pleasing and semantically consistent images.
arXiv Detail & Related papers (2022-08-08T12:29:55Z)
- Marginal Contrastive Correspondence for Guided Image Generation [58.0605433671196]
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar from two different domains.
Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains.
We design a Marginal Contrastive Learning Network (MCL-Net) that explores contrastive learning to learn domain-invariant features for realistic exemplar-based image translation.
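The generic mechanism for learning such domain-invariant features is a contrastive (InfoNCE-style) loss over matched feature pairs; the sketch below shows that baseline form, not MCL-Net's specific marginal formulation.

```python
# Hedged sketch: InfoNCE loss over matched cross-domain feature pairs.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """anchor, positive: (N, D) features; row i of `positive` is the match
    for row i of `anchor`, and every other row serves as a negative."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature  # (N, N) similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)
```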
arXiv Detail & Related papers (2022-04-01T13:55:44Z)
- Spatial Content Alignment For Pose Transfer [13.018067816407923]
We propose a novel framework to enhance the content consistency of garment textures and the details of human characteristics.
We first alleviate the spatial misalignment by transferring the edge content to the target pose in advance.
Secondly, we introduce a new Content-Style DeBlk which can progressively synthesize photo-realistic person images.
arXiv Detail & Related papers (2021-03-31T06:10:29Z)
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task.
arXiv Detail & Related papers (2020-08-11T20:11:53Z)
- Image Sentiment Transfer [84.91653085312277]
We introduce an important but still unexplored research task -- image sentiment transfer.
We propose an effective and flexible framework that performs image sentiment transfer at the object level.
For the core object-level sentiment transfer, we propose a novel Sentiment-aware GAN (SentiGAN).
arXiv Detail & Related papers (2020-06-19T19:28:08Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)