Emotion Transfer Using Vector-Valued Infinite Task Learning
- URL: http://arxiv.org/abs/2102.05075v1
- Date: Tue, 9 Feb 2021 19:05:56 GMT
- Title: Emotion Transfer Using Vector-Valued Infinite Task Learning
- Authors: Alex Lambert, Sanjeel Parekh, Zoltán Szabó, Florence d'Alché-Buc
- Abstract summary: We present a novel style transfer framework building upon infinite task learning and vector-valued reproducing kernel Hilbert spaces.
We instantiate the idea in emotion transfer, where the goal is to transform facial images to different target emotions.
- Score: 2.588412672658578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style transfer is a significant problem in machine learning with numerous successful applications. In this work, we present a novel style transfer framework building upon infinite task learning and vector-valued reproducing kernel Hilbert spaces. We instantiate the idea in emotion transfer, where the goal is to transform facial images to different target emotions. The proposed approach provides a principled way to gain explicit control over the continuous style space. We demonstrate the efficiency of the technique on popular facial emotion benchmarks, achieving low reconstruction cost and high emotion classification accuracy.
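The distinctive ingredient is the continuous handle on style: rather than training one model per target emotion, infinite task learning fits a single function h(x, t) over a continuum of task parameters t inside a vector-valued RKHS. Below is a minimal, self-contained sketch of that idea using kernel ridge regression with a decomposable operator-valued kernel; the synthetic data, Gaussian kernels, bandwidths, and landmark-style features are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the authors' code) of learning a continuously
# indexed family of transformations with a decomposable operator-valued
# kernel: K((x,t),(x',t')) = k_X(x,x') * k_T(t,t') * I.
# x and y are generic feature vectors (e.g. facial landmarks);
# t in [0,1] is a continuous emotion-intensity parameter.
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Synthetic training set: pairs (x_i, t_j) -> y_ij.
n_x, n_t, d = 40, 8, 10
X = rng.normal(size=(n_x, d))              # source features
T = np.linspace(0.0, 1.0, n_t)[:, None]    # sampled task parameters
w = rng.normal(size=d)                     # toy "style" direction
Y = X[:, None, :] + T[None, :, :] * w      # shape (n_x, n_t, d)

# A decomposable kernel makes the Gram matrix a Kronecker product, so
# ridge regression solves (Kx (x) Kt + lam*I) vec(C) = vec(Y).
Kx = rbf(X, X, gamma=0.1)
Kt = rbf(T, T, gamma=5.0)
lam = 1e-3
G = np.kron(Kx, Kt) + lam * np.eye(n_x * n_t)
C = np.linalg.solve(G, Y.reshape(n_x * n_t, d))   # dual coefficients

def h(x_new, t_new):
    """Predict the transformed features of x_new at emotion level t_new."""
    kx = rbf(x_new[None, :], X, gamma=0.1)        # (1, n_x)
    kt = rbf(np.array([[t_new]]), T, gamma=5.0)   # (1, n_t)
    return (np.kron(kx, kt) @ C).ravel()

# Because t is a continuous argument, we can query intensities that were
# never seen in training.
x0 = rng.normal(size=d)
for t in (0.0, 0.37, 1.0):
    print(t, np.round(h(x0, t)[:3], 3))
```

The point of the sketch is only that the task parameter t enters as a continuous argument of the learned function, so intermediate emotion intensities come for free; this is the "explicit control over the continuous style space" the abstract claims.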
Related papers
- Re-ENACT: Reinforcement Learning for Emotional Speech Generation using Actor-Critic Strategy [8.527959937101826]
We train a neural network to produce the variational posterior of a collection of Bernoulli random variables.
We modify the prosodic features of a masked segment to increase the score of the target emotion.
Our experiments demonstrate that this framework changes the perceived emotion of a given speech utterance to the target.
arXiv Detail & Related papers (2024-08-04T00:47:29Z)
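To make the mechanism concrete: a posterior over per-segment Bernoulli masks decides which prosodic segments get pushed toward the target emotion, and an actor-critic loop (omitted here) scores the result with an emotion classifier. A dependency-light sketch follows; the Gumbel-sigmoid relaxation, the fixed logits, and the specific pitch/energy shifts are assumptions made for illustration, not details from the paper.

```python
# Illustrative sketch (not the Re-ENACT code) of the masking idea: a
# learned per-segment Bernoulli posterior selects which prosodic
# segments to modify toward a target emotion.
import numpy as np

rng = np.random.default_rng(0)

def gumbel_sigmoid(logits, temperature=0.5):
    """Relaxed Bernoulli sample in [0, 1] (binary-concrete style)."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=logits.shape)
    noise = np.log(u) - np.log1p(-u)          # Logistic(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))

n_segments = 12
f0 = rng.uniform(100, 200, size=n_segments)   # per-segment pitch (Hz)
energy = rng.uniform(0.2, 1.0, size=n_segments)

# In Re-ENACT these logits come from a trained network; fixed here.
logits = rng.normal(size=n_segments)
mask = gumbel_sigmoid(logits)                 # soft segment selection

# Modify prosody only where the mask fires, e.g. push pitch/energy
# toward a (hypothetical) "happy" target profile.
f0_mod = f0 * (1 + 0.3 * mask)
energy_mod = energy * (1 + 0.5 * mask)
print(np.round(mask, 2))
```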
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model that combines affordance grounding with self-explanation in a simple but effective manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents [58.807802111818994]
We propose AnySkill, a novel hierarchical method that learns physically plausible interactions following open-vocabulary instructions.
Our approach begins by developing a set of atomic actions with a low-level controller trained via imitation learning.
An important feature of our method is the use of image-based rewards for the high-level policy, which allows the agent to learn interactions with objects without manual reward engineering.
arXiv Detail & Related papers (2024-03-19T15:41:39Z)
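The image-based reward is the part worth sketching: the high-level policy is scored by how well a rendered frame matches the instruction under a pretrained vision-language embedding, so no per-task reward needs to be engineered. The abstract does not name the encoders, so random vectors stand in for frame and text embeddings below.

```python
# Illustrative sketch of an image-based reward: score each rendered
# frame by embedding similarity to the open-vocabulary instruction.
# Real embeddings would come from a pretrained vision-language model
# (CLIP-like); random vectors stand in here.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def image_reward(frame_emb, instr_emb):
    """Reward for the high-level policy: no manual reward engineering."""
    return cosine(frame_emb, instr_emb)

rng = np.random.default_rng(0)
instr = rng.normal(size=512)            # embedding of e.g. "open the door"
for step in range(3):
    frame = rng.normal(size=512)        # embedding of the rendered frame
    print(step, round(image_reward(frame, instr), 4))
```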
- Weakly-Supervised Emotion Transition Learning for Diverse 3D Co-speech Gesture Generation [43.04371187071256]
We present a novel method to generate vivid and emotional co-speech gestures for 3D avatars.
We use ChatGPT-4 and an audio inpainting approach to construct high-fidelity emotion-transition human speech.
Our method outperforms the state-of-the-art models constructed by adapting single emotion-conditioned counterparts.
arXiv Detail & Related papers (2023-11-29T11:10:40Z)
- Transductive Learning for Unsupervised Text Style Transfer [60.65782243927698]
Unsupervised style transfer models are mainly based on an inductive learning approach.
We propose a novel transductive learning approach built on a retrieval-based, context-aware style representation.
arXiv Detail & Related papers (2021-09-16T08:57:20Z)
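The retrieval idea fits in a few lines: instead of one corpus-level style code, the model conditions on target-style examples retrieved for the specific input, which is what makes the representation transductive and context-aware. The embedder, the softmax pooling, and k below are assumptions for the sketch, not the paper's exact components.

```python
# Minimal sketch of a retrieval-based style representation: build a
# context-aware style vector from target-style examples retrieved for
# *this* input, rather than a single global style code.
import numpy as np

rng = np.random.default_rng(0)
target_corpus = rng.normal(size=(1000, 64))  # embeddings of target-style text

def retrieve_style(x_emb, corpus, k=8):
    """Return a context-aware style vector for input embedding x_emb."""
    sims = corpus @ x_emb / (np.linalg.norm(corpus, axis=1)
                             * np.linalg.norm(x_emb) + 1e-8)
    top = np.argsort(-sims)[:k]              # k most similar target sentences
    weights = np.exp(sims[top]) / np.exp(sims[top]).sum()  # softmax pooling
    return weights @ corpus[top]

style_vec = retrieve_style(rng.normal(size=64), target_corpus)
print(style_vec.shape)   # (64,)
```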
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches either fail to capture all modes or tend to average the modes of the demonstrations, and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
- Learning Unseen Emotions from Gestures via Semantically-Conditioned Zero-Shot Perception with Adversarial Autoencoders [25.774235606472875]
We introduce an adversarial, autoencoder-based representation learning method that correlates 3D motion-captured gesture sequences with vectorized representations of natural-language perceived-emotion terms.
We train our method using a combination of gestures annotated with known emotion terms and gestures not annotated with any emotions.
arXiv Detail & Related papers (2020-09-18T15:59:44Z)
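The zero-shot step is simple to illustrate: gestures are embedded into the same space as word vectors of emotion terms, so an unseen emotion can be recognized as the nearest word vector. The linear encoder, mean pooling, and random "word vectors" below are stand-ins; the paper's actual model is an adversarial autoencoder.

```python
# Sketch of the zero-shot idea: embed gestures into the space of word
# vectors for emotion terms, then classify unseen emotions by nearest
# word vector. Encoder and word vectors are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical word vectors for emotion terms (e.g. from GloVe);
# "proud" plays the role of an emotion unseen during training.
emotions = ["happy", "sad", "angry", "proud"]
word_vecs = {e: rng.normal(size=300) for e in emotions}

W = rng.normal(size=(300, 96)) * 0.1           # toy gesture encoder

def encode_gesture(seq):
    """Map a (frames, joints*3) motion-capture sequence to word-vector space."""
    return W @ seq.mean(axis=0)                # temporal pooling + projection

def zero_shot_emotion(seq):
    z = encode_gesture(seq)
    scores = {e: z @ v / (np.linalg.norm(z) * np.linalg.norm(v))
              for e, v in word_vecs.items()}
    return max(scores, key=scores.get)

print(zero_shot_emotion(rng.normal(size=(120, 96))))
```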
- Generative Adversarial Stacked Autoencoders for Facial Pose Normalization and Emotion Recognition [4.620526905329234]
We propose a Generative Adversarial Stacked Autoencoder that learns to map facial expressions to a pose-normalized representation.
We report state-of-the-art performance on several facial emotion recognition corpora, including one collected in the wild.
arXiv Detail & Related papers (2020-07-19T21:47:16Z)
- Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
The current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z)
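The constraint is easy to picture: translation is restricted to a sparse edit of the latent code, z' = z + delta with most coordinates of delta zero, instead of a free-form generator. Soft-thresholding below is an illustrative stand-in for however the paper enforces sparsity.

```python
# Sketch of the sparse-transfer constraint: translation edits only a
# few latent coordinates, leaving the rest of the code untouched.
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm: shrinks small entries to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
z = rng.normal(size=32)                 # latent code of the source image
raw_shift = rng.normal(size=32) * 0.5   # unconstrained attribute direction
delta = soft_threshold(raw_shift, lam=0.6)

z_target = z + delta                    # sparse edit: most coords unchanged
print((delta != 0).sum(), "of", z.size, "coordinates modified")
```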
- Meta Transfer Learning for Emotion Recognition [42.61707533351803]
We propose a PathNet-based transfer learning method that transfers emotional knowledge learned from one visual/audio emotion domain to another.
Our proposed system improves emotion recognition performance substantially over recently proposed transfer learning methods based on fine-tuning pre-trained models.
arXiv Detail & Related papers (2020-06-23T00:25:28Z)
- Image Sentiment Transfer [84.91653085312277]
We introduce an important but still unexplored research task -- image sentiment transfer.
We propose an effective and flexible framework that performs image sentiment transfer at the object level.
For the core object-level sentiment transfer, we propose a novel Sentiment-aware GAN (SentiGAN).
arXiv Detail & Related papers (2020-06-19T19:28:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.