Operational Latent Spaces
- URL: http://arxiv.org/abs/2406.02699v1
- Date: Tue, 4 Jun 2024 18:25:15 GMT
- Title: Operational Latent Spaces
- Authors: Scott H. Hawley, Austin R. Tackett
- Abstract summary: We investigate the construction of latent spaces through self-supervised learning to support semantically meaningful operations.
Some operational latent spaces are found to have arisen "unintentionally" in the progress toward some self-supervised learning objective.
We focus on the intentional creation of operational latent spaces via self-supervised learning, including the introduction of rotation operators via a novel "FiLMR" layer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the construction of latent spaces through self-supervised learning to support semantically meaningful operations. Analogous to operational amplifiers, these "operational latent spaces" (OpLaS) not only demonstrate semantic structure such as clustering but also support common transformational operations with inherent semantic meaning. Some operational latent spaces are found to have arisen "unintentionally" in the progress toward some (other) self-supervised learning objective, in which unintended but still useful properties are discovered among the relationships of points in the space. Other spaces may be constructed "intentionally" by developers stipulating certain kinds of clustering or transformations intended to produce the desired structure. We focus on the intentional creation of operational latent spaces via self-supervised learning, including the introduction of rotation operators via a novel "FiLMR" layer, which can be used to enable ring-like symmetries found in some musical constructions.
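The FiLMR layer is described here only by its name (suggesting FiLM-style conditioning combined with rotation) and by its purpose of enabling ring-like symmetries, so the PyTorch sketch below is just one plausible reading of that description: the class name, the angle-prediction head, and the grouping of latent dimensions into 2-D rotation planes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FiLMRotation(nn.Module):
    """Hypothetical FiLM-style layer that rotates pairs of latent dimensions
    by conditioning-dependent angles (an illustrative guess, not the paper's code)."""

    def __init__(self, cond_dim: int, latent_dim: int):
        super().__init__()
        assert latent_dim % 2 == 0, "latent_dim must be even to form 2-D rotation planes"
        # Predict one rotation angle per 2-D plane from the conditioning vector.
        self.to_angles = nn.Linear(cond_dim, latent_dim // 2)

    def forward(self, z: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        theta = self.to_angles(cond)              # (batch, latent_dim // 2)
        x, y = z[:, 0::2], z[:, 1::2]             # split latent into (x, y) pairs
        cos, sin = torch.cos(theta), torch.sin(theta)
        x_rot = cos * x - sin * y                 # standard 2-D rotation per plane
        y_rot = sin * x + cos * y
        return torch.stack((x_rot, y_rot), dim=-1).reshape(z.shape)  # re-interleave pairs
```

Because each plane is rotated rigidly, repeatedly applying the layer with a fixed angle of 2*pi/12 returns a latent code to its starting point after twelve steps, the kind of ring-like structure (e.g. a circle of fifths) the abstract alludes to.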
Related papers
- Continuum Attention for Neural Operators [6.425471760071227]
We study transformers in the function space setting.
We prove that the attention mechanism as implemented in practice is a Monte Carlo or finite difference approximation of this operator.
For this reason we also introduce a function space generalization of the patching strategy from computer vision, and introduce a class of associated neural operators.
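The operator in question is not reproduced in this summary, but the standard kernel-integral view of attention makes the Monte Carlo claim concrete; the notation below (query/key/value maps q, k, v over a domain D, uniform sampling) is illustrative rather than the paper's own.

```latex
\[
(\mathcal{K}v)(x)
  = \int_{D} \frac{\exp\!\big(\langle q(x), k(y)\rangle\big)}
                  {\int_{D} \exp\!\big(\langle q(x), k(y')\rangle\big)\,\mathrm{d}y'}\; v(y)\,\mathrm{d}y
  \;\approx\;
  \sum_{j=1}^{N} \frac{\exp\!\big(\langle q(x), k(y_j)\rangle\big)}
                      {\sum_{j'=1}^{N} \exp\!\big(\langle q(x), k(y_{j'})\rangle\big)}\, v(y_j),
  \qquad y_j \sim \mathrm{Unif}(D).
\]
```

The right-hand side is exactly softmax attention evaluated on the N sampled points, which is the sense in which discrete attention approximates the continuum operator.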
arXiv Detail & Related papers (2024-06-10T17:25:46Z)
- Transport of Algebraic Structure to Latent Embeddings [8.693845596949892]
Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space.
How can we learn to "union" two sets using only their latent embeddings while respecting associativity?
We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space.
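The paper's construction is not detailed in this summary; one simple way to obtain an associative (and commutative) "union" on embeddings, shown below as an illustrative sketch rather than the authors' method, is to transport an operation that already satisfies those laws, such as element-wise max, through a learned map f and an (approximate) inverse f_inv.

```python
import torch
import torch.nn as nn

class LatentUnion(nn.Module):
    """Illustrative latent 'union' operator (hypothetical, not the paper's method).
    Associativity/commutativity are inherited from the element-wise max applied in
    the transformed space, exactly so whenever f_inv is a true inverse of f."""

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.f_inv = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # union(a, b) = f_inv(max(f(a), f(b))); f_inv is trained to invert f,
        # so the algebraic laws hold approximately in practice.
        return self.f_inv(torch.maximum(self.f(a), self.f(b)))
```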
arXiv Detail & Related papers (2024-05-27T02:24:57Z)
- Discovering Class-Specific GAN Controls for Semantic Image Synthesis [73.91655061467988]
We propose a novel method for finding spatially disentangled class-specific directions in the latent space of pretrained SIS models.
We show that the latent directions found by our method can effectively control the local appearance of semantic classes.
arXiv Detail & Related papers (2022-12-02T21:39:26Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding [34.19666841489646]
We show how a robot can autonomously discover novel semantic classes and improve accuracy on known classes when exploring an unknown environment.
We develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model.
In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work.
arXiv Detail & Related papers (2022-06-21T18:41:51Z)
- Structure-Aware Feature Generation for Zero-Shot Learning [108.76968151682621]
We introduce a novel structure-aware feature generation scheme, termed as SA-GAN, to account for the topological structure in learning both the latent space and the generative networks.
Our method significantly enhances the generalization capability on unseen classes and consequently improves classification performance.
arXiv Detail & Related papers (2021-08-16T11:52:08Z)
- Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities [54.615003524001686]
This paper proposes a novel approach for unsupervised sub-action learning in complex activities.
The proposed method maps both visual and temporal representations to a latent space where the sub-actions are learnt discriminatively.
We show that the proposed combination of visual-temporal embedding and discriminative latent concepts allows robust action representations to be learned in an unsupervised setting.
arXiv Detail & Related papers (2021-04-30T20:07:27Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
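As far as the published description of this closed-form factorization (SeFa) goes, the decomposition reduces to an eigendecomposition of A^T A, where A is the weight of the first transformation applied to the latent code; the sketch below uses illustrative names and shapes.

```python
import torch

def closed_form_directions(A: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Sketch of SeFa-style latent direction discovery.
    A: (out_dim, latent_dim) weight of the first layer acting on the latent code z.
    Returns the k unit-norm directions along which A changes its output the most,
    i.e. the top eigenvectors of A^T A."""
    eigvals, eigvecs = torch.linalg.eigh(A.T @ A)          # ascending eigenvalues
    idx = torch.argsort(eigvals, descending=True)[:k]      # largest first
    return eigvecs[:, idx].T                               # (k, latent_dim)

# Illustrative usage: edit a latent code along the strongest direction.
# z_edit = z + alpha * closed_form_directions(A)[0]
```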
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- A Novel Perspective to Zero-shot Learning: Towards an Alignment of Manifold Structures via Semantic Feature Expansion [17.48923061278128]
A common practice in zero-shot learning is to train a projection between the visual and semantic feature spaces with labeled seen classes examples.
Under such a paradigm, most existing methods easily suffer from the domain shift problem, which weakens zero-shot recognition performance.
We propose a novel model called AMS-SFE that considers the alignment of manifold structures by semantic feature expansion.
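The visual-to-semantic projection referred to as "common practice" can be as simple as a ridge-regression map fitted on seen classes; the generic sketch below (not AMS-SFE's feature expansion or manifold alignment) shows that baseline and where domain shift enters, since unseen-class predictions rely entirely on a projection fitted to seen classes only.

```python
import numpy as np

def fit_projection(X: np.ndarray, S: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Ridge regression W mapping visual features X (n, d_v) to the semantic
    embeddings S (n, d_s) of their seen-class labels."""
    d_v = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d_v), X.T @ S)   # (d_v, d_s)

def predict_class(x: np.ndarray, W: np.ndarray, class_semantics: np.ndarray) -> int:
    """Assign the class (seen or unseen) whose semantic vector is nearest
    to the projected visual feature."""
    s_hat = x @ W
    return int(np.argmin(np.linalg.norm(class_semantics - s_hat, axis=1)))
```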
arXiv Detail & Related papers (2020-04-30T14:08:10Z)
- Trajectory annotation using sequences of spatial perception [0.0]
In the near future, more and more machines will perform tasks in the vicinity of human spaces.
This work builds a foundation for annotating such machine trajectories.
We propose an unsupervised learning approach based on a neural autoencoding that learns semantically meaningful continuous encodings of prototypical trajectory data.
arXiv Detail & Related papers (2020-04-11T12:22:27Z)
- Weakly-Supervised Reinforcement Learning for Controllable Behavior [126.04932929741538]
Reinforcement learning (RL) is a powerful framework for learning to take actions to solve tasks.
In many settings, an agent must winnow down the inconceivably large space of all possible tasks to the single task that it is currently being asked to solve.
We introduce a framework for using weak supervision to automatically disentangle this semantically meaningful subspace of tasks from the enormous space of nonsensical "chaff" tasks.
arXiv Detail & Related papers (2020-04-06T17:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.