The Immersion of Directed Multi-graphs in Embedding Fields. Generalisations
- URL: http://arxiv.org/abs/2004.13384v1
- Date: Tue, 28 Apr 2020 09:28:08 GMT
- Title: The Immersion of Directed Multi-graphs in Embedding Fields. Generalisations
- Authors: Bogdan Bocse and Ioan Radu Jinga
- Abstract summary: This paper outlines a generalised model for representing hybrids of relational-categorical, symbolic, perceptual-sensory and perceptual-latent data.
This variety of representation is currently used by various machine-learning models in computer vision and NLP/NLU.
It is achieved by endowing a directed Tensor-Typed Multi-Graph with at least some edge attributes that represent embeddings from various latent spaces.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The purpose of this paper is to outline a generalised model for representing hybrids of relational-categorical, symbolic, perceptual-sensory and perceptual-latent data, so as to embody, in the same architectural data layer, representations for the input, output and latent tensors. This variety of representation is currently used by various machine-learning models in computer vision, NLP/NLU and reinforcement learning, and it allows for the direct application of cross-domain queries and functions. This is achieved by endowing a directed Tensor-Typed Multi-Graph with at least some edge attributes which represent embeddings from various latent spaces, so as to define, construct and compute new similarity and distance relationships between and across tensorial forms, including visual, linguistic and auditory latent representations, thus stitching the logical-categorical view of the observed universe to the Bayesian/statistical view.
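As a concrete illustration of the data layer the abstract describes, the sketch below builds a directed multi-graph whose edges carry embedding vectors from different latent spaces and then computes a cosine-similarity relationship across modalities. This is a minimal sketch, not the paper's implementation; the node names, edge types, embedding dimensions and the choice of networkx/numpy are illustrative assumptions.

```python
# Minimal sketch (assumed names/dimensions): a directed multi-graph whose edge
# attributes hold embeddings from different latent spaces, plus a similarity
# query computed across those embedding fields.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

G = nx.MultiDiGraph()

# Nodes stand for observed entities; edges carry typed latent embeddings.
G.add_node("image_42", modality="visual")
G.add_node("caption_42", modality="linguistic")
G.add_node("concept:dog", modality="symbolic")

# Edge attributes are tensors (here 1-D embeddings) from distinct latent spaces.
G.add_edge("image_42", "concept:dog", key="visual_emb",
           space="vision_latent", embedding=rng.normal(size=128))
G.add_edge("caption_42", "concept:dog", key="text_emb",
           space="text_latent", embedding=rng.normal(size=128))

def cosine(u, v):
    """Similarity relationship between two tensorial (vector) forms."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A cross-modal similarity relationship computed over the embedding field:
vis = G["image_42"]["concept:dog"]["visual_emb"]["embedding"]
txt = G["caption_42"]["concept:dog"]["text_emb"]["embedding"]
print("visual-vs-text similarity around 'concept:dog':", cosine(vis, txt))
```

In a real system the two edge embeddings would come from trained vision and language encoders (here they are random placeholders), and richer distance functions could be attached per edge type.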
Related papers
- Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion better than standard methods.
arXiv Detail & Related papers (2024-11-08T17:40:43Z)
- Latent Functional Maps: a spectral framework for representation alignment [34.20582953800544]
We introduce a multi-purpose framework for the representation learning community, which makes it possible to: (i) compare different spaces in an interpretable way and measure their intrinsic similarity; (ii) find correspondences between them, in both unsupervised and weakly supervised settings; and (iii) effectively transfer representations between distinct spaces.
We validate our framework on various applications, ranging from stitching to retrieval tasks, and on multiple modalities, demonstrating that Latent Functional Maps can serve as a swiss-army knife for representation alignment.
arXiv Detail & Related papers (2024-06-20T10:43:28Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z)
- Metric Distribution to Vector: Constructing Data Representation via Broad-Scale Discrepancies [15.40538348604094]
We present a novel embedding strategy named $\mathbf{MetricDistribution2vec}$ to extract distribution characteristics into the vectorial representation of each data sample.
We demonstrate the application and effectiveness of our representation method in the supervised prediction tasks on extensive real-world structural graph datasets.
arXiv Detail & Related papers (2022-10-02T03:18:30Z)
- Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z)
- Graph Pattern Loss based Diversified Attention Network for Cross-Modal Retrieval [10.420129873840578]
Cross-modal retrieval aims to enable flexible retrieval experience by combining multimedia data such as image, video, text, and audio.
A core idea of unsupervised approaches is to mine the correlations among different object representations so as to achieve satisfactory retrieval performance without requiring expensive labels.
We propose a Graph Pattern Loss based Diversified Attention Network (GPLDAN) for unsupervised cross-modal retrieval.
arXiv Detail & Related papers (2021-06-25T10:53:07Z)
- Cross-Modal Discrete Representation Learning [73.68393416984618]
We present a self-supervised learning framework that learns a representation that captures finer levels of granularity across different modalities.
Our framework relies on a discretized embedding space created via vector quantization that is shared across different modalities.
arXiv Detail & Related papers (2021-06-10T00:23:33Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models relational-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Structured (De)composable Representations Trained with Neural Networks [21.198279941828112]
A template representation refers to the generic representation that captures the characteristics of an entire class.
The proposed technique uses end-to-end deep learning to learn structured and composable representations from input images and discrete labels.
We prove that the representations have a clear structure that allows the representation to be decomposed into factors representing classes and environments.
arXiv Detail & Related papers (2020-07-07T10:20:31Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor (see the minimal sketch after this list).
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
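The last entry above describes convolution over a collection of graphs stacked into an adjacency tensor. The sketch below shows one plausible form of such a layer, propagating node features over each normalized graph slice and mixing the slices with weights. This is a hedged illustration assuming the combination rule relu(sum_r alpha_r * norm(A_r) X W); the layer actually used in the cited TGCN paper may differ, and all names and dimensions here are hypothetical.

```python
# Hypothetical tensor-graph convolution layer: R graphs over the same N nodes
# are stored as an adjacency tensor A of shape (R, N, N); each slice is
# symmetrically normalized, propagated, and the slices are mixed with weights.
import numpy as np

def normalize_adj(A_r):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} of one graph slice."""
    A_hat = A_r + np.eye(A_r.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def tensor_graph_conv(A, X, W, alpha):
    """One assumed TGCN-style layer: relu(sum_r alpha_r * norm(A_r) @ X @ W)."""
    out = sum(a_r * normalize_adj(A[r]) @ X @ W for r, a_r in enumerate(alpha))
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
R, N, F_in, F_out = 3, 10, 8, 4                 # relations, nodes, feature dims
A = (rng.random((R, N, N)) < 0.2).astype(float)  # random multi-relational graph
A = np.maximum(A, A.transpose(0, 2, 1))          # make each slice symmetric
X = rng.normal(size=(N, F_in))                   # node features
W = rng.normal(size=(F_in, F_out))               # layer weights
alpha = np.ones(R) / R                           # uniform mixing over relations

H = tensor_graph_conv(A, X, W, alpha)
print(H.shape)  # (10, 4): node embeddings after one multi-relational layer
```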