Embedding Vector Differences Can Be Aligned With Uncertain Intensional
Logic Differences
- URL: http://arxiv.org/abs/2005.12535v1
- Date: Tue, 26 May 2020 06:20:32 GMT
- Title: Embedding Vector Differences Can Be Aligned With Uncertain Intensional
Logic Differences
- Authors: Ben Goertzel, Mike Duncan, Debbie Duong, Nil Geisweiller, Hedra Seid,
Abdulrahman Semrie, Man Hin Leung, Matthew Iklé
- Abstract summary: The DeepWalk algorithm is used to assign embedding vectors to nodes in the Atomspace weighted, labeled hypergraph that is used to represent knowledge in the OpenCog AGI system.
It is shown that vector difference operations between embedding vectors are, in appropriate conditions, approximately alignable with "intensional difference" operations between the hypergraph nodes corresponding to the embedding vectors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The DeepWalk algorithm is used to assign embedding vectors to nodes in the
Atomspace weighted, labeled hypergraph that is used to represent knowledge in
the OpenCog AGI system, in the context of an application to probabilistic
inference regarding the causes of longevity based on data from biological
ontologies and genomic analyses. It is shown that vector difference operations
between embedding vectors are, in appropriate conditions, approximately
alignable with "intensional difference" operations between the hypergraph nodes
corresponding to the embedding vectors. This relationship hints at a broader
functorial mapping between uncertain intensional logic and vector arithmetic,
and opens the door for using embedding vector algebra to guide intensional
inference control.
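To make the claimed correspondence concrete, here is a minimal sketch with invented property memberships and embedding coordinates standing in for real Atomspace data: the PLN-style intensional difference is modeled as a fuzzy set difference over node properties, and the alignment is probed by comparing difference vectors across two concept pairs.

```python
# Toy illustration (not the paper's code) of the claimed alignment between
# embedding-vector differences and intensional differences. All concept
# names, property degrees, and embedding coordinates below are invented.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Fuzzy property memberships (values in [0, 1]) for four concepts. In PLN,
# the intensional difference of A over B is driven by properties A has that
# B lacks, modeled here as the fuzzy set difference max(A - B, 0).
props = {
    "A": np.array([0.9, 0.8, 0.1]),
    "B": np.array([0.2, 0.7, 0.1]),
    "C": np.array([0.8, 0.9, 0.2]),
    "D": np.array([0.1, 0.8, 0.2]),
}

def intensional_diff(x, y):
    return np.maximum(props[x] - props[y], 0.0)

# Hypothetical DeepWalk embeddings of the same nodes (invented numbers).
emb = {
    "A": np.array([0.40, 0.10, 0.95]),
    "B": np.array([0.05, 0.12, 0.90]),
    "C": np.array([0.42, 0.15, 0.80]),
    "D": np.array([0.08, 0.16, 0.78]),
}

# The claimed alignment, informally: pairs with similar intensional
# differences should also have similar embedding-vector differences.
print(cosine(intensional_diff("A", "B"), intensional_diff("C", "D")))
print(cosine(emb["A"] - emb["B"], emb["C"] - emb["D"]))
```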
Related papers
- An Ad-hoc graph node vector embedding algorithm for general knowledge graphs using Kinetica-Graph [0.0]
This paper discusses how to generate general graph node embeddings from knowledge graph representations.
The embedded space is composed of a number of sub-features to mimic both local affinity and remote structural relevance.
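As a rough illustration of composing an embedding from such sub-features (the specific features below are assumptions for the sketch, not Kinetica-Graph's actual choices): a local sub-feature captures affinity via node degree, and a remote sub-feature captures structural relevance via distances to a few landmark nodes.

```python
# Minimal sketch of the idea as summarized above: build each node's vector
# from sub-features for local affinity and remote structural relevance.
# The example graph and landmark choice are arbitrary.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()          # example graph
landmarks = [0, 16, 33]             # arbitrary landmark nodes

def embed(node):
    local = [G.degree(node) / G.number_of_nodes()]            # local affinity
    dists = nx.single_source_shortest_path_length(G, node)    # remote structure
    remote = [dists.get(l, G.number_of_nodes()) for l in landmarks]
    return np.array(local + remote, dtype=float)

vectors = {n: embed(n) for n in G.nodes}
print(vectors[0])
```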
arXiv Detail & Related papers (2024-07-22T14:43:10Z)
- Knowledge Composition using Task Vectors with Learned Anisotropic Scaling [51.4661186662329]
We introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level.
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters.
We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives.
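A minimal sketch of the composition step, with invented parameter blocks and shapes (this is not the authors' code); only the per-block coefficients lam[i, b] would be trained.

```python
# Anisotropic task-vector composition in the spirit of aTLAS: standard task
# arithmetic uses one scalar per task vector, whereas here each parameter
# *block* b of task vector i gets its own learnable coefficient lam[i, b].
import numpy as np

blocks = ["layer1.weight", "layer2.weight"]

theta_pre = {b: np.random.randn(4, 4) for b in blocks}           # pre-trained weights
task_vecs = [{b: 0.01 * np.random.randn(4, 4) for b in blocks}   # tau_i = theta_i - theta_pre
             for _ in range(3)]

# Learned coefficients lam[i, b]: the only trainable parameters.
lam = np.random.rand(len(task_vecs), len(blocks))

def compose(theta_pre, task_vecs, lam):
    theta = {}
    for j, b in enumerate(blocks):
        theta[b] = theta_pre[b] + sum(lam[i, j] * tv[b]
                                      for i, tv in enumerate(task_vecs))
    return theta

theta_new = compose(theta_pre, task_vecs, lam)
print(theta_new["layer1.weight"].shape)
```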
arXiv Detail & Related papers (2024-07-03T07:54:08Z)
- An Intrinsic Vector Heat Network [64.55434397799728]
This paper introduces a novel neural network architecture for learning tangent vector fields on surfaces embedded in 3D.
We introduce a trainable vector heat diffusion module to spatially propagate vector-valued feature data across the surface.
We also demonstrate the effectiveness of our method on the useful industrial application of quadrilateral mesh generation.
arXiv Detail & Related papers (2024-06-14T00:40:31Z)
- Deciphering 'What' and 'Where' Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations [15.59251297818324]
We present an approach for analyzing grouping information contained within a neural network's activations.
We exploit features from all layers, obviating the need to guess which part of the model contains relevant information.
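A toy sketch of that recipe, with random arrays standing in for real activations: concatenate per-position features from every layer, build an affinity matrix, and cluster the low-frequency eigenvectors of the graph Laplacian.

```python
# Minimal sketch (not the authors' pipeline): spectral clustering over
# features gathered from all layers, so no single layer must be chosen.
import numpy as np
from scipy.cluster.vq import kmeans2

n_pixels, widths = 64, [8, 16, 32]      # 64 positions, 3 layers of varying width
feats = [np.random.randn(n_pixels, d) for d in widths]   # placeholder activations

# Normalize each layer's features and concatenate across layers.
F = np.concatenate(
    [f / np.linalg.norm(f, axis=1, keepdims=True) for f in feats], axis=1)

W = np.exp(F @ F.T)                      # affinity between positions
D = np.diag(W.sum(axis=1))
L = D - W                                # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:4]              # low-frequency eigenvectors

_, labels = kmeans2(embedding, 2, seed=0)  # group positions into 2 clusters
print(labels)
```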
arXiv Detail & Related papers (2023-12-11T01:20:34Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs).
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
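A sketch of the construction this points to, with the filter form and toy data assumed rather than taken from the paper: the empirical cross-covariance between inputs and outputs plays the role of the graph shift operator in a polynomial graph filter.

```python
# Cross-covariance as graph shift operator (illustrative assumption):
# build S from data, then run a simple polynomial graph filter with it.
import numpy as np

n, samples = 8, 200
X = np.random.randn(samples, n)                          # input data
Y = X @ np.random.randn(n, n) + 0.1 * np.random.randn(samples, n)  # output data

S = (X.T @ Y) / samples          # empirical cross-covariance as graph shift
S = 0.5 * (S + S.T)              # symmetrize so it behaves like an adjacency

def graph_filter(x, S, taps):
    """Polynomial graph filter: sum_k h_k S^k x."""
    out, Skx = np.zeros_like(x), x.copy()
    for h in taps:
        out += h * Skx
        Skx = S @ Skx
    return out

x = np.random.randn(n)
print(graph_filter(x, S, taps=[1.0, 0.5, 0.25]))
```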
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation [70.540936204654]
The dependence on training data of the Gibbs algorithm (GA) is analytically characterized.
This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets.
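For reference, a minimal sketch of the Gibbs algorithm itself on a toy hypothesis grid (losses and temperature are invented): a prior is exponentially tilted by the empirical risk on the training dataset, and a model is drawn from the resulting Gibbs posterior.

```python
# The Gibbs algorithm (GA): sample a hypothesis with probability
# proportional to prior(theta) * exp(-beta * empirical_risk(theta, data)).
import numpy as np

thetas = np.linspace(-2, 2, 41)             # finite hypothesis grid
train = np.array([0.1, -0.2, 0.05])         # toy training dataset

def empirical_risk(theta, data):
    return np.mean((data - theta) ** 2)     # squared loss as an example

beta = 5.0                                   # inverse temperature
prior = np.ones_like(thetas) / len(thetas)
risks = np.array([empirical_risk(t, train) for t in thetas])
gibbs = prior * np.exp(-beta * risks)
gibbs /= gibbs.sum()                         # Gibbs posterior over hypotheses

theta_sample = np.random.choice(thetas, p=gibbs)
print(theta_sample)
```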
arXiv Detail & Related papers (2023-06-21T16:51:50Z)
- An Algorithm for Routing Vectors in Sequences [0.0]
We propose a routing algorithm that takes a sequence of vectors and computes a new sequence with specified length and vector size.
Each output vector maximizes "bang per bit," the difference between a net benefit to use and net cost to ignore data, by better predicting the input vectors.
arXiv Detail & Related papers (2022-11-20T16:20:45Z)
- Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation [11.602121447683597]
We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes.
We leverage these correlations by simply modifying the decoding method used by the server to estimate the mean.
We provide an analysis of the resulting estimation error as well as experiments for PCA, K-Means and Logistic Regression.
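A toy sketch of one way such a decoding tweak could look, assuming rand-k sparsification and temporal correlation (the correlation-aware decoder below is illustrative, not the paper's exact rule): fill each node's unsent coordinates from its previous-round vector instead of zeros.

```python
# Naive vs. correlation-aware server-side decoding for sparsified mean
# estimation. Vectors across rounds are temporally correlated by construction.
import numpy as np

d, k, nodes = 10, 3, 5
rng = np.random.default_rng(0)
prev = [rng.standard_normal(d) for _ in range(nodes)]        # last round's vectors
curr = [p + 0.05 * rng.standard_normal(d) for p in prev]     # temporally correlated

def sparsify(v, k, rng):
    idx = rng.choice(len(v), size=k, replace=False)          # rand-k coordinates
    return idx, v[idx]

naive, aware = np.zeros(d), np.zeros(d)
for i in range(nodes):
    idx, vals = sparsify(curr[i], k, rng)
    dense = np.zeros(d)
    dense[idx] = vals * (d / k)            # unbiased rand-k estimate (zeros elsewhere)
    naive += dense / nodes
    filled = prev[i].copy()                # reuse stale coordinates for unsent entries
    filled[idx] = vals
    aware += filled / nodes

true_mean = np.mean(curr, axis=0)
print(np.linalg.norm(naive - true_mean), np.linalg.norm(aware - true_mean))
```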
arXiv Detail & Related papers (2021-10-14T22:24:26Z)
- Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
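A minimal sketch of the squeeze-then-reason pattern described above, with layer sizes and the channel-space transform invented for illustration: squeeze the spatial map to one channel-wise global vector, transform it, and modulate the input.

```python
# Squeeze the (C, H, W) feature map to a C-dimensional global vector,
# "reason" in channel space, then broadcast back over the spatial map.
import numpy as np

def squeeze_reasoning(x):
    """x: feature map of shape (channels, height, width)."""
    c = x.shape[0]
    z = x.mean(axis=(1, 2))                 # squeeze: channel-wise global vector
    W = 0.1 * np.random.randn(c, c)         # placeholder for a learned transform
    z = np.tanh(W @ z)                      # "reasoning" in channel space
    return x * (1.0 + z[:, None, None])     # modulate the original feature map

feat = np.random.randn(16, 8, 8)
print(squeeze_reasoning(feat).shape)
```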
arXiv Detail & Related papers (2020-11-06T12:17:01Z)
- Variable Binding for Sparse Distributed Representations: Theory and Applications [4.150085009901543]
Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap.
VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population.
We show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality.
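A toy contrast of the two binding operations being related (an illustration of the objects involved, not the paper's equivalence proof): Hadamard binding of dense bipolar vectors keeps dimensionality fixed, while tensor-product binding of sparse vectors raises it from n to n*n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Dense VSA binding (e.g., MAP-style architectures): elementwise product.
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)
dense_bound = a * b                   # still n-dimensional
unbound = dense_bound * b             # Hadamard binding is its own inverse
assert np.array_equal(unbound, a)

# Sparse tensor-product binding: outer product of two sparse vectors.
def sparse_vec(n, active, rng):
    v = np.zeros(n)
    v[rng.choice(n, size=active, replace=False)] = 1.0
    return v

s, t = sparse_vec(n, 10, rng), sparse_vec(n, 10, rng)
sparse_bound = np.outer(s, t)         # dimensionality grows to n*n
print(dense_bound.shape, sparse_bound.shape)
```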
arXiv Detail & Related papers (2020-09-14T20:40:09Z)
- Analyzing Upper Bounds on Mean Absolute Errors for Deep Neural Network Based Vector-to-Vector Regression [79.86233860519621]
We show that, in vector-to-vector regression utilizing deep neural networks (DNNs), the mean absolute error (MAE) serves as a generalized loss between the predicted and expected feature vectors.
We propose upper bounds on the MAE for DNN-based vector-to-vector regression.
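For concreteness, a tiny sketch of the quantity being bounded, with a stand-in regressor in place of a trained DNN:

```python
# The MAE of a vector-to-vector regressor, averaged over output
# dimensions and samples; the predictions below are placeholders.
import numpy as np

samples, dim = 100, 8
targets = np.random.randn(samples, dim)
preds = targets + 0.1 * np.random.randn(samples, dim)   # pretend DNN outputs

mae = np.mean(np.abs(preds - targets))    # the quantity the paper upper-bounds
print(mae)
```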
arXiv Detail & Related papers (2020-08-04T19:39:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.