Computing on Functions Using Randomized Vector Representations
- URL: http://arxiv.org/abs/2109.03429v1
- Date: Wed, 8 Sep 2021 04:39:48 GMT
- Title: Computing on Functions Using Randomized Vector Representations
- Authors: E. Paxon Frady, Denis Kleyko, Christopher J. Kymn, Bruno A. Olshausen,
Friedrich T. Sommer
- Abstract summary: We call this new function encoding and computing framework Vector Function Architecture (VFA).
Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems.
- Score: 4.066849397181077
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Vector space models for symbolic processing that encode symbols by random
vectors have been proposed in cognitive science and connectionist communities
under the names Vector Symbolic Architecture (VSA), and, synonymously,
Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function
spaces by mapping continuous-valued data into a vector space such that the
inner product between the representations of any two data points represents a
similarity kernel. By analogy to VSA, we call this new function encoding and
computing framework Vector Function Architecture (VFA). In VFAs, vectors can
represent individual data points as well as elements of a function space (a
reproducing kernel Hilbert space). The algebraic vector operations, inherited
from VSA, correspond to well-defined operations in function space. Furthermore,
we study a previously proposed method for encoding continuous data, fractional
power encoding (FPE), which uses exponentiation of a random base vector to
produce randomized representations of data points and fulfills the kernel
properties for inducing a VFA. We show that the distribution from which
elements of the base vector are sampled determines the shape of the FPE kernel,
which in turn induces a VFA for computing with band-limited functions. In
particular, VFAs provide an algebraic framework for implementing large-scale
kernel machines with random features, extending Rahimi and Recht, 2007.
Finally, we demonstrate several applications of VFA models to problems in image
recognition, density estimation and nonlinear regression. Our analyses and
results suggest that VFAs constitute a powerful new framework for representing
and manipulating functions in distributed neural systems, with myriad
applications in artificial intelligence.
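The fractional power encoding described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming complex phasor vectors with uniformly sampled phases (all names and sizes are illustrative, not the paper's reference implementation): exponentiating a random base vector encodes a scalar, and the normalized inner product between two codes concentrates around a sinc-shaped similarity kernel, consistent with the band-limited case discussed above.

```python
import numpy as np

# Minimal sketch of fractional power encoding (FPE), assuming complex phasor
# vectors; parameters and helper names here are illustrative assumptions.
rng = np.random.default_rng(0)
D = 10_000                                  # dimensionality of the VFA vectors

# Base vector: random phases drawn uniformly from (-pi, pi). The sampling
# distribution of the phases determines the kernel shape (here ~ sinc).
phi = rng.uniform(-np.pi, np.pi, size=D)
base = np.exp(1j * phi)

def encode(x):
    """Encode a scalar x by exponentiating the base vector component-wise."""
    return base ** x                        # equals exp(1j * x * phi)

def similarity(u, v):
    """Normalized inner product (real part) between two FPE vectors."""
    return np.real(np.vdot(u, v)) / D

# The inner product concentrates around the kernel value sinc(x - y), so the
# randomized code implements a similarity kernel over the reals.
x, y = 0.3, 1.1
print(similarity(encode(x), encode(y)))     # ~ np.sinc(x - y)
print(np.sinc(x - y))
```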
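To connect with the abstract's claim about large-scale kernel machines with random features and nonlinear regression, here is a second hedged sketch: a linear readout trained on the real and imaginary parts of the FPE code behaves like kernel ridge regression with the induced kernel. The toy data, sizes, and helper names are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of the "kernel machine with random features" reading of a VFA,
# in the spirit of Rahimi & Recht (2007); everything below is illustrative.
rng = np.random.default_rng(1)
D = 2000
phi = rng.uniform(-np.pi, np.pi, size=D)           # base phases as above

def features(x):
    """Real random features (cos/sin parts of the FPE phasors) for scalars x."""
    angles = np.outer(np.atleast_1d(x), phi)        # shape (n, D)
    return np.hstack([np.cos(angles), np.sin(angles)]) / np.sqrt(D)

# Toy band-limited regression target with noise.
x_train = rng.uniform(-3, 3, size=200)
y_train = np.sin(2 * x_train) + 0.1 * rng.standard_normal(200)

# Kernel ridge regression in the dual: the Gram matrix Phi @ Phi.T estimates
# the sinc-shaped FPE kernel between training points.
Phi = features(x_train)
alpha = np.linalg.solve(Phi @ Phi.T + 1e-3 * np.eye(len(x_train)), y_train)
w = Phi.T @ alpha                                   # equivalent primal weights

x_test = np.linspace(-3, 3, 7)
print(features(x_test) @ w)                         # approx. np.sin(2 * x_test)
```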
Related papers
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z) - HyperVQ: MLR-based Vector Quantization in Hyperbolic Space [56.4245885674567]
We study the use of hyperbolic spaces for vector quantization (HyperVQ).
We show that HyperVQ performs comparably in reconstruction and generative tasks while outperforming VQ in discriminative tasks and learning a highly disentangled latent space.
arXiv Detail & Related papers (2024-03-18T03:17:08Z) - Function Vectors in Large Language Models [45.267194267587435]
We report the presence of a simple neural mechanism that represents an input-output function as a vector within autoregressive transformer language models (LMs).
Using causal mediation analysis on a diverse range of in-context-learning (ICL) tasks, we find that a small number of attention heads transport a compact representation of the demonstrated task, which we call a function vector (FV).
arXiv Detail & Related papers (2023-10-23T17:55:24Z) - New Equivalences Between Interpolation and SVMs: Kernels and Structured Features [22.231455330003328]
We present a new and flexible analysis framework for proving support vector proliferation (SVP) in an arbitrary reproducing kernel Hilbert space with a flexible class of generative models for the labels.
We show that SVP occurs in many interesting settings not covered by prior work, and we leverage these results to prove novel generalization results for kernel SVM classification.
arXiv Detail & Related papers (2023-05-03T17:52:40Z) - Neural Vector Fields: Implicit Representation by Explicit Learning [63.337294707047036]
We propose a novel 3D representation method, Neural Vector Fields (NVF).
It adopts not only the explicit learning process to manipulate meshes directly, but also the implicit representation of unsigned distance functions (UDFs).
Our method first predicts displacement queries towards the surface and models shapes as text reconstructions.
arXiv Detail & Related papers (2023-03-08T02:36:09Z) - Learning Implicit Feature Alignment Function for Semantic Segmentation [51.36809814890326]
Implicit Feature Alignment function (IFA) is inspired by the rapidly expanding topic of implicit neural representations.
We show that IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.
Our method can be combined with various architectures for further improvement, and it achieves a state-of-the-art accuracy trade-off on common benchmarks.
arXiv Detail & Related papers (2022-06-17T09:40:14Z) - Resonator networks for factoring distributed representations of data structures [3.46969645559477]
We show how data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations.
Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion.
Resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains (a minimal sketch of the factoring dynamics follows after this list).
arXiv Detail & Related papers (2020-07-07T19:24:27Z) - An End-to-End Graph Convolutional Kernel Support Vector Machine [0.0]
A kernel-based support vector machine (SVM) for graph classification is proposed.
The proposed model is trained in a supervised end-to-end manner.
Experimental results demonstrate that the proposed model outperforms existing deep learning baseline models on a number of datasets.
arXiv Detail & Related papers (2020-02-29T09:57:42Z) - Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers the group action is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
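As referenced in the resonator-network entry above, the following is a minimal sketch of the factoring dynamics, assuming bipolar (+1/-1) vectors where binding is elementwise multiplication and each vector is its own unbinding inverse. The codebooks, sizes, and iteration count are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Hedged sketch of a resonator network factoring a bound bipolar vector.
rng = np.random.default_rng(2)
D, M = 2000, 20                                   # vector dimension, codebook size

def sgn(v):
    return np.where(v >= 0, 1, -1)

# Three random codebooks and a composite built from one item of each.
X, Y, Z = (sgn(rng.standard_normal((D, M))) for _ in range(3))
i, j, k = 3, 7, 11
s = X[:, i] * Y[:, j] * Z[:, k]                   # bound data structure

# Start each factor estimate as the superposition of its whole codebook.
xh, yh, zh = sgn(X.sum(1)), sgn(Y.sum(1)), sgn(Z.sum(1))

for _ in range(50):
    # Unbind the other two estimates, then clean up against the codebook
    # (pattern completion by projecting onto the codebook and thresholding).
    xh = sgn(X @ (X.T @ (s * yh * zh)))
    yh = sgn(Y @ (Y.T @ (s * xh * zh)))
    zh = sgn(Z @ (Z.T @ (s * xh * yh)))

# Read out the factor indices as the nearest codebook entries.
print(np.argmax(X.T @ xh), np.argmax(Y.T @ yh), np.argmax(Z.T @ zh))
# expected with high probability: 3 7 11
```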