The Hyperdimensional Transform for Distributional Modelling, Regression
and Classification
- URL: http://arxiv.org/abs/2311.08150v1
- Date: Tue, 14 Nov 2023 13:26:49 GMT
- Title: The Hyperdimensional Transform for Distributional Modelling, Regression
and Classification
- Authors: Pieter Dewulf, Bernard De Baets, Michiel Stock
- Abstract summary: We present the power of the hyperdimensional transform to a broad data science audience.
We show how existing algorithms can be modified and how this transform can lead to a novel, well-founded toolbox.
- Score: 12.693238093510072
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional computing (HDC) is an increasingly popular computing
paradigm with immense potential for future intelligent applications. Although
the main ideas already took form in the 1990s, HDC recently gained significant
attention, especially in the field of machine learning and data science. Next
to efficiency, interoperability and explainability, HDC offers attractive
properties for generalization as it can be seen as an attempt to combine
connectionist ideas from neural networks with symbolic aspects. In recent work,
we introduced the hyperdimensional transform, revealing deep theoretical
foundations for representing functions and distributions as high-dimensional
holographic vectors. Here, we present the power of the hyperdimensional
transform to a broad data science audience. We use the hyperdimensional
transform as a theoretical basis and provide insight into state-of-the-art HDC
approaches for machine learning. We show how existing algorithms can be
modified and how this transform can lead to a novel, well-founded toolbox. Next
to the standard regression and classification tasks of machine learning, our
discussion includes various aspects of statistical modelling, such as
representation, learning and deconvolving distributions, sampling, Bayesian
inference, and uncertainty estimation.
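As a rough illustration of the core idea, the sketch below represents a sampled function as a single high-dimensional vector by bundling value-weighted input encodings, and reads the function back out with inner products. The encoding is a random-Fourier-feature stand-in of our own choosing; the paper's actual normalized encodings and transform definitions are not reproduced from this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hyperdimensional dimensionality
sigma = 0.1         # bandwidth of the stand-in encoding

# Random-Fourier-feature encoding: phi(x) @ phi(y) ~ exp(-(x-y)^2 / (2*sigma^2)).
w = rng.normal(0.0, 1.0 / sigma, D)
b = rng.uniform(0.0, 2.0 * np.pi, D)

def encode(x):
    """Map a scalar to a D-dimensional 'holographic' vector."""
    return np.sqrt(2.0 / D) * np.cos(w * x + b)

# Represent a sampled function as ONE vector by bundling value-weighted encodings.
xs = rng.uniform(0, 1, 200)
ys = np.sin(2 * np.pi * xs)
F = sum(y * encode(x) for x, y in zip(xs, ys))   # transform-style representation
N = sum(encode(x) for x in xs)                   # bundled inputs, for normalization

def evaluate(x):
    """Read f(x) back out of the single vector F (a kernel-smoothing readout)."""
    phi = encode(x)
    return (F @ phi) / (N @ phi)

print(evaluate(0.25), np.sin(2 * np.pi * 0.25))  # close to 1.0
```

The same bundling pattern extends toward the distributional tasks listed above; for instance, bundling unweighted sample encodings yields an empirical mean-embedding-style representation of the sample distribution.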
Related papers
- Deep Generative Modeling Reshapes Compression and Transmission: From Efficiency to Resiliency [12.129722150469968]
We show the dual-functionality of deep generative models that reshapes both data compression for efficiency and transmission error concealment for resiliency.
We show that the kernel of many large generative models is a powerful predictor that can capture complex relationships among semantic latent variables.
arXiv Detail & Related papers (2024-06-10T16:36:02Z)
- Constrained Neural Networks for Interpretable Heuristic Creation to Optimise Computer Algebra Systems [2.8402080392117757]
We present a new methodology for utilising machine learning technology in symbolic computation research.
We explain how a well-known human-designed variable ordering in cylindrical algebraic decomposition may be represented as a constrained neural network.
This allows us to use machine learning methods to optimise it further, leading to new networks of a size similar to the original human-designed one, as in the sketch below.
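As a hedged sketch of the general recipe (the entry above gives no details of the construction), the snippet below encodes a Brown-style variable-ordering heuristic, as commonly described for cylindrical algebraic decomposition, as a fixed linear scorer over per-variable degree features. The polynomial encoding and feature weights are our own illustrative choices; the paper's constrained-network formulation is not reproduced here.

```python
import numpy as np

# Polynomials in (x0, x1, x2), each a list of monomial exponent tuples.
# Example system: {x0^2*x1 + x2, x1^3 + x0*x2^2}
polys = [
    [(2, 1, 0), (0, 0, 1)],
    [(0, 3, 0), (1, 0, 2)],
]

def features(polys, n_vars):
    """Per-variable features in the spirit of Brown's heuristic: max degree,
    max total degree of terms containing the variable, and term count."""
    out = np.zeros((n_vars, 3))
    for v in range(n_vars):
        degs = [m[v] for p in polys for m in p if m[v] > 0]
        tots = [sum(m) for p in polys for m in p if m[v] > 0]
        out[v] = (max(degs, default=0), max(tots, default=0), len(degs))
    return out

# A fixed linear layer whose weights impose lexicographic priority over the
# three features -- one way a human-designed ordering becomes a network layer
# that learning can later perturb, subject to constraints.
weights = np.array([1e4, 1e2, 1.0])
scores = features(polys, 3) @ weights
print(np.argsort(scores))   # suggested variable ordering, lowest score first
```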
arXiv Detail & Related papers (2024-04-26T16:20:04Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
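A minimal sketch of the alternation described above, with k-means-style steps standing in for FEC's learned components; the feature-update rule and cluster count are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(1024, 16))   # stand-in for per-pixel deep features
K = 8
reps = feats[rng.choice(len(feats), K, replace=False)]  # initial representatives

for _ in range(10):
    # Step 1: group pixels into clusters and abstract one representative each.
    dist = ((feats[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    assign = dist.argmin(1)
    reps = np.stack([feats[assign == k].mean(0) if (assign == k).any() else reps[k]
                     for k in range(K)])
    # Step 2: update pixel features with their current representatives
    # (a simple pull toward the representative; FEC learns this update).
    feats = 0.9 * feats + 0.1 * reps[assign]
```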
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
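The reduction mentioned in the last sentence can be illustrated naively: discretize a linear ODE and hand the resulting equality constraints to an off-the-shelf LP solver. This is a toy feasibility program of our own, not the paper's NeuRLP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Solve y' = a*y, y(0) = 1 on [0, 1] as a linear program: forward-Euler
# steps become equality constraints; the objective is irrelevant (feasibility).
a, y0, N = -2.0, 1.0, 100
h = 1.0 / N

A_eq = np.zeros((N + 1, N + 1))
b_eq = np.zeros(N + 1)
A_eq[0, 0], b_eq[0] = 1.0, y0                 # initial condition
for k in range(N):                            # y_{k+1} - (1 + h*a) * y_k = 0
    A_eq[k + 1, k + 1] = 1.0
    A_eq[k + 1, k] = -(1.0 + h * a)

res = linprog(c=np.zeros(N + 1), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (N + 1), method="highs")
print(res.x[-1], np.exp(a))                   # ~0.133 vs exp(-2) ~ 0.135
```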
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strengths of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
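For context, the classical-shadows half of this combination follows a known recipe (randomized Pauli measurements with an inverted measurement channel); the single-qubit sketch below shows that half only, while the neural-network half of the paradigm is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0 + 0j])
# Rotations sending the X-, Y-, Z-eigenbases to the computational basis.
bases = [
    np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),   # X
    np.array([[1, -1j], [1, 1j]]) / np.sqrt(2),                # Y
    I2,                                                        # Z
]

rho = 0.5 * (I2 + 0.6 * X + 0.8 * Z)             # a pure single-qubit state

snaps = []
for _ in range(20000):
    U = bases[rng.integers(3)]
    p0 = np.real((U @ rho @ U.conj().T)[0, 0])   # prob. of outcome |0>
    b = 0 if rng.random() < p0 else 1
    ket = I2[:, [b]]
    # Invert the measurement channel: rho_hat = 3 * U^dag |b><b| U - I.
    snaps.append(3 * U.conj().T @ ket @ ket.conj().T @ U - I2)

est = np.mean(snaps, axis=0)                     # unbiased estimate of rho
print(np.real(np.trace(est @ Z)), np.real(np.trace(rho @ Z)))  # ~0.8 vs 0.8
```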
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for distributed training of large models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
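A minimal sketch of the IST idea as we understand it from the broader literature: the hidden neurons of a network are partitioned across workers, each worker trains its subnetwork independently for a few local steps, and the pieces are written back. The toy task, partition schedule, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, K, lr = 8, 64, 4, 0.05
W1 = rng.normal(size=(d_h, d_in)) * 0.3   # input -> hidden
W2 = rng.normal(size=(1, d_h)) * 0.3      # hidden -> output
X = rng.normal(size=(512, d_in))
y = np.sin(X[:, :1])                      # toy regression target

for _ in range(50):
    parts = np.array_split(rng.permutation(d_h), K)  # resample the partition
    for idx in parts:                     # each "worker" trains independently
        A, B = W1[idx], W2[:, idx]
        for _ in range(5):                # local SGD on the subnetwork alone
            H = np.maximum(A @ X.T, 0)    # (|idx|, n) hidden activations
            err = (B @ H).T - y           # (n, 1) residuals
            gB = err.T @ H.T / len(X)
            gA = ((B.T @ err.T) * (H > 0)) @ X / len(X)
            A -= lr * gA
            B -= lr * gB
        W1[idx], W2[:, idx] = A, B        # write the subnetwork back

pred = (W2 @ np.maximum(W1 @ X.T, 0)).T
print(np.mean((pred - y) ** 2))           # training loss after IST rounds
```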
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Modeling Item Response Theory with Stochastic Variational Inference [8.369065078321215]
We introduce a variational Bayesian inference algorithm for Item Response Theory (IRT).
Applying this method to five large-scale item response datasets yields higher log likelihoods and higher accuracy in imputing missing data.
The algorithm implementation is open-source and easy to use.
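As an illustrative stand-in (the paper's exact model and variational family are not in this excerpt), the sketch below fits a 1PL (Rasch) IRT model with stochastic variational inference: a mean-field Gaussian posterior over abilities and difficulties, trained with a single-sample reparameterized ELBO.

```python
import torch

torch.manual_seed(0)
S, Q = 200, 30                               # students, questions
theta = torch.randn(S)                       # true abilities
diff = torch.randn(Q)                        # true difficulties
resp = torch.bernoulli(torch.sigmoid(theta[:, None] - diff[None, :]))

# Mean-field Gaussian posterior q(theta, diff) with learnable parameters.
mu = torch.zeros(S + Q, requires_grad=True)
log_sig = torch.zeros(S + Q, requires_grad=True)
opt = torch.optim.Adam([mu, log_sig], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(S + Q)
    z = mu + log_sig.exp() * eps             # reparameterized sample
    th, b = z[:S], z[S:]
    logits = th[:, None] - b[None, :]        # Rasch model: P = sigmoid(th - b)
    loglik = -torch.nn.functional.binary_cross_entropy_with_logits(
        logits, resp, reduction="sum")
    prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum()
    entropy = torch.distributions.Normal(mu, log_sig.exp()).entropy().sum()
    loss = -(loglik + prior + entropy)       # negative ELBO
    loss.backward()
    opt.step()

# Posterior means should track the true abilities (up to a shift).
print(torch.corrcoef(torch.stack([mu[:S], theta]))[0, 1].item())
```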
arXiv Detail & Related papers (2021-08-26T05:00:27Z)
- Tensor Methods in Computer Vision and Deep Learning [120.3881619902096]
Tensors, or multidimensional arrays, are data structures that can naturally represent visual data of multiple dimensions.
With the advent of the deep learning paradigm shift in computer vision, tensors have become even more fundamental.
This article provides an in-depth and practical review of tensors and tensor methods in the context of representation learning and deep learning.
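One classic method from this toolbox, CP decomposition fitted by alternating least squares, gives the flavor; this is a textbook sketch, not code from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def unfold(T, mode):                  # mode-n matricization
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):                 # column-wise Kronecker product
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

# Ground-truth rank-3 tensor plus noise.
R, (I, J, K) = 3, (20, 25, 30)
A0, B0, C0 = (rng.normal(size=(n, R)) for n in (I, J, K))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0) + 0.01 * rng.normal(size=(I, J, K))

# Alternating least squares: solve for one factor with the others fixed.
A, B, C = (rng.normal(size=(n, R)) for n in (I, J, K))
for _ in range(50):
    A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
    B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
    C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))

approx = np.einsum("ir,jr,kr->ijk", A, B, C)
print(np.linalg.norm(T - approx) / np.linalg.norm(T))  # small relative residual
```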
arXiv Detail & Related papers (2021-07-07T18:42:45Z)
- A Theoretical Perspective on Hyperdimensional Computing [17.50442191930551]
Hyperdimensional (HD) computing is a set of neurally inspired methods for obtaining high-dimensional, low-precision, distributed representations of data.
HD computing has recently garnered significant interest from the computer hardware community as an energy-efficient, low-latency, and noise-robust tool for solving learning problems.
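The learning setting alluded to here is often a prototype classifier. A minimal hedged sketch: project inputs to random bipolar hypervectors, bundle them per class, and classify by cosine similarity; the encoding is one common choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, n_cls = 10_000, 20, 3

# Toy data: three Gaussian blobs in d dimensions.
X = np.concatenate([rng.normal(c, 1.0, size=(100, d)) for c in (-2, 0, 2)])
y = np.repeat(np.arange(n_cls), 100)

P = rng.normal(size=(D, d))                   # random projection
encode = lambda V: np.sign(V @ P.T)           # bipolar (+-1) hypervectors

H = encode(X)
protos = np.stack([H[y == c].sum(0) for c in range(n_cls)])   # bundle per class
protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)

pred = (encode(X) @ protos.T).argmax(1)       # cosine-similarity readout
print((pred == y).mean())                     # ~1.0 on this easy toy
```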
arXiv Detail & Related papers (2020-10-14T22:39:11Z)
- A New Perspective on Learning Context-Specific Independence [18.273290530700567]
Local structure such as context-specific independence (CSI) has received much attention in the probabilistic graphical model (PGM) literature.
In this paper, we provide a new perspective on how to learn CSIs from data.
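To make the notion concrete: CSI means Y can be independent of A in particular contexts B = b even when it is not for all b. A toy checker over a conditional probability table (the numbers are our own):

```python
import numpy as np

# P(Y=1 | A, B) as a table indexed [a, b]. Here Y is independent of A only
# in the context B = 0: the column for b=0 is constant across a.
p_y1 = np.array([[0.3, 0.9],
                 [0.3, 0.2]])

def csi_contexts(cpt, tol=1e-9):
    """Return the contexts b in which Y is independent of A given B = b."""
    return [b for b in range(cpt.shape[1])
            if np.ptp(cpt[:, b]) <= tol]      # all rows agree in this column

print(csi_contexts(p_y1))  # [0]
```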
arXiv Detail & Related papers (2020-06-12T01:11:02Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
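A hedged sketch in the spirit of this objective (not the paper's GMI estimator): one-layer GCN embeddings and a bilinear discriminator that scores each node's embedding against its own features (positives) versus shuffled features (negatives), trained to maximize a JSD-style mutual-information lower bound.

```python
import torch

torch.manual_seed(0)
n, d, h = 50, 16, 32
X = torch.randn(n, d)                          # node features
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.T) > 0).float() + torch.eye(n)     # symmetric adjacency + self-loops
Dinv = torch.diag(A.sum(1).rsqrt())
A_hat = Dinv @ A @ Dinv                        # normalized adjacency

W = torch.randn(d, h, requires_grad=True)      # GCN weight (encoder)
M = torch.randn(h, d, requires_grad=True)      # bilinear discriminator
opt = torch.optim.Adam([W, M], lr=0.01)
bce = torch.nn.functional.binary_cross_entropy_with_logits

for step in range(300):
    opt.zero_grad()
    H = torch.relu(A_hat @ X @ W)              # one-layer GCN embeddings
    pos = (H @ M * X).sum(1)                   # score (h_i, x_i)
    neg = (H @ M * X[torch.randperm(n)]).sum(1)  # score against shuffled features
    # Maximizing this bound pushes up MI between inputs and representations.
    loss = bce(pos, torch.ones(n)) + bce(neg, torch.zeros(n))
    loss.backward()
    opt.step()
```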
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.