Improved Cleanup and Decoding of Fractional Power Encodings
- URL: http://arxiv.org/abs/2412.00488v1
- Date: Sat, 30 Nov 2024 14:10:48 GMT
- Title: Improved Cleanup and Decoding of Fractional Power Encodings
- Authors: Alicia Bremer, Jeff Orchard
- Abstract summary: High-dimensional vectors have been proposed as a neural method for representing information in the brain using Vector Symbolic Algebras. Previous work has explored decoding and cleaning up these vectors under the noise that arises during computation. We present an iterative optimization method to decode and clean up Fourier Holographic Reduced Representation (FHRR) vectors.
- Score: 0.20718016474717196
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: High-dimensional vectors have been proposed as a neural method for representing information in the brain using Vector Symbolic Algebras (VSAs). While previous work has explored decoding and cleaning up these vectors under the noise that arises during computation, existing methods are limited. Cleanup methods are essential for robust computation within a VSA. However, cleanup methods for continuous-value encodings are not as effective. In this paper, we present an iterative optimization method to decode and clean up Fourier Holographic Reduced Representation (FHRR) vectors that are encoding continuous values. We combine composite likelihood estimation (CLE) and maximum likelihood estimation (MLE) to ensure convergence to the global optimum. We also demonstrate that this method can effectively decode FHRR vectors under different noise conditions, and show that it outperforms existing methods.
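The two operations the abstract refers to, fractional power encoding of a continuous value into an FHRR vector and decoding it back under noise, can be sketched as follows. This is a minimal illustration only: it uses a plain grid search as a stand-in for the paper's iterative CLE/MLE optimizer, and the dimensionality, noise level, and search range are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # FHRR vector dimensionality (assumed for illustration)

# An FHRR base vector is a vector of unit-modulus complex numbers with
# random phases theta; fractional power encoding raises it to a real
# power x elementwise: z(x) = exp(i * theta * x).
theta = rng.uniform(-np.pi, np.pi, size=d)

def encode(x):
    """Fractional power encoding of a scalar x into an FHRR vector."""
    return np.exp(1j * theta * x)

def decode(v, lo=-5.0, hi=5.0, num=5001):
    """Decode by maximizing similarity over a grid of candidate values.

    This is a brute-force maximum-likelihood-style search; the paper's
    iterative CLE/MLE method replaces it with an optimizer that
    converges to the global optimum without exhaustive search.
    """
    grid = np.linspace(lo, hi, num)
    enc = np.exp(1j * np.outer(grid, theta))   # (num, d) candidate encodings
    sims = np.real(enc.conj() @ v) / d         # similarity Re<z(c), v>/d
    return grid[np.argmax(sims)]

# Encode a continuous value, corrupt it with complex Gaussian noise,
# then recover it from the noisy vector.
x_true = 2.37
noise = 0.1 * (rng.standard_normal(d) + 1j * rng.standard_normal(d))
x_hat = decode(encode(x_true) + noise)
print(x_hat)  # close to 2.37
```

Because the similarity kernel `Re<z(c), z(x)>/d` peaks sharply at `c = x` and the noise averages out across the `d` components, the argmax lands near the encoded value even under substantial corruption.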
Related papers
- Efficient Anti-exploration via VQVAE and Fuzzy Clustering in Offline Reinforcement Learning [14.04169447103753]
Pseudo-count is an effective anti-exploration method in offline reinforcement learning (RL) that works by counting state-action pairs. Existing anti-exploration methods count continuous state-action pairs by discretizing these data, but often suffer from the curse of dimensionality and information loss. In this paper, a novel anti-exploration method based on Vector Quantized Variational Autoencoder (VQVAE) and fuzzy clustering is proposed.
arXiv Detail & Related papers (2026-02-08T09:42:06Z) - Action-List Reinforcement Learning Syndrome Decoding for Binary Linear Block Codes [3.3148826359547514]
We describe the methodology for mapping the iterative decoding process into Markov Decision Processes (MDPs). A truncated MDP is proposed to reduce the number of states in the MDP by learning a Hamming ball with a specified radius around codewords. We design an action-list decoder based on the Deep-Q network values that substantially enhances performance.
arXiv Detail & Related papers (2025-07-23T19:42:51Z) - DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation [68.19756761027351]
Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models. We investigate their denoising processes and reinforcement learning methods. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
arXiv Detail & Related papers (2025-06-25T17:35:47Z) - Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - The Stochastic Conjugate Subgradient Algorithm For Kernel Support Vector Machines [1.738375118265695]
This paper proposes an innovative method specifically designed for kernel support vector machines (SVMs). It not only achieves faster computation per iteration but also exhibits enhanced convergence compared to conventional SFO techniques. Our experimental results demonstrate that the proposed algorithm not only maintains but potentially exceeds the scalability of SFO methods.
arXiv Detail & Related papers (2024-07-30T17:03:19Z) - Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS)
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noises.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Nystrom Method for Accurate and Scalable Implicit Differentiation [25.29277451838466]
We show that the Nystrom method consistently achieves comparable or even superior performance to other approaches.
The proposed method avoids numerical instability and can be efficiently computed in matrix operations without iterations.
arXiv Detail & Related papers (2023-02-20T02:37:26Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Lightweight Projective Derivative Codes for Compressed Asynchronous
Gradient Descent [6.055286666916789]
This paper proposes a novel algorithm that encodes the partial derivatives themselves and furthermore optimize the codes by performing lossy compression on the derivative codewords.
The utility of this application of coding theory is a geometrical consequence of the observed fact in optimization research that noise is tolerable, sometimes even helpful, in gradient descent based learning algorithms.
arXiv Detail & Related papers (2022-01-31T04:08:53Z) - Practical Convex Formulation of Robust One-hidden-layer Neural Network
Training [12.71266194474117]
We show that the training of a one-hidden-layer, scalar-output fully-connected ReLULU neural network can be reformulated as a finite-dimensional convex program.
We derive a convex optimization approach to efficiently solve the "adversarial training" problem.
Our method can be applied to binary classification and regression, and provides an alternative to the current adversarial training methods.
arXiv Detail & Related papers (2021-05-25T22:06:27Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content listed here (including all information) and is not responsible for any consequences of its use.