Wavelet Logic Machines: Learning and Reasoning in the Spectral Domain Without Neural Networks
- URL: http://arxiv.org/abs/2507.19514v1
- Date: Fri, 18 Jul 2025 01:28:17 GMT
- Title: Wavelet Logic Machines: Learning and Reasoning in the Spectral Domain Without Neural Networks
- Authors: Andrew Kiruluta
- Abstract summary: We introduce a fully spectral learning framework that eliminates traditional neural layers by operating entirely in the wavelet domain. The model applies learnable nonlinear transformations, including soft-thresholding and gain-phase modulation, directly to wavelet coefficients. It also includes a differentiable wavelet basis selection mechanism, enabling adaptive processing using families such as Haar, Daubechies, and Biorthogonal wavelets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a fully spectral learning framework that eliminates traditional neural layers by operating entirely in the wavelet domain. The model applies learnable nonlinear transformations, including soft-thresholding and gain-phase modulation, directly to wavelet coefficients. It also includes a differentiable wavelet basis selection mechanism, enabling adaptive processing using families such as Haar, Daubechies, and Biorthogonal wavelets. Implemented in PyTorch with full 3D support, the model maintains a spectral pipeline without spatial convolutions or attention. On synthetic 3D denoising and natural language tasks from the GLUE benchmark, including SST-2 sentiment classification, the model achieves 89.3 percent accuracy, close to a 4-layer Transformer baseline (90.1 percent), while using 72 percent fewer parameters and 58 percent less peak memory. Faster early convergence is observed due to spectral sparsity priors. In contrast to the quadratic complexity of self-attention and large matrix multiplications in Transformers, our approach uses linear-time wavelet transforms and pointwise nonlinearities, significantly reducing inference cost. This yields a compact, interpretable, and efficient alternative to neural models. Our results support the viability of principled spectral learning in both vision and language tasks, offering new directions for model design without overparameterized architectures.
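The core spectral operations are simple enough to sketch. Below is a minimal, hypothetical PyTorch illustration of the idea: a single-level Haar transform, learnable soft-thresholding and gain applied to the detail coefficients, and the inverse transform. The class and parameter names are ours, not the paper's, and the full model additionally modulates phase, selects among wavelet families, and supports 3D inputs.

```python
# Hypothetical sketch of a learnable spectral nonlinearity: single-level
# Haar DWT, soft-threshold + gain on detail coefficients, inverse DWT.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.raw_tau = nn.Parameter(torch.tensor(0.1))  # threshold, pre-softplus
        self.gain = nn.Parameter(torch.tensor(1.0))     # learnable gain on details

    def forward(self, x):                    # x: (batch, even_length)
        s = 2 ** -0.5
        a = (x[:, ::2] + x[:, 1::2]) * s     # Haar approximation coefficients
        d = (x[:, ::2] - x[:, 1::2]) * s     # Haar detail coefficients
        tau = F.softplus(self.raw_tau)       # keep the threshold positive
        d = torch.sign(d) * F.relu(d.abs() - tau)  # learnable soft-thresholding
        d = self.gain * d                    # gain modulation of the detail band
        even = (a + d) * s                   # inverse Haar reconstruction
        odd = (a - d) * s
        return torch.stack((even, odd), dim=-1).reshape(x.shape)

out = SpectralLayer()(torch.randn(2, 64))    # toy usage: (2, 64) -> (2, 64)
```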
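The differentiable basis selection can likewise be sketched as a softmax-weighted blend of transforms under candidate filter banks, so the basis choice receives gradients end to end. The Haar and Daubechies-2 lowpass taps below are standard; the blending scheme and names are our assumptions, not necessarily the paper's exact mechanism.

```python
# Hypothetical sketch of differentiable wavelet basis selection: run the
# analysis under each candidate filter bank and blend with softmax weights.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisSelector(nn.Module):
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))              # one logit per basis
        self.register_buffer("haar", torch.tensor([1.0, 1.0]) / math.sqrt(2.0))
        self.register_buffer("db2", torch.tensor([0.4829629, 0.8365163,
                                                  0.2241439, -0.1294095]))

    def forward(self, x):                                       # x: (batch, 1, even_length)
        weights = F.softmax(self.logits, dim=0)                 # differentiable basis choice
        outs = []
        for h in (self.haar, self.db2):
            k = h.flip(0).view(1, 1, -1)                        # lowpass taps as a conv kernel
            pad = k.shape[-1] - 1                               # left-pad so output lengths match
            outs.append(F.conv1d(F.pad(x, (pad, 0)), k, stride=2))
        return weights[0] * outs[0] + weights[1] * outs[1]      # blended approximation coeffs

coeffs = BasisSelector()(torch.randn(4, 1, 64))                 # -> (4, 1, 32)
```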
Related papers
- A Stable Whitening Optimizer for Efficient Neural Network Training [101.89246340672246]
Building on the Shampoo family of algorithms, we identify and alleviate three key issues, resulting in the proposed SPlus method. First, we find that naive Shampoo is prone to divergence when matrix inverses are cached for long periods. Second, we adapt a shape-aware scaling to enable learning rate transfer across network width. Third, we find that high learning rates result in large parameter noise, and propose a simple iterate-averaging scheme which unblocks faster learning (see the sketch after this entry).
arXiv Detail & Related papers (2025-06-08T18:43:31Z)
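As a generic illustration of iterate averaging (the idea mentioned above, not the SPlus algorithm itself), one can maintain an exponential moving average of the parameters and evaluate with the averaged weights:

```python
# Generic iterate (Polyak/EMA) averaging sketch in PyTorch -- an
# illustration of the idea only, not the SPlus algorithm.
import copy
import torch
import torch.nn as nn

def ema_update(avg_model: nn.Module, model: nn.Module, decay: float = 0.999):
    """Blend live parameters into a running average, in place."""
    with torch.no_grad():
        for p_avg, p in zip(avg_model.parameters(), model.parameters()):
            p_avg.mul_(decay).add_(p, alpha=1.0 - decay)

model = nn.Linear(16, 4)
avg_model = copy.deepcopy(model)                    # holds the averaged weights
opt = torch.optim.SGD(model.parameters(), lr=0.5)   # deliberately large step size

for step in range(100):
    loss = model(torch.randn(32, 16)).pow(2).mean() # toy objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(avg_model, model)                    # evaluate with avg_model, not model
```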
- 3D Wavelet Convolutions with Extended Receptive Fields for Hyperspectral Image Classification [12.168520751389622]
Deep neural networks face numerous challenges in hyperspectral image classification. This paper proposes WCNet, an improved 3D-DenseNet model integrated with wavelet transforms. Experimental results demonstrate superior performance on the IN, UP, and KSC datasets.
arXiv Detail & Related papers (2025-04-15T01:39:42Z)
- LOGLO-FNO: Efficient Learning of Local and Global Features in Fourier Neural Operators [20.77877474840923]
Capturing high-frequency information is a critical challenge in machine learning. Deep neural nets exhibit the so-called spectral bias toward learning low-frequency components. We propose a novel frequency-sensitive loss term based on radially binned spectral errors (see the sketch after this entry).
arXiv Detail & Related papers (2025-04-05T19:35:04Z)
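A radially binned spectral error can be sketched as follows: FFT prediction and target, take squared magnitude errors, and average them within concentric frequency bins. This is a generic reading of the idea; the paper's exact loss may differ.

```python
# Hypothetical sketch of a radially binned spectral loss.
import torch

def radial_spectral_loss(pred, target, n_bins: int = 8):
    """Mean squared FFT-magnitude error, averaged per radial frequency band."""
    # pred, target: (batch, H, W)
    err = (torch.fft.fft2(pred) - torch.fft.fft2(target)).abs() ** 2
    H, W = pred.shape[-2:]
    fy = torch.fft.fftfreq(H, device=pred.device)        # vertical frequencies
    fx = torch.fft.fftfreq(W, device=pred.device)        # horizontal frequencies
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    bins = torch.clamp((radius / radius.max() * n_bins).long(), max=n_bins - 1)
    loss = 0.0
    for b in range(n_bins):                              # equal weight per band, so
        mask = bins == b                                 # high frequencies are not
        if mask.any():                                   # drowned out by low ones
            loss = loss + err[..., mask].mean()
    return loss / n_bins

loss = radial_spectral_loss(torch.randn(2, 32, 32), torch.randn(2, 32, 32))
```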
- WaveFormer: A 3D Transformer with Wavelet-Driven Feature Representation for Efficient Medical Image Segmentation [0.5312470855079862]
We present WaveFormer, a novel 3D transformer for medical images. It is inspired by the top-down mechanism of the human visual recognition system. It preserves both global context and high-frequency details while replacing heavy upsampling layers with efficient wavelet-based summarization and reconstruction.
arXiv Detail & Related papers (2025-03-31T06:28:41Z)
- Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z)
- Geometric Algebra Planes: Convex Implicit Neural Volumes [70.12234371845445]
We show that GA-Planes is equivalent to a sparse low-rank factor plus a low-resolution matrix (see the sketch after this entry).
We also show that GA-Planes can be adapted for many existing representations.
arXiv Detail & Related papers (2024-11-20T18:21:58Z)
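A generic reading of the "sparse low-rank factor plus low-resolution matrix" claim, ignoring the sparsity constraint for brevity: parameterize a dense feature grid as a per-channel low-rank product plus an upsampled coarse grid. Names and shapes below are our assumptions, not GA-Planes' actual formulation.

```python
# Hypothetical sketch: dense grid = low-rank factor + upsampled coarse grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankPlusLowRes(nn.Module):
    """Feature grid = per-channel low-rank product + upsampled coarse grid."""
    def __init__(self, size=128, rank=4, coarse=16, channels=8):
        super().__init__()
        self.size = size
        self.u = nn.Parameter(torch.randn(channels, size, rank) * 0.1)  # row factors
        self.v = nn.Parameter(torch.randn(channels, size, rank) * 0.1)  # column factors
        self.coarse = nn.Parameter(torch.randn(1, channels, coarse, coarse) * 0.1)

    def grid(self):
        low_rank = self.u @ self.v.transpose(-1, -2)        # (C, size, size), rank-limited
        low_res = F.interpolate(self.coarse, size=(self.size, self.size),
                                mode="bilinear", align_corners=False)[0]
        return low_rank + low_res                           # dense grid, few parameters

g = LowRankPlusLowRes().grid()   # (8, 128, 128) from far fewer parameters than a dense grid
```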
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Deep Spectral Meshes: Multi-Frequency Facial Mesh Processing with Graph Neural Networks [1.170907599257096]
Spectral meshes are introduced as a method to decompose mesh deformations into low- and high-frequency deformations.
A parametric model for 3D facial mesh synthesis is built upon the proposed framework.
Our model takes further advantage of spectral partitioning by representing different frequency levels with disparate, more suitable representations (see the sketch after this entry).
arXiv Detail & Related papers (2024-02-15T23:17:08Z)
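The low/high-frequency mesh decomposition can be illustrated generically with a graph Laplacian eigenbasis: project a per-vertex signal onto the k smoothest eigenvectors for the low-frequency part and keep the residual as the high-frequency part. A hypothetical sketch, not the paper's implementation:

```python
# Hypothetical sketch: spectral low/high split via the graph Laplacian.
import torch

def spectral_split(adj: torch.Tensor, signal: torch.Tensor, k: int):
    """Split a per-vertex signal into low- and high-frequency parts."""
    # adj: (V, V) symmetric adjacency; signal: (V, C) per-vertex features
    lap = torch.diag(adj.sum(dim=1)) - adj       # combinatorial graph Laplacian
    _, eigvecs = torch.linalg.eigh(lap)          # eigenvalues ascending
    basis = eigvecs[:, :k]                       # k smoothest modes = low frequencies
    low = basis @ (basis.T @ signal)             # projection onto that subspace
    return low, signal - low                     # low- and high-frequency parts

# Toy usage: a 4-vertex path graph with one feature per vertex.
adj = torch.zeros(4, 4)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
low, high = spectral_split(adj, torch.tensor([[0.0], [3.0], [1.0], [4.0]]), k=2)
```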
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- FAMLP: A Frequency-Aware MLP-Like Architecture For Domain Generalization [73.41395947275473]
We propose a novel frequency-aware architecture, in which domain-specific features are filtered out in the transformed frequency domain (see the sketch after this entry).
Experiments on three benchmarks demonstrate significant performance gains, outperforming state-of-the-art methods by margins of 3%, 4%, and 9%, respectively.
arXiv Detail & Related papers (2022-03-24T07:26:29Z)
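Filtering features in the frequency domain can be sketched generically as a learnable soft mask applied between a forward and inverse FFT; FAMLP's actual architecture is more involved, so treat this only as an illustration of the principle.

```python
# Hypothetical sketch of frequency-domain feature filtering with a
# learnable soft mask -- an illustration, not the FAMLP architecture.
import torch
import torch.nn as nn

class FrequencyFilter(nn.Module):
    """Learnable soft mask over rfft bins of the feature dimension."""
    def __init__(self, dim: int):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.ones(dim // 2 + 1))  # one per rfft bin

    def forward(self, x):                                   # x: (batch, tokens, dim)
        spec = torch.fft.rfft(x, dim=-1)                    # to the frequency domain
        spec = spec * torch.sigmoid(self.mask_logits)       # soft pass/stop per bin
        return torch.fft.irfft(spec, n=x.shape[-1], dim=-1) # back to feature space

y = FrequencyFilter(64)(torch.randn(2, 10, 64))             # shape preserved: (2, 10, 64)
```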
- High-Fidelity and Low-Latency Universal Neural Vocoder based on Multiband WaveRNN with Data-Driven Linear Prediction for Discrete Waveform Modeling [38.828260316517536]
This paper presents a novel universal neural vocoder framework based on multiband WaveRNN with data-driven linear prediction for discrete waveform modeling (MWDLP). Experiments demonstrate that the proposed MWDLP framework generates high-fidelity synthetic speech for seen and unseen speakers and/or languages when trained on data from 300 speakers, including clean and noisy/reverberant conditions.
arXiv Detail & Related papers (2021-05-20T16:02:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.