Smooth Integer Encoding via Integral Balance
- URL: http://arxiv.org/abs/2505.02259v1
- Date: Mon, 28 Apr 2025 20:23:53 GMT
- Title: Smooth Integer Encoding via Integral Balance
- Authors: Stanislav Semenov
- Abstract summary: We introduce a novel method for encoding integers using smooth real-valued functions. Our approach encodes the number N in the set of natural numbers through the cumulative balance of a smooth function f_N(t). The total integral I(N) converges to zero as N tends to infinity, and the integer can be recovered as the minimal point of near-cancellation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel method for encoding integers using smooth real-valued functions whose integral properties implicitly reflect discrete quantities. In contrast to classical representations, where the integer appears as an explicit parameter, our approach encodes the number N in the set of natural numbers through the cumulative balance of a smooth function f_N(t), constructed from localized Gaussian bumps with alternating and decaying coefficients. The total integral I(N) converges to zero as N tends to infinity, and the integer can be recovered as the minimal point of near-cancellation. This method enables continuous and differentiable representations of discrete states, supports recovery through spline-based or analytical inversion, and extends naturally to multidimensional tuples (N1, N2, ...). We analyze the structure and convergence of the encoding series, demonstrate numerical construction of the integral map I(N), and develop procedures for integer recovery via numerical inversion. The resulting framework opens a path toward embedding discrete logic within continuous optimization pipelines, machine learning architectures, and smooth symbolic computation.
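To make the construction concrete, here is a minimal numerical sketch in Python. The coefficient schedule c_k = (-1)^k / k, the bump width SIGMA, and the ln(2) normalization that makes I(N) tend to zero are illustrative assumptions, not the paper's exact construction; recovery follows the numerical inversion of the integral map described in the abstract.

```python
import numpy as np

SIGMA = 0.1  # assumed width of each Gaussian bump

def f_N(t, N):
    """Smooth encoding of N: Gaussian bumps at t = 1..N with
    alternating, decaying coefficients c_k = (-1)^k / k (an assumption)."""
    total = np.zeros_like(t, dtype=float)
    for k in range(1, N + 1):
        total += (-1) ** k / k * np.exp(-((t - k) ** 2) / (2 * SIGMA**2))
    return total

def I(N):
    """Total integral of f_N over the real line, in closed form: each bump
    integrates to c_k * SIGMA * sqrt(2*pi). Adding ln(2), the negative of
    the series limit, normalizes I(N) so that I(N) -> 0 as N -> infinity."""
    partial = sum((-1) ** k / k for k in range(1, N + 1))
    return SIGMA * np.sqrt(2 * np.pi) * (partial + np.log(2))

def recover(target, N_max=10_000):
    """Numerical inversion of the integral map: recover the integer as the
    argmin of |I(N) - target| over a candidate range."""
    ks = np.arange(1, N_max + 1)
    partials = np.cumsum((-1.0) ** ks / ks)
    values = SIGMA * np.sqrt(2 * np.pi) * (partials + np.log(2))
    return int(np.argmin(np.abs(values - target))) + 1

print(recover(I(137)))  # -> 137
```

With these assumptions |I(N)| decays like SIGMA * sqrt(2*pi) / (2N), so consecutive values stay distinct and the argmin inversion is well defined.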
Related papers
- LFA applied to CNNs: Efficient Singular Value Decomposition of Convolutional Mappings by Local Fourier Analysis [4.69726714177332]
Singular values of convolutional mappings encode interesting spectral properties. Computing them is typically very resource-intensive. We propose an approach of complexity O(N) based on local Fourier analysis (the underlying circulant/DFT fact is sketched below).
arXiv Detail & Related papers (2025-06-05T22:10:01Z)
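As a hedged aside on the Fourier connection: a single-channel circular convolution is a circulant matrix, which the DFT diagonalizes, so its singular values are just the magnitudes of the kernel's DFT. This sketch illustrates that classical fact only; the paper's O(N) local-Fourier algorithm for general convolutional mappings is more involved.

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
kernel = np.zeros(n)
kernel[:5] = rng.standard_normal(5)  # short filter, zero-padded to length n

# Fourier route: the DFT diagonalizes circulant matrices, so the singular
# values are the magnitudes of the kernel's DFT -- O(n log n) total.
sv_fft = np.sort(np.abs(np.fft.fft(kernel)))[::-1]

# Direct route for comparison: materialize the circulant matrix and run a
# dense SVD -- O(n^3).
C = np.stack([np.roll(kernel, j) for j in range(n)], axis=1)
sv_svd = np.linalg.svd(C, compute_uv=False)

assert np.allclose(sv_fft, sv_svd)
```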
- Calibrating Neural Networks' parameters through Optimal Contraction in a Prediction Problem [0.0]
The paper details how a recurrent neural network (RNN) can be transformed into a contraction on a domain where its parameters are linear.
It then demonstrates that a prediction problem modeled through an RNN, with a specific regularization term in the loss function, can have its first-order conditions expressed analytically.
We establish that, if certain conditions are met, optimal parameters exist and can be found to any desired precision through a straightforward algorithm (a generic fixed-point sketch follows below).
arXiv Detail & Related papers (2024-06-15T18:08:04Z)
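To illustrate the contraction argument generically (not the paper's specific RNN transformation), here is a minimal sketch of Banach fixed-point iteration on a toy affine contraction:

```python
import numpy as np

def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """If T is a contraction (Lipschitz constant q < 1), the iteration
    x_{k+1} = T(x_k) converges linearly to the unique fixed point, so any
    desired precision is reached in finitely many steps."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy affine contraction T(x) = A x + b with spectral norm ||A|| < 1;
# its fixed point is (I - A)^{-1} b.
A = np.array([[0.5, 0.1], [0.0, 0.3]])
b = np.array([1.0, -1.0])
x_star = fixed_point(lambda x: A @ x + b, np.zeros(2))
assert np.allclose(x_star, np.linalg.solve(np.eye(2) - A, b))
```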
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values (a classical TR ingredient is sketched below).
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
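For context, here is a minimal sketch of one classical trust-region ingredient, the Cauchy-point step (Nocedal & Wright, Algorithm 4.2), used here with a noisy Hessian estimate to mimic inexactness. The paper's precise inexactness conditions and the ARC variant are not reproduced.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model m(p) = g.p + 0.5 p.B p along -g,
    subject to ||p|| <= delta. B may be an inexact Hessian estimate."""
    gn = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (delta * gBg))
    return -tau * (delta / gn) * g

# Toy quadratic f(x) = 0.5 x.H x with exact gradient but noisy Hessian.
rng = np.random.default_rng(1)
H = np.diag([10.0, 1.0])
x = np.array([3.0, -2.0])
g = H @ x
B = H + 0.1 * rng.standard_normal((2, 2))
B = 0.5 * (B + B.T)                  # symmetrized inexact Hessian
p = cauchy_point(g, B, delta=1.0)
assert g @ p + 0.5 * p @ B @ p < 0   # the model strictly decreases
```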
- Deep ReLU networks and high-order finite element methods II: Chebyshev emulation [0.0]
We show expression rates and stability in Sobolev norms for deep feedforward ReLU neural networks (NNs).
Novel constructions of ReLU NN surrogates encoding function approximations in terms of Chebyshev expansion coefficients are developed.
Bounds on expression rates and stability are obtained that are superior to those of constructions based on ReLU NN emulations of monomials (the Chebyshev coefficients being encoded are illustrated below).
arXiv Detail & Related papers (2023-10-11T07:38:37Z)
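As a hedged illustration of the objects being emulated (not the ReLU construction itself), this sketch computes the Chebyshev expansion coefficients of an analytic function; their rapid decay is what such surrogates exploit. The target function f is hypothetical.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(x) * np.sin(3 * x)  # hypothetical analytic target

# Degree-n Chebyshev interpolant at first-kind Chebyshev points.
n = 20
nodes = np.cos(np.pi * (np.arange(n + 1) + 0.5) / (n + 1))
coeffs = C.chebfit(nodes, f(nodes), n)

# For analytic f the coefficients decay geometrically, which underlies the
# superior expression-rate bounds cited above.
x = np.linspace(-1, 1, 1001)
print(np.max(np.abs(C.chebval(x, coeffs) - f(x))))  # tiny, near machine eps
```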
- An inexact LPA for DC composite optimization and application to matrix completions with outliers [5.746154410100363]
This paper concerns a class of DC (difference-of-convex) composite optimization problems.
By leveraging the composite structure, we provide a condition for the potential function to have the KL property with exponent $1/2$ at the iterate sequence (a plain DCA baseline is sketched below).
arXiv Detail & Related papers (2023-03-29T16:15:34Z)
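For orientation, here is a plain DCA (difference-of-convex algorithm) iteration on a toy one-dimensional DC function; the paper's inexact linearized proximal algorithm and its KL-based analysis are substantially more general.

```python
import numpy as np

def dca(x0, steps=50):
    """DCA for f(x) = g(x) - h(x) with g(x) = x^2 and h(x) = |x|, both
    convex. Each iteration linearizes h at x_k via a subgradient y_k and
    exactly minimizes the convex majorizer g(x) - y_k * x."""
    x = x0
    for _ in range(steps):
        y = np.sign(x)   # subgradient of |x| (0 at x = 0)
        x = y / 2.0      # argmin_x of x^2 - y * x
    return x

print(dca(3.0), dca(-0.2))  # -> 0.5 -0.5, the two global minima of x^2 - |x|
```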
- D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solving Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z)
- Score-based Diffusion Models in Function Space [137.70916238028306]
Diffusion models have recently emerged as a powerful framework for generative modeling. This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost that is independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Projective Integral Updates for High-Dimensional Variational Inference [0.0]
Variational inference seeks to improve the quantification of uncertainty in predictions by optimizing a simplified distribution over parameters to stand in for the full posterior.
This work introduces a fixed-point optimization for variational inference that is applicable when every feasible log density can be expressed as a linear combination of functions from a given basis.
A PyTorch implementation of QNVB allows for better control over model uncertainty during training than competing methods.
arXiv Detail & Related papers (2023-01-20T00:38:15Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure, on input/output data from various computational problem classes, yields iterations similar to those of Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations (a plain RK4 recurrence is sketched below).
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
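As a point of reference for the recurrence being generalized, here is a classical RK4 step written as the staged update the R2N2 superstructure parameterizes; this is the textbook solver, not the learned architecture.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step: each stage feeds f evaluated
    at a linear combination of earlier stage outputs -- the recurrence
    pattern the R2N2 superstructure generalizes with trainable weights."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy ODE y' = -y, y(0) = 1; exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y - np.exp(-1.0))  # global error around 1e-8
```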
- Optimization-based Block Coordinate Gradient Coding for Mitigating Partial Stragglers in Distributed Learning [58.91954425047425]
This paper aims to design a new gradient coding scheme for mitigating partial stragglers in distributed learning.
We propose a gradient coordinate coding scheme with L coding parameters representing L possibly different diversities for the L coordinates, which can generate most existing gradient coding schemes (a replication-based baseline is sketched below).
arXiv Detail & Related papers (2022-06-06T09:25:40Z)
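As background, here is a hedged sketch of the simplest replication-style gradient code: each data partition is placed on two workers, so the full gradient can be assembled despite any single straggler. The optimized block-coordinate scheme of the paper differs; the partition gradients here are toy data.

```python
import numpy as np

rng = np.random.default_rng(2)
P = 6                               # data partitions (one worker each here)
g = rng.standard_normal((P, 3))     # toy per-partition gradients

# Worker w holds partitions w and (w + 1) mod P: replication factor 2,
# so any single straggler can be tolerated.
holds = {w: {w, (w + 1) % P} for w in range(P)}

straggler = 4                       # hypothetical slow worker
responsive = {w: s for w, s in holds.items() if w != straggler}

# Decode: take each partition's partial gradient from any responsive holder.
total = np.zeros(3)
for p in range(P):
    owner = next(w for w, s in responsive.items() if p in s)  # raises if uncovered
    total += g[p]                   # partial gradient as reported by `owner`
assert np.allclose(total, g.sum(axis=0))
```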
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees (the classical linear baseline is sketched below).
arXiv Detail & Related papers (2020-03-19T17:21:02Z)
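For contrast with the neural approach, here is the classical baseline it generalizes: for a stable linear system x' = A x, solving the Lyapunov equation A^T P + P A = -Q yields a quadratic Lyapunov function V(x) = x^T P x. The matrices A and Q are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)

# Solve A^T P + P A = -Q; then V(x) = x^T P x is a Lyapunov function,
# since dV/dt = x^T (A^T P + P A) x = -x^T Q x < 0 for x != 0.
P = solve_continuous_lyapunov(A.T, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)  # V is positive definite
```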
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.