Objective Value Change and Shape-Based Accelerated Optimization for the Neural Network Approximation
- URL: http://arxiv.org/abs/2508.20290v1
- Date: Wed, 27 Aug 2025 21:51:54 GMT
- Title: Objective Value Change and Shape-Based Accelerated Optimization for the Neural Network Approximation
- Authors: Pengcheng Xie, Zihao Zhou, Zijian Zhou
- Abstract summary: This paper introduces VC (value change), a novel metric of an objective function f that measures the difficulty and approximation effects of a neural network approximation task. VC addresses the unpredictable local performance of neural networks by providing a quantifiable measure of local value changes in network behavior. In addition, the authors propose a VC-based metric that measures the distance between two functions from the perspective of variation.
- Score: 15.818533563804216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces VC (value change), a novel metric of an objective function f that measures the difficulty and approximation effects of a neural network approximation task, and numerically supports characterizing the local performance and behavior of neural network approximation. Neural networks often suffer from unpredictable local performance, which can hinder their reliability in critical applications. VC addresses this issue by providing a quantifiable measure of local value changes in network behavior, offering insights into the stability and performance of the neural-network approximation. We investigate fundamental theoretical properties of VC and identify two intriguing phenomena in neural network approximation: the VC-tendency and the minority-tendency. These trends characterize how pointwise errors evolve in relation to the distribution of VC during the approximation process. In addition, we propose a novel VC-based metric that measures the distance between two functions from the perspective of variation. Building upon this metric, we further propose a new preprocessing framework for neural network approximation. Numerical results, including a real-world experiment and a PDE-related scientific problem, support our findings and the preprocessing acceleration method.
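The abstract does not give the exact definition of VC, but the idea of quantifying local value change of an objective function f can be sketched with a simple finite-difference measure. This is an illustrative stand-in, not the paper's definition; the function `value_change` and the step size `h` are hypothetical:

```python
import numpy as np

def value_change(f, xs, h=1e-3):
    """Hypothetical local value-change measure: magnitude of the
    finite-difference variation of f at sample points xs.
    (Illustrative only; the paper's exact VC definition may differ.)"""
    xs = np.asarray(xs, dtype=float)
    return np.abs(f(xs + h) - f(xs)) / h

# A rapidly oscillating function should show larger local value change
# than a slowly varying one over the same sample points.
xs = np.linspace(0.0, 1.0, 5)
vc_smooth = value_change(np.cos, xs)                    # slowly varying
vc_sharp = value_change(lambda x: np.cos(20 * x), xs)   # rapidly varying
```

Under this reading, regions where the measure is large would be the "hard" regions for a network to approximate, which is consistent with the VC-tendency the abstract describes.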
Related papers
- Multiresolution Analysis and Statistical Thresholding on Dynamic Networks [49.09073800467438]
ANIE (Adaptive Network Intensity Estimation) is a multi-resolution framework designed to automatically identify the time scales at which network structure evolves. We show that ANIE adapts to the appropriate time resolution and is able to capture sharp structural changes while remaining robust to noise.
arXiv Detail & Related papers (2025-06-01T22:55:55Z) - Certified Neural Approximations of Nonlinear Dynamics [52.79163248326912]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Quantification of Uncertainties in Probabilistic Deep Neural Network by Implementing Boosting of Variational Inference [0.38366697175402226]
Boosted Bayesian Neural Networks (BBNN) is a novel approach that enhances neural network weight distribution approximations. BBNN achieves 5% higher accuracy compared to conventional neural networks.
arXiv Detail & Related papers (2025-03-18T05:11:21Z) - Imitation Learning of MPC with Neural Networks: Error Guarantees and Sparsification [5.260346080244568]
We present a framework for bounding the approximation error in imitation model predictive controllers utilizing neural networks. We discuss how this method can be used to design a stable neural network controller with performance guarantees.
arXiv Detail & Related papers (2025-01-07T10:18:37Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks [0.0]
The SEF (Shifting the Error Function) method presented in this paper is a new approach for computing prediction intervals with neural networks.
The proposed approach involves training a single neural network three times, thus generating an estimate along with the corresponding upper and lower bounds for a given problem.
This innovative process effectively produces PIs, resulting in a robust and efficient technique for uncertainty quantification.
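The SEF abstract describes training a single model three times to obtain a point estimate plus upper and lower bounds. The paper's exact shifted error function is not reproduced here; the sketch below is a minimal stand-in that shifts the targets of a squared-error fit by a constant margin (the model, the shift value, and the training loop are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 0.1, 200)  # noisy linear data

def fit(x, y, shift=0.0, lr=0.1, steps=500):
    """Fit y ~ w*x by gradient descent on a squared error whose
    targets are shifted by a constant (a stand-in for SEF's
    shifted error function; not the paper's exact procedure)."""
    w = 0.0
    t = y + shift
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - t) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

w_mid = fit(x, y)          # point estimate
w_up = fit(x, y, +0.3)     # hypothetical upper-bound model
w_lo = fit(x, y, -0.3)     # hypothetical lower-bound model
```

Training the same architecture three times with shifted errors yields three predictors whose outputs bracket the point estimate, which is the interval-producing behavior the abstract describes.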
arXiv Detail & Related papers (2024-09-08T19:46:45Z) - Spiking Neural Networks with Consistent Mapping Relations Allow High-Accuracy Inference [9.667807887916132]
Spike-based neuromorphic hardware has demonstrated substantial potential in low energy consumption and efficient inference.
Direct training of deep spiking neural networks is challenging, and conversion-based methods still require substantial time delay owing to unresolved conversion errors.
arXiv Detail & Related papers (2024-06-08T06:40:00Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks [4.228167013618626]
We develop a novel approach using deep neural networks to reconstruct the conductivity distribution in elliptic problems.
We provide a thorough analysis of the deep neural network approximations of the conductivity for both continuous and empirical losses.
arXiv Detail & Related papers (2023-03-29T04:43:03Z) - Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z) - Approximation Power of Deep Neural Networks: an explanatory mathematical survey [0.0]
The survey examines how effectively neural networks approximate target functions and identifies conditions under which they outperform traditional approximation methods. Key topics include the nonlinear, compositional structure of deep networks and the formalization of neural network tasks as optimization problems in regression and classification settings. The survey explores the density of neural networks in the space of continuous functions, comparing the approximation capabilities of deep ReLU networks with those of other approximation methods.
arXiv Detail & Related papers (2022-07-19T18:47:44Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.