PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for
Symbolic Regression
- URL: http://arxiv.org/abs/2401.15103v1
- Date: Thu, 25 Jan 2024 11:53:35 GMT
- Title: PruneSymNet: A Symbolic Neural Network and Pruning Algorithm for
Symbolic Regression
- Authors: Min Wu, Weijun Li, Lina Yu, Wenqiang Li, Jingyi Liu, Yanjie Li, Meilan
Hao
- Abstract summary: Symbolic regression aims to derive interpretable symbolic expressions from data in order to better understand and interpret data.
In this study, a symbolic network called PruneSymNet is proposed for symbolic regression.
A greedy pruning algorithm is proposed to prune the network into a subnetwork while ensuring the accuracy of data fitting.
- Score: 14.38941096136575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symbolic regression aims to derive interpretable symbolic expressions from
data in order to better understand and interpret data.
In this study, a symbolic network called PruneSymNet is proposed for symbolic
regression. It is a novel neural network whose activation functions consist of
common elementary functions and operators. The whole network is differentiable
and can be trained by gradient descent. Each subnetwork of the network
corresponds to an expression, and the goal is to extract such a subnetwork to
obtain the desired symbolic expression.
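To make this concrete, the following minimal sketch (in PyTorch) shows how a layer whose units are elementary functions and operators keeps the whole network differentiable; the names `SymbolicLayer` and `PruneSymNetSketch`, the choice of operators, and the layer sizes are illustrative assumptions, not the authors' code.

```python
# A hypothetical sketch of a symbolic network in the spirit of the abstract:
# each unit applies an elementary function or operator, so the whole network
# stays differentiable and a subnetwork corresponds to an expression.
import torch
import torch.nn as nn


class SymbolicLayer(nn.Module):
    """Linearly mixes inputs, then applies elementary functions/operators."""

    def __init__(self, in_dim: int):
        super().__init__()
        # Unary units: identity, sin, cos, exp; one binary unit: multiplication.
        self.n_unary, self.n_binary = 4, 1
        out_args = self.n_unary + 2 * self.n_binary  # arguments to fill
        self.linear = nn.Linear(in_dim, out_args)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)
        u = z[:, : self.n_unary]
        a, b = z[:, self.n_unary], z[:, self.n_unary + 1]
        outputs = [
            u[:, 0],                                      # identity
            torch.sin(u[:, 1]),                           # sin
            torch.cos(u[:, 2]),                           # cos
            torch.exp(torch.clamp(u[:, 3], max=10.0)),    # exp (clamped for stability)
            a * b,                                        # multiplication operator
        ]
        return torch.stack(outputs, dim=1)


class PruneSymNetSketch(nn.Module):
    """Stack of symbolic layers followed by a linear readout."""

    def __init__(self, in_dim: int, depth: int = 2):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(depth):
            layer = SymbolicLayer(dim)
            layers.append(layer)
            dim = layer.n_unary + layer.n_binary  # outputs of this layer
        self.layers = nn.ModuleList(layers)
        self.readout = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return self.readout(x)
```

Under these assumptions the sketch can be fit end-to-end with a standard optimizer (e.g., Adam on a mean-squared-error loss); an extracted expression would then correspond to the subnetwork of edges that survive pruning.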
Therefore, a greedy pruning algorithm is proposed to prune the network into a
subnetwork while maintaining the accuracy of data fitting. The greedy pruning
algorithm preserves the edge with the least loss at each pruning step, but a
greedy algorithm often cannot reach the optimal solution. To alleviate this
problem, beam search is combined with the pruning so that multiple candidate
expressions are obtained at each step, and the expression with the smallest
loss is selected as the final result. The proposed algorithm was tested on
public datasets and compared with current popular algorithms; the results show
that it achieves better accuracy.
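A hedged sketch of greedy pruning combined with beam search, as described above, is given below; the function name `beam_greedy_prune`, the edge-set representation, and the `loss_fn` interface are assumptions made for illustration, not the paper's implementation.

```python
# Illustrative sketch: prune edges one at a time while keeping a beam of
# low-loss candidate subnetworks, then return the candidate with the
# smallest data-fitting loss.
from typing import Callable, List, Set, Tuple


def beam_greedy_prune(
    n_edges: int,
    keep_edges: int,
    loss_fn: Callable[[Set[int]], float],  # fitting loss of the subnetwork using these edges
    beam_width: int = 3,
) -> Tuple[Set[int], float]:
    """Greedy pruning with beam search over retained edge sets (a sketch)."""
    full = set(range(n_edges))
    beam: List[Tuple[float, Set[int]]] = [(loss_fn(full), full)]
    while len(beam[0][1]) > keep_edges:
        candidates: List[Tuple[float, Set[int]]] = []
        for _, edges in beam:
            for e in edges:
                pruned = edges - {e}                 # try removing one more edge
                candidates.append((loss_fn(pruned), pruned))
        # Greedy step with beam search: keep the beam_width subnetworks
        # whose fitting loss increased the least.
        candidates.sort(key=lambda t: t[0])
        beam = candidates[:beam_width]
    best_loss, best_edges = min(beam, key=lambda t: t[0])
    return best_edges, best_loss
```

For example, `loss_fn` could evaluate the network's mean-squared error on the training data after zeroing the removed edges; at every step the beam keeps only the `beam_width` candidates with the smallest loss, and the lowest-loss subnetwork is returned at the end.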
Related papers
- Graph Convolutional Branch and Bound [1.8966938152549224]
This article demonstrates the effectiveness of employing a deep learning model in an optimization pipeline.
In this context, neural networks can be leveraged to rapidly acquire valuable information.
arXiv Detail & Related papers (2024-06-05T09:42:43Z) - Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We characterize the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - Alleviate Exposure Bias in Sequence Prediction with Recurrent Neural
Networks [47.52214243454995]
A popular strategy to train recurrent neural networks (RNNs) is to take the ground truth as input at each time step.
We propose a fully differentiable training algorithm for RNNs to better capture long-term dependencies.
arXiv Detail & Related papers (2021-03-22T06:15:22Z) - Rotation Averaging with Attention Graph Neural Networks [4.408728798697341]
We propose a real-time and robust solution to large-scale multiple rotation averaging.
Our method uses all observations, suppressing outlier effects through weighted averaging and an attention mechanism within the network design.
The result is a network that is faster, more robust, and can be trained with fewer samples than the previous neural approach.
arXiv Detail & Related papers (2020-10-14T02:07:19Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Pruning neural networks without any data by iteratively conserving
synaptic flow [27.849332212178847]
Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy.
Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks.
We provide an affirmative answer to the question of whether such subnetworks can be identified without training, through theory-driven algorithm design.
arXiv Detail & Related papers (2020-06-09T19:21:57Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.