An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set
- URL: http://arxiv.org/abs/2005.03266v1
- Date: Thu, 7 May 2020 06:09:31 GMT
- Title: An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set
- Authors: Shovik Ganguly, Atrayee Chatterjee, Debasmita Bhoumik, Ritajit Majumdar
- Abstract summary: We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error.
Results show that the dependence of accuracy on the location of error is independent of the algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of incremental learning is to train an ANN algorithm in stages, as
and when newer training data arrives. Incremental learning is becoming
widespread in recent times with the advent of deep learning. Noise in the
training data reduces the accuracy of the algorithm. In this paper, we make an
empirical study of the effect of noise in the training phase. We numerically
show that the accuracy of the algorithm depends more on the location of the
error than on the percentage of error. Using a Perceptron, a Feed Forward Neural
Network and a Radial Basis Function Neural Network, we show that for the same
percentage of error, the accuracy of the algorithm varies significantly with
the location of error. Furthermore, our results show that the dependence of
accuracy on the location of error is independent of the algorithm. However,
the slope of the degradation curve decreases with more sophisticated algorithms.
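To make the experimental setup concrete, here is a minimal sketch of this kind of study, assuming a scikit-learn Perceptron trained incrementally with partial_fit on a synthetic binary dataset, five training stages, and a fixed fraction of flipped labels injected into exactly one stage. Every model, dataset, and parameter choice below is an illustrative assumption, not the authors' exact protocol.

```python
# Hedged sketch: incremental training in stages with label noise injected
# at one chosen stage, to compare accuracy by the *location* of the error.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

n_stages, noise_frac = 5, 0.30                      # assumed values
stages_X = np.array_split(X_train, n_stages)
stages_y = np.array_split(y_train, n_stages)

def accuracy_with_noise_at(noisy_stage):
    """Train incrementally; corrupt labels only in `noisy_stage`."""
    clf = Perceptron(random_state=0)
    for s, (Xs, ys) in enumerate(zip(stages_X, stages_y)):
        ys = ys.copy()
        if s == noisy_stage:                        # same noise percentage, different location
            idx = rng.choice(len(ys), int(noise_frac * len(ys)), replace=False)
            ys[idx] = 1 - ys[idx]                   # flip binary labels
        clf.partial_fit(Xs, ys, classes=np.array([0, 1]))
    return clf.score(X_test, y_test)

for stage in range(n_stages):
    print(f"noise in stage {stage}: test accuracy = {accuracy_with_noise_at(stage):.3f}")
```

Only the relative accuracies across stages are meaningful in such a toy; the paper runs the analogous comparison with a Perceptron, a Feed Forward Neural Network and a Radial Basis Function Neural Network.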
Related papers
- SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in neural networks.
We propose a novel entropy error function with smoothing l0 regularization for feed-forward neural networks.
Our work is novel in that it enables neural networks to learn effectively, producing more accurate predictions (a toy sketch of such a regularized loss follows this entry).
arXiv Detail & Related papers (2024-05-28T19:54:26Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
- Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot (a minimal refinement sketch follows this entry).
arXiv Detail & Related papers (2022-06-10T10:17:59Z) - Scalable computation of prediction intervals for neural networks via
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - Network Gradient Descent Algorithm for Decentralized Federated Learning [0.2867517731896504]
We study a fully decentralized federated learning algorithm: a novel network gradient descent (NGD) algorithm executed over a communication network.
In the NGD method, only statistics (e.g., parameter estimates) need to be communicated, minimizing privacy risk.
We find that both the learning rate and the network structure play significant roles in determining the NGD estimator's statistical efficiency (a toy decentralized sketch follows this entry).
arXiv Detail & Related papers (2022-05-06T02:53:31Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Demystifying Deep Neural Networks Through Interpretation: A Survey [3.566184392528658]
- Demystifying Deep Neural Networks Through Interpretation: A Survey [3.566184392528658]
Modern deep learning algorithms tend to optimize an objective metric, such as minimizing a cross-entropy loss on a training dataset, in order to learn.
The problem is that a single metric is an incomplete description of real-world tasks.
Work has been done on interpretability to provide insights into the behavior and thought process of neural networks.
arXiv Detail & Related papers (2020-12-13T17:56:41Z) - Improving Bayesian Network Structure Learning in the Presence of
Measurement Error [11.103936437655575]
This paper describes an algorithm that can be added as an additional learning phase at the end of any structure learning algorithm.
The proposed correction algorithm successfully improves the graphical score of four well-established structure learning algorithms.
arXiv Detail & Related papers (2020-11-19T11:27:47Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)