A Neural Network Approach for Online Nonlinear Neyman-Pearson
Classification
- URL: http://arxiv.org/abs/2006.08001v2
- Date: Sun, 30 Aug 2020 12:12:54 GMT
- Title: A Neural Network Approach for Online Nonlinear Neyman-Pearson
Classification
- Authors: Basarbatu Can, Huseyin Ozkan
- Abstract summary: We propose a novel Neyman-Pearson (NP) classifier that is both online and nonlinear, the first of its kind in the literature.
The proposed classifier operates on a binary labeled data stream in an online manner and maximizes the detection power subject to a user-specified, controllable false positive rate.
Our algorithm is appropriate for large-scale data applications and provides reliable false positive rate control with real-time processing.
- Score: 3.6144103736375857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel Neyman-Pearson (NP) classifier that is both online and
nonlinear, the first of its kind in the literature. The proposed classifier operates
on a binary labeled data stream in an online manner and maximizes the detection
power subject to a user-specified, controllable false positive rate. Our NP
classifier is a single hidden layer feedforward neural network (SLFN), initialized
with random Fourier features (RFFs) to construct the kernel space of the radial
basis function at its hidden layer with sinusoidal activation. This use of RFFs not
only provides an excellent initialization with strong nonlinear modeling capability,
but also exponentially reduces the parameter complexity and compacts the network,
mitigating overfitting while substantially improving processing efficiency. We
sequentially learn the SLFN with stochastic gradient descent updates based on a
Lagrangian NP objective, yielding expedited online adaptation and powerful nonlinear
Neyman-Pearson modeling. Our algorithm is appropriate for large-scale data
applications and provides reliable false positive rate control with real-time
processing, since it requires only O(N) computation and O(1) space (N: number of
data instances). In an extensive set of experiments on several real datasets, our
algorithm clearly outperforms the competing state-of-the-art techniques, either
exceeding them on the NP classification objective at comparable computational and
space complexity, or matching their performance at significantly lower complexity.
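As a concrete illustration of the approach the abstract describes, below is a minimal sketch (not the authors' implementation) of an online NP classifier: a single hidden layer initialized with random Fourier features and a sinusoidal activation, trained per instance by SGD on a hinge-type surrogate of the Lagrangian NP objective. The class and method names (OnlineNPClassifier, partial_fit), the hinge surrogate, and the dual update for the multiplier are illustrative assumptions; for brevity only the output layer is updated here, whereas the paper sequentially learns the full SLFN. tau denotes the user-specified target false positive rate.

```python
import numpy as np

class OnlineNPClassifier:
    """Hypothetical sketch: SLFN with an RFF-initialized hidden layer,
    trained online on a Lagrangian Neyman-Pearson objective."""

    def __init__(self, dim, n_features=200, gamma=1.0, tau=0.05,
                 lr=0.01, lr_lam=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # RFF init for the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2):
        # frequencies W ~ N(0, 2*gamma*I), phases b ~ Uniform[0, 2*pi].
        self.W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.v = np.zeros(n_features)  # output-layer weights, learned online
        self.lam = 1.0                 # Lagrange multiplier for the FPR constraint
        self.tau = tau                 # target false positive rate
        self.lr, self.lr_lam = lr, lr_lam
        self.scale = np.sqrt(2.0 / n_features)

    def _hidden(self, x):
        # Sinusoidal activation: cos(Wx + b) spans the approximate RBF kernel space.
        return self.scale * np.cos(self.W @ x + self.b)

    def decision(self, x):
        return float(self.v @ self._hidden(x))

    def partial_fit(self, x, y):
        """One SGD step on a single labeled instance (x, y), y in {-1, +1}."""
        h = self._hidden(x)
        if y * (self.v @ h) < 1.0:        # hinge surrogate loss is active
            # Positives drive detection power; negatives are weighted by lam.
            weight = 1.0 if y > 0 else self.lam
            self.v += self.lr * weight * y * h
        if y < 0:
            # Dual ascent on lam: raise it after a false positive, lower it
            # otherwise, steering the empirical FPR toward tau.
            fp = 1.0 if self.decision(x) > 0.0 else 0.0
            self.lam = max(0.0, self.lam + self.lr_lam * (fp - self.tau))
```

Each call to partial_fit touches a fixed number of parameters, consistent with the O(1) space and O(N) total time claims; the multiplier lam rises after a false positive on a negative instance and decays otherwise, so the classifier adapts online toward the target rate tau.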
Related papers
- FFEINR: Flow Feature-Enhanced Implicit Neural Representation for
Spatio-temporal Super-Resolution [4.577685231084759]
This paper proposes a Flow Feature-Enhanced Implicit Neural Representation (FFEINR) for super-resolution of flow field data.
It can take full advantage of the implicit neural representation in terms of model structure and sampling resolution.
The training process of FFEINR is facilitated by introducing feature enhancements for the input layer.
arXiv Detail & Related papers (2023-08-24T02:28:18Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions while achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- CDiNN - Convex Difference Neural Networks [0.8122270502556374]
Neural networks with the ReLU activation function have been shown to be universal function approximators that learn mappings as non-smooth functions.
A newer architecture, the Input Convex Neural Network (ICNN), learns an output that is convex in its input.
arXiv Detail & Related papers (2021-03-31T17:31:16Z)
- Message Passing Descent for Efficient Machine Learning [4.416484585765027]
We propose a new iterative optimization method for the Data-Fitting (DF) problem in Machine Learning.
The approach relies on a Graphical Model representation of the DF problem.
We suggest the Message Passing Descent algorithm, which relies on a piece-wise-polynomial representation of the model DF function.
arXiv Detail & Related papers (2021-02-16T12:22:54Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- An Online Learning Algorithm for a Neuro-Fuzzy Classifier with Mixed-Attribute Data [9.061408029414455]
The general fuzzy min-max neural network (GFMMNN) is an efficient neuro-fuzzy system for data classification.
This paper proposes an extended online learning algorithm for the GFMMNN.
The proposed method can handle the datasets with both continuous and categorical features.
arXiv Detail & Related papers (2020-09-30T13:45:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.