Maximum margin learning of t-SPNs for cell classification with filtered
input
- URL: http://arxiv.org/abs/2303.09065v3
- Date: Tue, 21 Mar 2023 02:15:48 GMT
- Title: Maximum margin learning of t-SPNs for cell classification with filtered
input
- Authors: Haeyong Kang, Chang D. Yoo, Yongcheon Na
- Abstract summary: The t-SPN architecture is learned by maximizing the margin.
L2-regularization (REG) is considered along with the maximum margin (MM) criterion in the learning process.
On both HEp-2 and Feulgen benchmark datasets, the t-SPN architecture learned based on the max-margin criterion with regularization produced the highest accuracy rate.
- Score: 19.66983830788521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An algorithm based on a deep probabilistic architecture referred to as a
tree-structured sum-product network (t-SPN) is considered for cell
classification. The t-SPN is constructed such that the unnormalized probability
is represented as conditional probabilities over a subset of the most similar
cell classes. The constructed t-SPN architecture is learned by maximizing the
margin, which is the difference in the conditional probability between the true
and the most competitive false label. To enhance the generalization ability of
the architecture, L2-regularization (REG) is considered along with the maximum
margin (MM) criterion in the learning process. To highlight cell features, this
paper investigates the effectiveness of two generic high-pass filters: ideal
high-pass filtering and the Laplacian of Gaussian (LOG) filtering. On both
HEp-2 and Feulgen benchmark datasets, the t-SPN architecture learned with the
maximum margin criterion and regularization achieved a higher accuracy rate
than other state-of-the-art algorithms, including convolutional neural network
(CNN) based ones. The ideal high-pass filter was more effective
on the HEp-2 dataset, which is based on immunofluorescence staining, while the
LOG was more effective on the Feulgen dataset, which is based on Feulgen
staining.
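To make the learning criterion concrete, here is a minimal sketch of a maximum margin (MM) objective with L2-regularization (REG). The hinge-style surrogate, the function names, and the regularization strength are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def max_margin_loss(log_probs, true_idx, weights, lam=1e-4):
    """Hedged sketch of a maximum margin (MM) criterion with L2 regularization (REG).

    log_probs : unnormalized class log-probabilities from the t-SPN (1D array)
    true_idx  : index of the true cell class
    weights   : flattened model parameters to regularize
    lam       : L2-regularization strength (illustrative value)
    """
    # Margin: log-probability of the true label minus that of the most
    # competitive false label (the best-scoring wrong class).
    false_scores = np.delete(log_probs, true_idx)
    margin = log_probs[true_idx] - false_scores.max()
    # Hinge-style surrogate: maximize the margin by penalizing small margins.
    mm_term = max(0.0, 1.0 - margin)
    # L2 regularization to enhance generalization.
    reg_term = lam * np.sum(weights ** 2)
    return mm_term + reg_term

# Toy usage: 6 cell classes, random scores and parameters.
rng = np.random.default_rng(0)
print(max_margin_loss(rng.normal(size=6), true_idx=2, weights=rng.normal(size=100)))
```

Minimizing this loss pushes the true class's conditional probability above that of its strongest competitor, which is the margin the abstract describes.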
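The two generic high-pass filters can likewise be sketched with standard numpy/scipy tools; the cutoff fraction and the Gaussian width sigma below are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def ideal_highpass(image, cutoff=0.1):
    """Ideal high-pass filter: zero out low frequencies in the FFT domain.

    cutoff is the radius (as a fraction of the spectrum size) below which
    frequencies are suppressed; the value here is an illustrative choice.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    mask = dist > cutoff * min(rows, cols)  # keep only high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def log_filter(image, sigma=2.0):
    """Laplacian of Gaussian (LOG) filtering; sigma is an illustrative choice."""
    return ndimage.gaussian_laplace(image, sigma=sigma)

# Toy usage on a random stand-in for a cell image.
img = np.random.default_rng(0).random((64, 64))
hp = ideal_highpass(img)
log_img = log_filter(img)
```

Both filters suppress slowly varying background intensity, which is why they highlight cell features; which one works better depends on the staining, as the abstract reports for HEp-2 versus Feulgen.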
Related papers
- First line of defense: A robust first layer mitigates adversarial attacks [9.416382037694424]
We show that a carefully designed first layer of the neural network can serve as an implicit adversarial noise filter (ANF).
This filter is created using a combination of a large kernel size, an increased number of convolution filters, and a maxpool operation.
We show that integrating this filter as the first layer in architectures such as ResNet, VGG, and EfficientNet results in adversarially robust networks (a sketch appears after this list).
arXiv Detail & Related papers (2024-08-21T15:00:16Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Focus Your Attention (with Adaptive IIR Filters) [62.80628327613344]
We present a new layer in which dynamic (i.e., input-dependent) Infinite Impulse Response (IIR) filters of order two are used to process the input sequence.
Despite their relatively low order, the causal adaptive filters are shown to focus attention on the relevant sequence elements (a sketch appears after this list).
arXiv Detail & Related papers (2023-05-24T09:42:30Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Self-grouping Convolutional Neural Networks [30.732298624941738]
We propose a novel method of designing self-grouping convolutional neural networks, called SG-CNN.
For each filter, we first evaluate the importance values of its input channels to identify the importance vectors.
Using the resulting data-dependent centroids, we prune the less important connections, which implicitly minimizes the accuracy loss of the pruning.
arXiv Detail & Related papers (2020-09-29T06:24:32Z)
- Learning Sparse Filters in Deep Convolutional Neural Networks with a l1/l2 Pseudo-Norm [5.3791844634527495]
Deep neural networks (DNNs) have proven to be efficient for numerous tasks, but come at a high memory and computation cost.
Recent research has shown that their structure can be more compact without compromising their performance.
We present a sparsity-inducing regularization term based on the l1/l2 pseudo-norm ratio defined on the filter coefficients (a sketch appears after this list).
arXiv Detail & Related papers (2020-07-20T11:56:12Z)
- A Neural Network Approach for Online Nonlinear Neyman-Pearson Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is, for the first time in the literature, both online and nonlinear.
The proposed classifier operates on a binary labeled data stream in an online manner and maximizes the detection power subject to a user-specified, controllable false positive rate.
Our algorithm is appropriate for large-scale data applications and provides good false-positive-rate controllability with real-time processing.
arXiv Detail & Related papers (2020-06-14T20:00:25Z)
- Novel Adaptive Binary Search Strategy-First Hybrid Pyramid- and Clustering-Based CNN Filter Pruning Method without Parameters Setting [3.7468898363447654]
Pruning redundant filters in CNN models has received growing attention.
We propose an adaptive binary search-first hybrid pyramid- and clustering-based (ABS HPC) method for pruning filters automatically.
Thorough experiments on practical datasets and CNN models demonstrated that the proposed filter pruning method yields significant reductions in parameters and floating-point operations while maintaining higher accuracy.
arXiv Detail & Related papers (2020-06-08T10:09:43Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
- RNA Secondary Structure Prediction By Learning Unrolled Algorithms [70.09461537906319]
In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction.
The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled algorithm for constrained programming as the template for deep architectures to enforce constraints.
With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold.
arXiv Detail & Related papers (2020-02-13T23:21:25Z)
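For the "First line of defense" entry, a minimal PyTorch sketch of the described first layer: a large kernel, an increased number of convolution filters, and a maxpool operation. The class name and every hyperparameter are illustrative guesses, not the paper's settings:

```python
import torch
import torch.nn as nn

class RobustFirstLayer(nn.Module):
    """Sketch of an implicit adversarial-noise-filter (ANF) first layer:
    large kernel + more filters + maxpool, per the entry's description.
    All hyperparameters below are illustrative assumptions."""

    def __init__(self, in_channels=3, out_channels=128, kernel_size=11):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=1, padding=kernel_size // 2, bias=False)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.pool(self.act(self.conv(x)))

# Toy usage: this layer would replace the stem of e.g. a ResNet.
x = torch.randn(1, 3, 224, 224)
print(RobustFirstLayer()(x).shape)  # torch.Size([1, 128, 112, 112])
```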
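For the "Focus Your Attention" entry, a toy sketch of a causal second-order IIR filter whose coefficients depend on the input sequence. The coefficient predictor here is a stand-in linear map over crude summary statistics; the actual layer is learned end to end:

```python
import numpy as np

def second_order_iir(x, b0, a1, a2):
    """Apply a causal order-2 IIR filter: y[t] = b0*x[t] + a1*y[t-1] + a2*y[t-2]."""
    y = np.zeros_like(x)
    for t in range(len(x)):
        y[t] = b0 * x[t]
        if t >= 1:
            y[t] += a1 * y[t - 1]
        if t >= 2:
            y[t] += a2 * y[t - 2]
    return y

def adaptive_iir(x, w):
    """Input-dependent ("dynamic") coefficients: a stand-in linear predictor
    maps summary statistics of the sequence to (b0, a1, a2)."""
    feats = np.array([x.mean(), x.std(), 1.0])
    c = np.tanh(w @ feats)
    # Scaling keeps (a1, a2) inside the stability region of the recurrence.
    b0, a1, a2 = c[0], 0.45 * c[1], 0.45 * c[2]
    return second_order_iir(x, b0, a1, a2)

rng = np.random.default_rng(0)
print(adaptive_iir(rng.normal(size=16), rng.normal(size=(3, 3))))
```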
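For the "Learning Sparse Filters" entry, a short sketch of a per-filter l1/l2 pseudo-norm penalty; the eps guard and the sum over filters are implementation assumptions:

```python
import torch

def l1_over_l2(conv_weight, eps=1e-12):
    """Sparsity-inducing l1/l2 pseudo-norm ratio, one term per filter.

    conv_weight: (out_channels, in_channels, kH, kW) tensor; each output
    filter is flattened and contributes ||w||_1 / ||w||_2 to the penalty.
    Minimizing the ratio drives individual filter coefficients toward zero.
    """
    w = conv_weight.flatten(start_dim=1)           # one row per filter
    ratio = w.abs().sum(dim=1) / (w.norm(dim=1) + eps)
    return ratio.sum()

# Toy usage: add the term to a task loss with a small weight.
weight = torch.randn(16, 3, 3, 3, requires_grad=True)
loss = l1_over_l2(weight)
loss.backward()
```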