Ensemble Learning based on Classifier Prediction Confidence and
Comprehensive Learning Particle Swarm Optimisation for polyp localisation
- URL: http://arxiv.org/abs/2104.04832v1
- Date: Sat, 10 Apr 2021 18:34:42 GMT
- Title: Ensemble Learning based on Classifier Prediction Confidence and
Comprehensive Learning Particle Swarm Optimisation for polyp localisation
- Authors: Truong Dang, Thanh Nguyen, John McCall, Alan Wee-Chung Liew
- Abstract summary: Colorectal cancer (CRC) is a leading cause of death in many countries.
In this paper, we introduce an ensemble of medical polyp segmentation algorithms.
- Score: 6.212408891922064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Colorectal cancer (CRC) is a leading cause of death in many countries. CRC
originates from a small clump of cells on the lining of the colon called
polyps, which over time might grow and become malignant. Early detection and
removal of polyps are therefore necessary for the prevention of colon cancer.
In this paper, we introduce an ensemble of medical polyp segmentation
algorithms. Based on an observation that different segmentation algorithms will
perform well on different subsets of examples because of the nature and size of
training sets they have been exposed to and because of method-intrinsic
factors, we propose to measure the confidence in the prediction of each
algorithm and then use an associate threshold to determine whether the
confidence is acceptable or not. An algorithm is selected for the ensemble if
the confidence is below its associate threshold. The optimal threshold for each
segmentation algorithm is found by using Comprehensive Learning Particle Swarm
Optimization (CLPSO), a swarm intelligence algorithm. The Dice coefficient, a
popular performance metric for image segmentation, is used as the fitness
criterion. Experimental results on two polyp segmentation datasets, MICCAI2015
and Kvasir-SEG, confirm that our ensemble achieves better results than several
well-known segmentation algorithms.
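To make the selection rule and the fitness criterion concrete, the following is a minimal sketch, not the authors' implementation: the data layout of `val_items`, the per-image confidence scores, and the use of plain random search as a stand-in for CLPSO are all assumptions made for illustration. The Dice coefficient used as the fitness criterion is Dice(A, B) = 2|A ∩ B| / (|A| + |B|) for binary masks A and B.

```python
# Minimal sketch (assumed names and data layout, not the authors' code) of a
# confidence-thresholded segmentation ensemble with Dice-based threshold search.
import numpy as np


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def ensemble_predict(prob_maps, confidences, thresholds):
    """Combine per-algorithm probability maps.

    Following the abstract, an algorithm is kept when its confidence is below
    its associated threshold; averaging the surviving maps and binarising at
    0.5 is an assumed combination step.
    """
    selected = [p for p, c, t in zip(prob_maps, confidences, thresholds) if c < t]
    if not selected:  # fall back to using all algorithms
        selected = list(prob_maps)
    return np.mean(selected, axis=0) >= 0.5


def random_search_thresholds(val_items, n_algorithms, n_trials=200, seed=0):
    """Stand-in for CLPSO: sample threshold vectors and keep the one with the
    best mean Dice on a validation set.

    `val_items` is a hypothetical list of (prob_maps, confidences, ground_truth)
    tuples, one per validation image.
    """
    rng = np.random.default_rng(seed)
    best_thresholds, best_fitness = None, -1.0
    for _ in range(n_trials):
        thresholds = rng.uniform(0.0, 1.0, size=n_algorithms)
        scores = [dice_coefficient(ensemble_predict(p, c, thresholds), gt)
                  for p, c, gt in val_items]
        fitness = float(np.mean(scores))  # Dice is the fitness criterion
        if fitness > best_fitness:
            best_fitness, best_thresholds = fitness, thresholds
    return best_thresholds, best_fitness
```

In the paper's setting, CLPSO would replace the random search over the threshold vector; the selection rule itself (keep an algorithm when its confidence is below its associated threshold) follows the abstract, while the averaging-and-binarising step above is an assumption about how the selected predictions are combined.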
Related papers
- BetterNet: An Efficient CNN Architecture with Residual Learning and Attention for Precision Polyp Segmentation [0.6062751776009752]
This research presents BetterNet, a convolutional neural network architecture that combines residual learning and attention methods to enhance the accuracy of polyp segmentation.
BetterNet shows promise in integrating computer-assisted diagnosis techniques to enhance the detection of polyps and the early recognition of cancer.
arXiv Detail & Related papers (2024-05-05T21:08:49Z)
- Optimal estimation of Gaussian (poly)trees [25.02920605955238]
We consider both problems of distribution learning (i.e. in KL distance) and structure learning (i.e. exact recovery).
The first approach is based on the Chow-Liu algorithm, and learns an optimal tree-structured distribution efficiently.
The second approach is a modification of the PC algorithm for polytrees that uses partial correlation as a conditional independence tester for constraint-based structure learning.
arXiv Detail & Related papers (2024-02-09T12:58:36Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Efficient Approximate Kernel Based Spike Sequence Classification [56.2938724367661]
Machine learning models, such as SVM, require a definition of distance/similarity between pairs of sequences.
Exact methods yield better classification performance, but they pose high computational costs.
We propose a series of ways to improve the approximate kernel in order to enhance its predictive performance.
arXiv Detail & Related papers (2022-09-11T22:44:19Z)
- Alternating Mahalanobis Distance Minimization for Stable and Accurate CP Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from what is used in the previous work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as optimizing a Mahalanobis distance with respect to each factor with ground metric dependent on the other factors.
arXiv Detail & Related papers (2022-04-14T19:56:36Z)
- Optimal Clustering with Bandit Feedback [57.672609011609886]
This paper considers the problem of online clustering with bandit feedback.
It includes a novel stopping rule for sequential testing that circumvents the need to solve any NP-hard weighted clustering problem as its subroutines.
We show through extensive simulations on synthetic and real-world datasets that BOC's performance matches the lower bound asymptotically, and significantly outperforms a non-adaptive baseline algorithm.
arXiv Detail & Related papers (2022-02-09T06:05:05Z)
- Mean-based Best Arm Identification in Stochastic Bandits under Reward Contamination [80.53485617514707]
This paper proposes two algorithms, a gap-based algorithm and one based on successive elimination, for best arm identification in sub-Gaussian bandits.
Specifically, for the gap-based algorithm the sample complexity is optimal up to constant factors, while for the successive-elimination algorithm it is optimal up to logarithmic factors.
arXiv Detail & Related papers (2021-11-14T21:49:58Z)
- Determinantal consensus clustering [77.34726150561087]
We propose the use of determinantal point processes or DPP for the random restart of clustering algorithms.
DPPs favor diversity of the center points within subsets.
We show through simulations that, contrary to DPP, this technique fails both to ensure diversity and to obtain good coverage of all data facets.
arXiv Detail & Related papers (2021-02-07T23:48:24Z)
- Differentially Private Clustering: Tight Approximation Ratios [57.89473217052714]
We give efficient differentially private algorithms for basic clustering problems.
Our results imply an improved algorithm for the Sample and Aggregate privacy framework.
One of the tools used in our 1-Cluster algorithm can be employed to get a faster quantum algorithm for ClosestPair in a moderate number of dimensions.
arXiv Detail & Related papers (2020-08-18T16:22:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.