Convolution Neural Networks for Semantic Segmentation: Application to
Small Datasets of Biomedical Images
- URL: http://arxiv.org/abs/2011.01747v1
- Date: Sun, 1 Nov 2020 19:09:12 GMT
- Title: Convolution Neural Networks for Semantic Segmentation: Application to
Small Datasets of Biomedical Images
- Authors: Vitaly Nikolaev
- Abstract summary: This thesis studies how the segmentation results produced by convolutional neural networks (CNNs) differ from each other when applied to small biomedical datasets.
Both working datasets come from the biomedical research domain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This thesis studies how the segmentation results produced by convolutional
neural networks (CNNs) differ from each other when applied to small biomedical
datasets. We use different architectures, parameters, and hyper-parameters, trying
to find the better configurations for our task and to uncover underlying
regularities. Both working datasets come from the biomedical research domain. We
conducted many experiments with the two types of networks, and the results show
that certain experimental conditions and network parameters are preferable to
others. All test results are given in tables, and selected result graphs and
segmentation predictions are shown for illustration.
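The thesis itself does not include code, but the kind of configuration comparison it describes can be illustrated with a minimal, hypothetical sketch: train the same small segmentation network under a few hyper-parameter settings (learning rate and model width here, both chosen purely for illustration) on a tiny synthetic stand-in dataset and compare Dice scores. This assumes PyTorch and is not the author's actual experimental setup.

```python
import torch
import torch.nn as nn
from itertools import product


class TinySegNet(nn.Module):
    """A deliberately small encoder-decoder for binary segmentation."""

    def __init__(self, base_channels: int):
        super().__init__()
        c = base_channels
        self.net = nn.Sequential(
            nn.Conv2d(1, c, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(c, 2 * c, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(2 * c, 1, 1),
        )

    def forward(self, x):
        return self.net(x)


def dice_score(logits, target, eps=1e-6):
    """Soft Dice on sigmoid probabilities (higher is better)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return (2 * inter + eps) / (prob.sum() + target.sum() + eps)


# Tiny synthetic stand-in for a small biomedical dataset (16 single-channel images).
images = torch.rand(16, 1, 64, 64)
masks = (images > 0.5).float()

results = {}
for lr, base_channels in product([1e-2, 1e-3], [8, 16]):
    model = TinySegNet(base_channels)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(20):                      # short training loop, illustration only
        opt.zero_grad()
        loss = 1.0 - dice_score(model(images), masks)
        loss.backward()
        opt.step()
    with torch.no_grad():
        results[(lr, base_channels)] = dice_score(model(images), masks).item()

# Rank configurations by Dice; a real study would evaluate on held-out data.
for cfg, dice in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"lr={cfg[0]:.0e}, base_channels={cfg[1]:>2d} -> Dice={dice:.3f}")
```

In an actual study of this kind, each configuration would be trained with cross-validation on the real datasets and evaluated on held-out images rather than on the training set as in this toy loop.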
Related papers
- MDFI-Net: Multiscale Differential Feature Interaction Network for Accurate Retinal Vessel Segmentation [3.152646316470194]
This paper proposes a feature-enhanced interaction network based on DPCN, named MDFI-Net.
The proposed MDFI-Net achieves segmentation performance superior to state-of-the-art methods on public datasets.
arXiv Detail & Related papers (2024-10-20T16:42:22Z)
- Perspectives: Comparison of Deep Learning Segmentation Models on Biophysical and Biomedical Data [0.0]
We compare convolutional neural networks, U-Nets, vision transformers, and vision state space models.
In doing so, we establish criteria for determining optimal conditions under which each model excels.
arXiv Detail & Related papers (2024-08-14T19:49:19Z)
- Effective Subset Selection Through The Lens of Neural Network Pruning [31.43307762723943]
It is important to select the data to be annotated wisely, which is known as the subset selection problem.
We investigate the relationship between subset selection and neural network pruning, which is more widely studied.
We propose utilizing the norm criterion of neural network features to improve subset selection methods.
arXiv Detail & Related papers (2024-06-03T08:12:32Z)
- Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set. A generic example of such a set-to-set distance computation is sketched after this list.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Weight Predictor Network with Feature Selection for Small Sample Tabular Biomedical Data [7.923088041693465]
We propose the Weight Predictor Network with Feature Selection (WPFS) for learning neural networks from high-dimensional, small-sample data.
We evaluate on nine real-world biomedical datasets and demonstrate that WPFS outperforms other standard as well as more recent methods.
arXiv Detail & Related papers (2022-11-28T18:17:10Z)
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
- When are Deep Networks really better than Random Forests at small sample sizes? [2.5556070792288934]
Random forests (RF) and deep networks (DN) are two of the most popular machine learning methods in the current scientific literature.
We wish to further explore and establish the conditions and domains in which each approach excels.
Our focus is on datasets with at most 10,000 samples, which represent a large fraction of scientific and biomedical datasets.
arXiv Detail & Related papers (2021-08-31T06:33:17Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that none of the classical numerical graph invariants by itself allows one to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
arXiv Detail & Related papers (2020-02-19T11:04:49Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
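The "Revisiting the Evaluation of Image Synthesis with GANs" entry above discusses representing data points in a feature space and computing a fair distance between sample sets. As a generic illustration of one common set-to-set distance (the Fréchet distance between Gaussians fitted to two feature sets), and not necessarily that paper's exact protocol, a minimal sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import sqrtm


def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets of shape (n_samples, dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):         # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))


# Toy usage: two sets of 64-dimensional features, standing in for the outputs
# of any fixed feature extractor applied to real and generated samples.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))
fake_feats = rng.normal(loc=0.1, size=(500, 64))
print(frechet_distance(real_feats, fake_feats))
```

How the feature extractor is chosen and how many samples are drawn from each set are exactly the factors the entry above says influence whether such a comparison is fair.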