Modular Neural Network Approaches for Surgical Image Recognition
- URL: http://arxiv.org/abs/2307.08880v1
- Date: Mon, 17 Jul 2023 22:28:16 GMT
- Title: Modular Neural Network Approaches for Surgical Image Recognition
- Authors: Nosseiba Ben Salem, Younes Bennani, Joseph Karkazan, Abir Barbara,
Charles Dacheux, Thomas Gregory
- Abstract summary: We introduce and evaluate different architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS) instability classification.
Our experiments have shown that modular learning improves performance compared to non-modular systems.
In the second part, we present our approach for data labeling and segmentation with self-training applied to shoulder arthroscopy images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep learning-based applications have seen great success in recent
years. Text, audio, image, and video have all been explored with strong results
using deep learning approaches. Convolutional neural networks (CNNs), in
particular, have yielded reliable results in computer vision. Achieving these
results requires a large amount of data; however, suitable datasets are not
always available, and annotating data can be difficult and time-consuming.
Self-training is a semi-supervised approach that alleviates this problem and
has achieved state-of-the-art performance. Theoretical analysis even suggests
that it can generalize better than a standard supervised classifier. Another
challenge neural networks face is the growing complexity of modern problems,
which incurs high computational and storage costs. One way to mitigate this
issue is modular learning, a strategy inspired by human cognition: a complex
problem is decomposed into simpler sub-tasks. This approach has several
advantages, including faster learning, better generalization, and improved
interpretability.
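To make the self-training idea above concrete, here is a minimal pseudo-labeling loop, sketched in PyTorch. The paper does not publish an implementation; the model, data loaders, and confidence threshold below are hypothetical illustrations of the general technique.

```python
# Minimal self-training (pseudo-labeling) sketch in PyTorch.
# Assumptions: `model`, `labeled_loader`, and `unlabeled_loader` are
# hypothetical; the paper does not prescribe this exact loop.
import torch
import torch.nn.functional as F

def self_training_round(model, labeled_loader, unlabeled_loader,
                        optimizer, threshold=0.9, device="cpu"):
    model.train()
    # 1) Supervised pass on the small labeled set.
    for x, y in labeled_loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # 2) Pseudo-label confident unlabeled samples and train on them.
    for x, _ in unlabeled_loader:
        x = x.to(device)
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1)
            conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold          # keep only confident predictions
        if mask.any():
            loss = F.cross_entropy(model(x[mask]), pseudo[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In practice the round is repeated, re-labeling the unlabeled pool as the model improves; the threshold trades pseudo-label quantity against quality.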
In the first part of this paper, we introduce and evaluate different
architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS)
instability classification. Our experiments show that modular learning improves
performance compared to non-modular systems. Moreover, we found that the
weighted modular architecture, which weights each module's output by the
probabilities produced by the gating module, achieved near-perfect
classification. In the second part, we present our approach to data labeling
and segmentation with self-training, applied to shoulder arthroscopy images.
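The "weighted modular" scheme described above (weighting each expert module's output by the gating module's probabilities) can be sketched as follows. This is an illustrative assumption, not the paper's actual architecture: the module count, layer sizes, and linear backbone are invented for brevity.

```python
# Sketch of a weighted modular classifier: a gating network produces a
# probability per expert module, and the final prediction is the
# probability-weighted sum of the modules' outputs.
import torch
import torch.nn as nn

class WeightedModularClassifier(nn.Module):
    def __init__(self, in_features, n_classes, n_modules=3, hidden=128):
        super().__init__()
        # Expert modules; count and sizes are illustrative assumptions.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_classes))
            for _ in range(n_modules)
        ])
        # Gating module: one probability per expert.
        self.gate = nn.Sequential(nn.Linear(in_features, n_modules),
                                  nn.Softmax(dim=1))

    def forward(self, x):
        weights = self.gate(x)                                       # (B, M)
        outputs = torch.stack([m(x) for m in self.experts], dim=1)   # (B, M, C)
        # Each module's logits are scaled by its gating probability,
        # then summed over modules.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)          # (B, C)
```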
Related papers
- Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
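The window-training idea in the entry above can be illustrated with a fully convolutional network: with no fixed-size dense layer, weights fitted on small windows accept arbitrarily long inputs. This PyTorch sketch is an assumption about the mechanism, not the paper's code; shapes are invented.

```python
# A fully convolutional 1-D network has no fixed-size dense layer, so the
# same weights accept any input length: train on short windows, then
# evaluate on long signals. Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2),
)

windows = torch.randn(8, 1, 64)       # small training windows (length 64)
signal = torch.randn(1, 1, 10_000)    # arbitrarily long evaluation signal
assert net(windows).shape == (8, 1, 64)
assert net(signal).shape == (1, 1, 10_000)
```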
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Neural Routing in Meta Learning [9.070747377130472]
We aim to improve the model performance of the current meta learning algorithms by selectively using only parts of the model conditioned on the input tasks.
In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization layer.
We find that the proposed approach, neural routing in meta learning (NRML), outperforms one of the well-known existing meta learning baselines on few-shot classification tasks.
arXiv Detail & Related papers (2022-10-14T16:31:24Z)
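The batch-norm-based routing mentioned in the Neural Routing entry above can be sketched by gating channels on the magnitude of each channel's BatchNorm scale factor (gamma). The threshold and layer sizes here are hypothetical, and this simplified variant is input-independent, whereas the actual method conditions selection on the task.

```python
# Sketch of channel selection driven by BatchNorm scale factors (gamma):
# channels whose |gamma| falls below a threshold are zeroed out, mimicking
# routing. Threshold and sizes are hypothetical assumptions.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(32)

def routed_forward(x, threshold=0.1):
    h = bn(conv(x))
    # Keep only channels with a sufficiently large learned scale.
    mask = (bn.weight.abs() >= threshold).float().view(1, -1, 1, 1)
    return torch.relu(h * mask)

out = routed_forward(torch.randn(4, 3, 28, 28))   # shape (4, 32, 28, 28)
```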
- EVC-Net: Multi-scale V-Net with Conditional Random Fields for Brain Extraction [3.4376560669160394]
EVC-Net adds lower-scale inputs on each encoder block.
Conditional Random Fields are re-introduced here as an additional step for refining the network's output.
Results show that even with limited training resources, EVC-Net achieves higher Dice Coefficient and Jaccard Index.
arXiv Detail & Related papers (2022-06-06T18:21:21Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Most work on feed-forward networks combining top-down and bottom-up feedback is limited to classification problems.
Neural Function Modules (NFM) aim to introduce this structural capability into deep learning more broadly.
The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
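The modular hidden state described in the incremental-RNN entry above can be sketched by updating each module on a different clock, so later modules capture progressively longer dependencies. The GRU cells, sizes, and power-of-two schedule are assumptions for illustration, not the paper's design.

```python
# Sketch of a multi-scale recurrent memory: the hidden state is split into
# modules, and module k is updated only every 2**k steps, so later modules
# see a slower timescale. Sizes and schedule are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleRNN(nn.Module):
    def __init__(self, in_size=32, hidden=64, n_modules=3):
        super().__init__()
        self.cells = nn.ModuleList(
            [nn.GRUCell(in_size, hidden) for _ in range(n_modules)]
        )

    def forward(self, x):  # x: (seq_len, batch, in_size)
        states = [x.new_zeros(x.size(1), cell.hidden_size)
                  for cell in self.cells]
        for t in range(x.size(0)):
            for k, cell in enumerate(self.cells):
                if t % (2 ** k) == 0:        # slower clocks for later modules
                    states[k] = cell(x[t], states[k])
        return torch.cat(states, dim=1)      # concatenated multi-scale memory
```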
- Biologically-Motivated Deep Learning Method using Hierarchical Competitive Learning [0.0]
I propose to introduce unsupervised competitive learning, which requires only forward-propagating signals, as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
arXiv Detail & Related papers (2020-01-04T20:07:36Z)
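A forward-only competitive learning rule, as mentioned in the entry above, can be illustrated with a winner-take-all Hebbian update: no labels and no backpropagation are involved. The rule, unit count, and normalization below are generic textbook assumptions, not the paper's exact method.

```python
# Winner-take-all Hebbian sketch: the unit whose weight vector best matches
# the input wins and is nudged toward it. Forward signals only; purely
# illustrative, not the paper's exact rule.
import torch

torch.manual_seed(0)
W = torch.randn(10, 64)                 # 10 competing units, 64-d inputs
W = W / W.norm(dim=1, keepdim=True)

def competitive_step(x, lr=0.05):
    global W
    winner = torch.argmax(W @ x)        # forward pass picks the winner
    W[winner] += lr * (x - W[winner])   # only the winner is updated
    W[winner] = W[winner] / W[winner].norm()

for _ in range(100):
    competitive_step(torch.randn(64))
```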
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.