FibeR-CNN: Expanding Mask R-CNN to Improve Image-Based Fiber Analysis
- URL: http://arxiv.org/abs/2006.04552v2
- Date: Mon, 19 Oct 2020 08:45:14 GMT
- Title: FibeR-CNN: Expanding Mask R-CNN to Improve Image-Based Fiber Analysis
- Authors: Max Frei, Frank Einar Kruis
- Abstract summary: We propose the use of region-based convolutional neural networks (R-CNNs) to automate image-based analysis of fibers.
FibeR-CNN is able to surpass the mean average precision of Mask R-CNN by 33 % on a novel test data set of fiber images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fiber-shaped materials (e.g. carbon nanotubes) are of great
relevance, due to their unique properties but also the health risks they can
pose.
Unfortunately, image-based analysis of fibers still involves manual annotation,
which is a time-consuming and costly process. We therefore propose the use of
region-based convolutional neural networks (R-CNNs) to automate this task. Mask
R-CNN, the most widely used R-CNN for semantic segmentation tasks, is prone to
errors when it comes to the analysis of fiber-shaped objects. Hence, a new
architecture - FibeR-CNN - is introduced and validated. FibeR-CNN combines two
established R-CNN architectures (Mask and Keypoint R-CNN) and adds additional
network heads for the prediction of fiber widths and lengths. As a result,
FibeR-CNN is able to surpass the mean average precision of Mask R-CNN by 33 %
(11 percentage points) on a novel test data set of fiber images.
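The abstract implies that the added network heads regress per-fiber quantities from a keypoint representation of the fiber backbone. As a minimal illustration (not the authors' implementation; the keypoint format and the helper name `fiber_length` are assumptions for this sketch), a fiber's length can be approximated as the arc length of its ordered backbone keypoints:

```python
import math

def fiber_length(keypoints):
    """Approximate a fiber's length as the arc length of the ordered
    (x, y) backbone keypoints predicted by a keypoint head."""
    return sum(math.dist(p, q) for p, q in zip(keypoints, keypoints[1:]))

# A straight fiber sampled at three collinear points:
backbone = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(fiber_length(backbone))  # → 2.0
```

Note that the two reported gains are consistent: an 11-percentage-point improvement corresponding to a 33 % relative improvement implies a Mask R-CNN baseline mean average precision of roughly 33 points.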
Related papers
- High-Resolution Convolutional Neural Networks on Homomorphically
Encrypted Data via Sharding Ciphertexts [0.08999666725996974]
We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts.
We show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy.
These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset, achieving 80.2% top-1 accuracy.
arXiv Detail & Related papers (2023-06-15T15:16:16Z) - HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic
Encryption-Based Neural Networks [7.642103082787977]
Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution.
We present HyPHEN, a deep HCNN construction that incorporates novel convolution algorithms and data packing methods.
As a result, HyPHEN brings the latency of HCNN CIFAR-10 inference down to a practical level at 1.4 seconds (ResNet-20) and demonstrates HCNN ImageNet inference for the first time at 14.7 seconds (ResNet-18).
arXiv Detail & Related papers (2023-02-05T15:36:51Z) - A heterogeneous group CNN for image super-resolution [127.2132400582117]
Convolutional neural networks (CNNs) have obtained remarkable performance via deep architectures.
We present a heterogeneous group SR CNN (HGSRCNN) via leveraging structure information of different types to obtain a high-quality image.
arXiv Detail & Related papers (2022-09-26T04:14:59Z) - Towards a General Purpose CNN for Long Range Dependencies in
$\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential (1D) and visual data (2D).
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
arXiv Detail & Related papers (2022-06-07T15:48:02Z) - SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Strengthening the Training of Convolutional Neural Networks By Using
Walsh Matrix [0.0]
We have modified the training and structure of the DNN to increase its classification performance.
A minimum distance network (MDN) following the last layer of the convolutional neural network (CNN) is used as the classifier.
In different areas, it has been observed that higher classification performance is obtained by using the DivFE with fewer nodes.
arXiv Detail & Related papers (2021-03-31T18:06:11Z) - Explore the Knowledge contained in Network Weights to Obtain Sparse
Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z) - Boundary-preserving Mask R-CNN [38.15409855290749]
We propose a conceptually simple yet effective Boundary-preserving Mask R-CNN (BMask R-CNN) to leverage object boundary information to improve mask localization accuracy.
BMask R-CNN contains a boundary-preserving mask head in which object boundary and mask are mutually learned via feature fusion blocks.
Without bells and whistles, BMask R-CNN outperforms Mask R-CNN by a considerable margin on the COCO dataset.
arXiv Detail & Related papers (2020-07-17T11:54:02Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.