FU-net: Multi-class Image Segmentation Using Feedback Weighted U-net
- URL: http://arxiv.org/abs/2004.13470v1
- Date: Tue, 28 Apr 2020 13:08:14 GMT
- Title: FU-net: Multi-class Image Segmentation Using Feedback Weighted U-net
- Authors: Mina Jafari, Ruizhe Li, Yue Xing, Dorothee Auer, Susan Francis,
Jonathan Garibaldi, and Xin Chen
- Abstract summary: We present a generic deep convolutional neural network (DCNN) for multi-class image segmentation.
It is based on a well-established supervised end-to-end DCNN model, known as U-net.
- Score: 5.193724835939252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a generic deep convolutional neural network (DCNN)
for multi-class image segmentation. It is based on a well-established
supervised end-to-end DCNN model, known as U-net. U-net is first modified by
adding widely used batch normalization and residual blocks (named BRU-net) to
improve the efficiency of model training. Based on BRU-net, we further
introduce a dynamically weighted cross-entropy loss function. The weighting
scheme is calculated based on the pixel-wise prediction accuracy during the
training process. Assigning higher weights to pixels with lower segmentation
accuracies enables the network to learn more from poorly predicted image
regions. Our method is named feedback weighted U-net (FU-net). We have
evaluated our method on T1-weighted brain MRI for the segmentation of the
midbrain and substantia nigra, where the number of pixels in each class is
extremely imbalanced. Based on the Dice coefficient measurement,
our proposed FU-net has outperformed BRU-net and U-net with statistical
significance, especially when only a small number of training examples are
available. The code is publicly available on GitHub:
https://github.com/MinaJf/FU-net.
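
The dynamically weighted cross-entropy loss described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration, not the authors' released code, and the exact weighting scheme in the paper may differ: each pixel's cross-entropy term is re-weighted by how poorly that pixel is currently predicted, so low-accuracy regions contribute more to the gradient.

```python
import torch
import torch.nn.functional as F

def feedback_weighted_ce(logits, target, eps=1e-6):
    """Sketch of a feedback-weighted cross-entropy loss (hypothetical, not the authors' code).

    Each pixel is re-weighted by (1 - predicted probability of its true class),
    so poorly predicted pixels receive larger weights.

    logits: (N, C, H, W) raw network outputs
    target: (N, H, W) integer class labels (torch.long)
    """
    probs = F.softmax(logits, dim=1)                           # (N, C, H, W)
    p_true = probs.gather(1, target.unsqueeze(1)).squeeze(1)   # prob. of the true class per pixel
    weights = (1.0 - p_true).detach()                          # higher weight where accuracy is low
    ce = F.cross_entropy(logits, target, reduction="none")     # per-pixel cross-entropy, (N, H, W)
    return (weights * ce).sum() / (weights.sum() + eps)
```

In this sketch the weights are recomputed from the network's own predictions at every training step, which is one way to realize the feedback idea of letting pixel-wise prediction accuracy drive the loss weighting.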
Related papers
- Efficient Training with Denoised Neural Weights [65.14892033932895]
This work takes a novel step towards building a weight generator to synthesize the neural weights for initialization.
We use the image-to-image translation task with generative adversarial networks (GANs) as an example due to the ease of collecting model weights.
By initializing the image translation model with the denoised weights predicted by our diffusion model, the training requires only 43.3 seconds.
arXiv Detail & Related papers (2024-07-16T17:59:42Z) - Random Weights Networks Work as Loss Prior Constraint for Image
Restoration [50.80507007507757]
We present our belief that "Random Weights Networks can act as a Loss Prior Constraint for Image Restoration".
Our belief can be directly inserted into existing networks without any additional training or testing computational cost.
To emphasize, our main focus is to spark renewed interest in loss functions and lift them out of their currently neglected status.
arXiv Detail & Related papers (2023-03-29T03:43:51Z) - Co-training $2^L$ Submodels for Visual Recognition [67.02999567435626]
Submodel co-training is a regularization method related to co-training, self-distillation and stochastic depth.
We show that submodel co-training is effective to train backbones for recognition tasks such as image classification and semantic segmentation.
arXiv Detail & Related papers (2022-12-09T14:38:09Z) - Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training
of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z) - Point-Cloud Deep Learning of Porous Media for Permeability Prediction [0.0]
We propose a novel deep learning framework for predicting permeability of porous media from their digital images.
We model the boundary between solid matrix and pore spaces as point clouds and feed them as inputs to a neural network based on the PointNet architecture.
arXiv Detail & Related papers (2021-07-18T22:59:21Z) - ReCU: Reviving the Dead Weights in Binary Neural Networks [153.6789340484509]
We explore the influence of "dead weights" which refer to a group of weights that are barely updated during the training of BNNs.
We prove that reviving the "dead weights" by ReCU can result in a smaller quantization error.
Our method offers not only faster BNN training, but also state-of-the-art performance on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2021-03-23T08:11:20Z) - Convolution-Free Medical Image Segmentation using Transformers [8.130670465411239]
We show that a different method, based entirely on self-attention between neighboring image patches, can achieve competitive or better results.
We show that the proposed model can achieve segmentation accuracies that are better than state-of-the-art CNNs on three datasets.
arXiv Detail & Related papers (2021-02-26T18:49:13Z) - Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z) - Increasing the Robustness of Semantic Segmentation Models with
Painting-by-Numbers [39.95214171175713]
We build upon an insight from image classification that the output can be improved by increasing the network's bias towards object shapes.
Our basic idea is to alpha-blend a portion of the RGB training images with faked images, where each class label is given a fixed, randomly chosen color (a minimal sketch of this blending is given after this list).
We demonstrate the effectiveness of our training scheme for DeepLabv3+ with various network backbones (MobileNet-V2, ResNets, and Xception) and evaluate it on the Cityscapes dataset.
arXiv Detail & Related papers (2020-10-12T07:42:39Z) - Rethinking CNN Models for Audio Classification [20.182928938110923]
ImageNet-Pretrained standard deep CNN models can be used as strong baseline networks for audio classification.
We systematically study how much of pretrained weights is useful for learning spectrograms.
We show that for a given standard model, using pretrained weights is better than using randomly initialized weights.
arXiv Detail & Related papers (2020-07-22T01:31:44Z)
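
The class-colored alpha-blending idea from the Painting-by-Numbers entry above can be sketched as follows. This is a minimal NumPy illustration under assumptions (function and parameter names are hypothetical, not the paper's code): each class label is assigned a fixed, randomly chosen color, and the resulting "fake" image is blended with the RGB training image.

```python
import numpy as np

def paint_by_numbers_blend(image, label, alpha, num_classes, seed=0):
    """Sketch of the alpha-blending idea: blend an RGB training image with a
    'fake' image in which each class label is painted a fixed random color.

    image: (H, W, 3) float array in [0, 1]
    label: (H, W) integer class map
    alpha: blend factor in [0, 1]; 0 keeps the original image unchanged
    """
    rng = np.random.default_rng(seed)
    palette = rng.uniform(0.0, 1.0, size=(num_classes, 3))  # one fixed color per class
    fake = palette[label]                                    # (H, W, 3) class-colored image
    return (1.0 - alpha) * image + alpha * fake
```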