Sperm Detection and Tracking in Phase-Contrast Microscopy Image
Sequences using Deep Learning and Modified CSR-DCF
- URL: http://arxiv.org/abs/2002.04034v4
- Date: Sat, 4 Apr 2020 06:21:26 GMT
- Title: Sperm Detection and Tracking in Phase-Contrast Microscopy Image
Sequences using Deep Learning and Modified CSR-DCF
- Authors: Mohammad reza Mohammadi, Mohammad Rahimzadeh and Abolfazl Attar
- Abstract summary: In this article, we use RetinaNet, a deep fully convolutional neural network as the object detector.
The average precision of the detection phase is 99.1%, and the F1 score of the tracking method is 96.61%.
These results can be a great help in studies investigating sperm behavior and analyzing fertility possibility.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, computer-aided sperm analysis (CASA) systems have made a big leap
in extracting the characteristics of spermatozoa for studies or measuring human
fertility. The first step in sperm characteristics analysis is sperm detection
in the frames of the video sample. In this article, we used RetinaNet, a deep
fully convolutional neural network, as the object detector. Sperms are small
objects with few distinguishing attributes, which makes detection more
difficult in high-density samples, especially when the semen contains other
particles that resemble sperm heads. One of the main attributes of sperms is
their movement, but this attribute cannot be extracted when only a single
frame is fed to the network. To improve the performance of the sperm
detection network, we concatenated several consecutive frames to form the
network input. With this method, the motility attribute is also captured,
and the deep convolutional network achieves high accuracy in sperm
detection. The second step is tracking the sperms to extract the motility
parameters that are essential for assessing fertility and for other studies
on sperms. In the tracking phase, we modify the CSR-DCF algorithm. The
method shows excellent results even in high-density sperm samples, under
occlusion and sperm collisions, and when sperms exit the frame and re-enter
in later frames. The average precision of the detection phase is 99.1%, and
the F1 score of the tracking method is 96.61%. These results can be a great
help in studies investigating sperm behavior and analyzing fertility
potential.
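The multi-frame input described in the abstract can be sketched as follows. This is a minimal hypothetical illustration, not the paper's exact pipeline: the frame count of 3, the grayscale assumption, the channels-last layout, and the function name are all assumptions made for the example.

```python
import numpy as np

def stack_consecutive_frames(frames, num_frames=3):
    """Concatenate consecutive grayscale frames along a channel axis.

    frames: array of shape (T, H, W). Returns an array of shape
    (T - num_frames + 1, H, W, num_frames), where each sample carries
    num_frames consecutive frames so a detector can pick up motion cues
    that a single frame cannot provide.
    """
    frames = np.asarray(frames)
    t, h, w = frames.shape
    stacked = np.stack(
        [frames[i:i + num_frames] for i in range(t - num_frames + 1)],
        axis=0,
    )                                     # shape: (T - k + 1, k, H, W)
    return stacked.transpose(0, 2, 3, 1)  # channels-last: (N, H, W, k)

# Example: 10 synthetic 64x64 frames -> 8 stacked 3-channel samples
video = np.random.rand(10, 64, 64)
batch = stack_consecutive_frames(video, num_frames=3)
print(batch.shape)  # (8, 64, 64, 3)
```

Each stacked sample places a short temporal window into the channel dimension, so the detector sees motion implicitly without any architectural change.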
Related papers
- Automated Sperm Assessment Framework and Neural Network Specialized for
Sperm Video Recognition [0.7499722271664147]
Infertility is a global health problem, and an increasing number of couples are seeking medical assistance to achieve reproduction.
Previous sperm assessment studies with deep learning have used datasets comprising only sperm heads.
We constructed a video dataset for sperm assessment whose videos include sperm head as well as neck and tail, and its labels were annotated with soft-label.
arXiv Detail & Related papers (2023-11-10T08:23:24Z) - Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked 1st place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z) - ACTIVE: A Deep Model for Sperm and Impurity Detection in Microscopic
Videos [17.3840418564686]
We introduce a deep learning model based on a Double Branch Feature Extraction Network (DBFEN) and Cross-conjugate Feature Pyramid Networks (CCFPN).
Experiments show that the highest AP50 for sperm and impurity detection is 91.13% and 59.64%, respectively, leading competitors by a substantial margin and establishing new state-of-the-art results for this problem.
arXiv Detail & Related papers (2023-01-15T02:24:17Z) - VISEM-Tracking, a human spermatozoa tracking dataset [3.1673957150053713]
We provide a dataset called VISEM-Tracking with 20 video recordings of 30 seconds (comprising 29,196 frames) of wet sperm preparations.
We present baseline sperm detection performances using the YOLOv5 deep learning (DL) model trained on the VISEM-Tracking dataset.
arXiv Detail & Related papers (2022-12-06T09:25:52Z) - TOD-CNN: An Effective Convolutional Neural Network for Tiny Object
Detection in Sperm Videos [17.739265119524244]
We present a convolutional neural network for tiny object detection (TOD-CNN) with an underlying data set of high-quality sperm microscopic videos.
To demonstrate the importance of sperm detection technology in sperm quality analysis, we compute relevant sperm quality evaluation metrics and compare them with the diagnoses of medical doctors.
arXiv Detail & Related papers (2022-04-18T05:14:27Z) - End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge
Distillation [86.41437210485932]
We aim at advancing zero-shot HOI detection to detect both seen and unseen HOIs simultaneously.
We propose a novel end-to-end zero-shot HOI Detection framework via vision-language knowledge distillation.
Our method outperforms the previous SOTA by 8.92% on unseen mAP and 10.18% on overall mAP.
arXiv Detail & Related papers (2022-04-01T07:27:19Z) - A Survey of Semen Quality Evaluation in Microscopic Videos Using
Computer Assisted Sperm Analysis [14.07532901052797]
Computer Assisted Sperm Analysis (CASA) plays a crucial role in male reproductive health diagnosis and infertility treatment.
The various works related to Computer Assisted Sperm Analysis methods in the last 30 years (since 1988) are comprehensively introduced and analysed in this survey.
arXiv Detail & Related papers (2022-02-16T01:50:58Z) - Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on the others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals.
arXiv Detail & Related papers (2022-02-07T03:43:16Z) - SCOP: Scientific Control for Reliable Neural Network Pruning [127.20073865874636]
This paper proposes a reliable neural network pruning algorithm by setting up a scientific control.
Redundant filters can be discovered in the adversarial process of different features.
Our method can reduce 57.8% parameters and 60.2% FLOPs of ResNet-101 with only 0.01% top-1 accuracy loss on ImageNet.
arXiv Detail & Related papers (2020-10-21T03:02:01Z) - Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is critically important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z) - Hybrid Attention for Automatic Segmentation of Whole Fetal Head in
Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.