Automated Sperm Assessment Framework and Neural Network Specialized for
Sperm Video Recognition
- URL: http://arxiv.org/abs/2311.05927v2
- Date: Mon, 13 Nov 2023 01:56:27 GMT
- Title: Automated Sperm Assessment Framework and Neural Network Specialized for
Sperm Video Recognition
- Authors: Takuro Fujii, Hayato Nakagawa, Teppei Takeshima, Yasushi Yumura,
Tomoki Hamagami
- Abstract summary: Infertility is a global health problem, and an increasing number of couples are seeking medical assistance to achieve reproduction.
Previous sperm assessment studies with deep learning have used datasets comprising only sperm heads.
We constructed a video dataset for sperm assessment whose videos include the sperm head as well as the neck and tail, with labels annotated as soft labels.
- Score: 0.7499722271664147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infertility is a global health problem, and an increasing number of couples
are seeking medical assistance to achieve reproduction; at least half of
infertility cases are attributable to men. The success rate of assisted
reproductive technologies depends on sperm assessment, in which experts
determine whether sperm can be used for reproduction based on sperm morphology
and motility. Previous sperm assessment studies with deep learning have used
datasets comprising images that include only sperm heads, which cannot capture
motility or the morphology of other sperm parts. Furthermore, the labels of
these datasets are one-hot, which provides insufficient support for experts,
because assessment results are inconsistent between experts and there is no
absolute answer. Therefore, we constructed a video dataset for sperm
assessment whose videos include the sperm head as well as the neck and tail,
and annotated its labels as soft labels. Furthermore, we proposed a sperm
assessment framework and a neural network, RoSTFine, for sperm video
recognition. Experimental results showed that RoSTFine improved sperm
assessment performance compared to existing video recognition models and
focused strongly on important sperm parts (i.e., the head and neck).
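As a concrete illustration of the soft-label idea above, the sketch below shows how several experts' votes can be turned into a probability-style target and how a classifier can be trained against it. The five-grade scale and the three annotating experts are illustrative assumptions, not details taken from the paper, and the loss is plain cross-entropy against soft targets rather than the paper's exact training objective.

```python
# Minimal sketch of soft-label supervision for sperm assessment.
# Assumptions (not from the paper): a 5-grade assessment scale and 3 experts.
import torch
import torch.nn.functional as F

NUM_CLASSES = 5  # hypothetical number of assessment grades


def soft_label(expert_votes):
    """Turn per-expert grade votes into a soft label (vote frequencies)."""
    counts = torch.bincount(torch.tensor(expert_votes), minlength=NUM_CLASSES)
    return counts.float() / counts.sum()


def soft_cross_entropy(logits, targets):
    """Cross-entropy against probabilistic (soft) targets."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()


# Experts disagree (grades 2, 2, 3) -> target [0.00, 0.00, 0.67, 0.33, 0.00]
target = soft_label([2, 2, 3]).unsqueeze(0)
logits = torch.randn(1, NUM_CLASSES, requires_grad=True)  # stand-in for a model's output
loss = soft_cross_entropy(logits, target)
loss.backward()
```

Compared with a one-hot target, the soft target keeps the experts' disagreement in the loss, so the model is not forced to commit to a single grade when the annotators themselves did not.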
Related papers
- Predicting DNA fragmentation: A non-destructive analogue to chemical assays using machine learning [0.0]
Global infertility rates are increasing, with 2.5% of all births being assisted by in vitro fertilisation (IVF) in 2022.
The assessment of sperm DNA is traditionally done through chemical assays which render sperm cells ineligible for IVF.
With the advent of state-of-the-art machine learning and its exceptional performance in many sectors, this work builds on these successes, rendering a predictive model that preserves sperm integrity and allows for optimal selection of sperm for IVF.
arXiv Detail & Related papers (2024-09-20T08:04:12Z)
- CS3: Cascade SAM for Sperm Segmentation [31.108179290836848]
We present the Cascade SAM for Sperm (CS3), an unsupervised approach specifically designed to tackle the issue of sperm overlap.
In collaboration with leading medical institutions, we have compiled a dataset comprising approximately 2,000 unlabeled sperm images.
Experimental results demonstrate superior performance of CS3 compared to existing methods.
arXiv Detail & Related papers (2024-07-04T09:32:34Z)
- Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP [84.08476873280644]
Just 13% of papers had (i) sufficiently low barriers to reproduction and (ii) enough obtainable information to be considered for reproduction.
As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach.
arXiv Detail & Related papers (2023-05-02T17:46:12Z)
- VISEM-Tracking, a human spermatozoa tracking dataset [3.1673957150053713]
We provide a dataset called VISEM-Tracking with 20 video recordings of 30 seconds each (comprising 29,196 frames) of wet sperm preparations.
We present baseline sperm detection performance using the YOLOv5 deep learning (DL) model trained on the VISEM-Tracking dataset (a minimal detection sketch follows this list).
arXiv Detail & Related papers (2022-12-06T09:25:52Z)
- Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions [138.49522643425334]
Bongard-HOI is a new visual reasoning benchmark that focuses on compositional learning of human-object interactions from natural images.
It is inspired by two desirable characteristics from the classical Bongard problems (BPs): 1) few-shot concept learning, and 2) context-dependent reasoning.
Bongard-HOI presents a substantial challenge to today's visual recognition models.
arXiv Detail & Related papers (2022-05-27T07:36:29Z)
- Improving Human Sperm Head Morphology Classification with Unsupervised Anatomical Feature Distillation [3.666202958045386]
Recent deep learning (DL) morphology analysis methods achieve promising benchmark results, but leave performance and robustness on the table.
We introduce a new DL training framework that leverages anatomical and image priors from human sperm microscopy crops to extract useful features without additional labeling cost.
We evaluate our new approach on two public sperm datasets and achieve state-of-the-art performances.
arXiv Detail & Related papers (2022-02-15T04:58:29Z)
- What Is Considered Complete for Visual Recognition? [110.43159801737222]
We advocate for a new type of pre-training task named learning-by-compression.
The computational models are optimized to represent the visual data using compact features.
Semantic annotations, when available, play the role of weak supervision.
arXiv Detail & Related papers (2021-05-28T16:59:14Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added value of visual analytics tools that combine the complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
- Sperm Detection and Tracking in Phase-Contrast Microscopy Image Sequences using Deep Learning and Modified CSR-DCF [0.0]
In this article, we use RetinaNet, a deep fully convolutional neural network, as the object detector.
The average precision of the detection phase is 99.1%, and the F1 score of the tracking method is 96.61%.
These results can be a great help in studies investigating sperm behavior and analyzing fertility possibility.
arXiv Detail & Related papers (2020-02-11T00:38:47Z)
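For the VISEM-Tracking entry above, which reports YOLOv5 detection baselines, the following sketch shows the general shape of such a frame-level detection run via the public ultralytics/yolov5 torch.hub interface. The checkpoint name visem_yolov5s.pt and the frame filename are hypothetical placeholders; reproducing the reported baselines would require fine-tuning on the VISEM-Tracking data with the repository's own training script, which is not shown here.

```python
# Sketch of frame-level sperm detection with YOLOv5 via the torch.hub API.
# 'visem_yolov5s.pt' and 'frame_000001.png' are hypothetical placeholders.
import torch

# Load a YOLOv5 model; the 'custom' entry point accepts a fine-tuned checkpoint.
model = torch.hub.load("ultralytics/yolov5", "custom", path="visem_yolov5s.pt")
model.conf = 0.25  # confidence threshold for reported detections

results = model("frame_000001.png")    # paths, PIL images, or numpy arrays all work
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
print(detections.head())
```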
This list is automatically generated from the titles and abstracts of the papers on this site.