VISEM-Tracking, a human spermatozoa tracking dataset
- URL: http://arxiv.org/abs/2212.02842v5
- Date: Wed, 10 May 2023 07:10:31 GMT
- Title: VISEM-Tracking, a human spermatozoa tracking dataset
- Authors: Vajira Thambawita, Steven A. Hicks, Andrea M. Storås, Thu Nguyen,
Jorunn M. Andersen, Oliwia Witczak, Trine B. Haugen, Hugo L. Hammer, Pål
Halvorsen, Michael A. Riegler
- Abstract summary: We provide a dataset called VISEM-Tracking with 20 video recordings of 30 seconds (comprising 29,196 frames) of wet sperm preparations.
We present baseline sperm detection performances using the YOLOv5 deep learning (DL) model trained on the VISEM-Tracking dataset.
- Score: 3.1673957150053713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A manual assessment of sperm motility requires microscopy observation, which
is challenging due to the fast-moving spermatozoa in the field of view. To
obtain correct results, manual evaluation requires extensive training.
Therefore, computer-assisted sperm analysis (CASA) has become increasingly used
in clinics. Despite this, more data is needed to train supervised machine
learning approaches in order to improve accuracy and reliability in the
assessment of sperm motility and kinematics. In this regard, we provide a
dataset called VISEM-Tracking with 20 video recordings of 30 seconds
(comprising 29,196 frames) of wet sperm preparations with manually annotated
bounding-box coordinates and a set of sperm characteristics analyzed by experts
in the domain. In addition to the annotated data, we provide unlabeled video
clips for easy-to-use access and analysis of the data via methods such as self-
or unsupervised learning. As part of this paper, we present baseline sperm
detection performances using the YOLOv5 deep learning (DL) model trained on the
VISEM-Tracking dataset. As a result, we show that the dataset can be used to
train complex DL models to analyze spermatozoa.
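The dataset's bounding-box annotations can be consumed directly by detectors such as YOLOv5. As a minimal sketch, assuming the labels follow the common YOLO text format (one `class cx cy w h` line per object, coordinates normalized to [0, 1]); the frame size below is hypothetical, not taken from the dataset:

```python
# Sketch: convert a YOLO-style normalized bounding box to pixel coordinates.
# Assumes the common YOLO label format (class cx cy w h, normalized to [0, 1]);
# the 640x480 frame size is a hypothetical example.

def yolo_to_pixels(line: str, frame_w: int, frame_h: int):
    """Parse one 'class cx cy w h' label line into (class_id, x1, y1, x2, y2)."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * frame_w, float(cy) * frame_h
    w, h = float(w) * frame_w, float(h) * frame_h
    x1, y1 = cx - w / 2, cy - h / 2
    return int(cls), round(x1), round(y1), round(x1 + w), round(y1 + h)

# Example: one annotated spermatozoon in a hypothetical 640x480 frame.
print(yolo_to_pixels("0 0.5 0.5 0.1 0.2", 640, 480))  # → (0, 288, 192, 352, 288)
```

Parsing labels this way makes it straightforward to visualize the annotations or convert them to other detection formats before training.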
Related papers
- Self-supervised Representation Learning for Cell Event Recognition through Time Arrow Prediction [23.611375087515963]
Deep-learning segmentation and tracking methods rely on large amounts of high-quality annotations to work effectively.
In this work, we explore an alternative solution: using feature annotations from self-supervised representation learning (SSRL) for the downstream task of cell event recognition.
Our analysis also provides insight into applications of the SSRL using TAP in live-cell microscopy.
arXiv Detail & Related papers (2024-11-06T13:54:26Z)
- DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation [83.30006900263744]
Data analysis is a crucial analytical process to generate in-depth studies and conclusive insights.
We propose to automatically generate high-quality answer annotations leveraging the code-generation capabilities of LLMs.
Our DACO-RL algorithm is judged by human annotators to produce more helpful answers than the SFT model in 57.72% of cases.
arXiv Detail & Related papers (2024-03-04T22:47:58Z)
- Automated Sperm Assessment Framework and Neural Network Specialized for Sperm Video Recognition [0.7499722271664147]
Infertility is a global health problem, and an increasing number of couples are seeking medical assistance to achieve reproduction.
Previous sperm assessment studies with deep learning have used datasets comprising only sperm heads.
We constructed a video dataset for sperm assessment whose videos include the sperm head as well as the neck and tail, with labels annotated as soft labels.
arXiv Detail & Related papers (2023-11-10T08:23:24Z)
- A Novel Dataset for Evaluating and Alleviating Domain Shift for Human Detection in Agricultural Fields [59.035813796601055]
We evaluate the impact of domain shift on human detection models trained on well known object detection datasets when deployed on data outside the distribution of the training set.
We introduce the OpenDR Humans in Field dataset, collected in the context of agricultural robotics applications, using the Robotti platform.
arXiv Detail & Related papers (2022-09-27T07:04:28Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- A Machine Learning Framework for Automatic Prediction of Human Semen Motility [7.167550590287388]
Several regression models are trained to automatically predict the percentage (0 to 100) of progressive, non-progressive, and immotile spermatozoa in a given sample.
Four machine learning models are evaluated: a linear Support Vector Regressor (SVR), a Multilayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a Recurrent Neural Network (RNN).
Best results for predicting motility are achieved by using the Crocker-Grier algorithm to track sperm cells in an unsupervised way.
arXiv Detail & Related papers (2021-09-16T15:26:40Z)
- Visual Distant Supervision for Scene Graph Generation [66.10579690929623]
Scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation.
We propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines.
arXiv Detail & Related papers (2021-03-29T06:35:24Z)
- Predicting Semen Motility using three-dimensional Convolutional Neural Networks [0.0]
We propose an improved deep learning based approach using three-dimensional convolutional neural networks to predict sperm motility from microscopic videos of the semen sample.
Our models indicate that deep learning based automatic semen analysis may become a valuable and effective tool in fertility and IVF labs.
arXiv Detail & Related papers (2021-01-08T07:38:52Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added-value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
- Sperm Detection and Tracking in Phase-Contrast Microscopy Image Sequences using Deep Learning and Modified CSR-DCF [0.0]
In this article, we use RetinaNet, a deep fully convolutional neural network as the object detector.
The average precision of the detection phase is 99.1%, and the F1 score of the tracking method is 96.61%.
These results can be a great help in studies investigating sperm behavior and analyzing fertility possibility.
arXiv Detail & Related papers (2020-02-11T00:38:47Z)
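Several entries above report detection quality as average precision and F1. As a quick reference, the F1 score is the harmonic mean of precision and recall; a minimal sketch with hypothetical true/false positive and false negative counts (not figures from any of the papers above):

```python
# Sketch: precision, recall, and F1 from detection counts.
# The tp/fp/fn values in the example are hypothetical, for illustration only.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for one detection run."""
    precision = tp / (tp + fp)  # fraction of predicted boxes that are correct
    recall = tp / (tp + fn)     # fraction of ground-truth boxes that are found
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=95, fp=3, fn=5), 4))  # → 0.9596
```

Equivalently, F1 = 2·TP / (2·TP + FP + FN), which is often the more convenient form when aggregating over frames.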
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.