[Citation needed] Data usage and citation practices in medical imaging
conferences
- URL: http://arxiv.org/abs/2402.03003v1
- Date: Mon, 5 Feb 2024 13:41:22 GMT
- Title: [Citation needed] Data usage and citation practices in medical imaging
conferences
- Authors: Théo Sourget, Ahmet Akkoç, Stinna Winther, Christine Lyngbye
Galsgaard, Amelia Jiménez-Sánchez, Dovile Juodelyte, Caroline Petitjean,
Veronika Cheplygina
- Abstract summary: We present two open-source tools that could help with the detection of dataset usage.
We studied the usage of 20 publicly available medical datasets in papers from MICCAI and MIDL.
Our findings show that usage is concentrated on a limited set of datasets.
- Score: 2.0551097461599297
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Medical imaging papers often focus on methodology, but the quality of the
algorithms and the validity of the conclusions are highly dependent on the
datasets used. As creating datasets requires substantial effort, researchers
often use publicly available ones. However, there is no widely adopted standard
for citing the datasets used in scientific papers, which makes tracking dataset
usage difficult. In this work, we present two open-source tools we
created that could help with the detection of dataset usage, a pipeline
\url{https://github.com/TheoSourget/Public_Medical_Datasets_References} using
OpenAlex and full-text analysis, and a PDF annotation software
\url{https://github.com/TheoSourget/pdf_annotator} used in our study to
manually label the presence of datasets. We applied both tools on a study of
the usage of 20 publicly available medical datasets in papers from MICCAI and
MIDL. We compute the proportion, and its evolution between 2013 and 2023, of
three types of dataset presence in a paper: cited, mentioned in the full text,
and both cited and mentioned. Our findings show that usage is concentrated on a
limited set of datasets. We also highlight heterogeneous citation practices,
which make automated tracking difficult.
Related papers
- Using Large Language Models to Enrich the Documentation of Datasets for Machine Learning [1.8270184406083445]
We explore using large language models (LLM) and prompting strategies to automatically extract dimensions from documents.
Our approach could aid data publishers and practitioners in creating machine-readable documentation.
We have released an open-source tool implementing our approach and a replication package, including the experiments' code and results.
arXiv Detail & Related papers (2024-04-04T10:09:28Z)
- Copycats: the many lives of a publicly available medical imaging dataset [12.98380178359767]
Medical Imaging (MI) datasets are fundamental to artificial intelligence in healthcare.
MI datasets used to be proprietary, but have become increasingly available to the public, including on community-contributed platforms (CCPs) like Kaggle or HuggingFace.
While open data is important to enhance the redistribution of data's public value, we find that the current CCP governance model fails to uphold the quality needed and recommended practices for sharing, documenting, and evaluating datasets.
arXiv Detail & Related papers (2024-02-09T12:01:22Z)
- A large dataset curation and benchmark for drug target interaction [0.7699646945563469]
Bioactivity data plays a key role in drug discovery and repurposing.
We propose a way to standardize and efficiently represent a very large dataset curated from multiple public sources.
arXiv Detail & Related papers (2024-01-30T17:06:25Z)
- Interactive Distillation of Large Single-Topic Corpora of Scientific Papers [1.2954493726326113]
A more robust but time-consuming approach is to build the dataset constructively, with a subject matter expert handpicking documents.
Here we showcase a new tool, based on machine learning, for constructively generating targeted datasets of scientific literature.
arXiv Detail & Related papers (2023-09-19T17:18:36Z)
- Replication: Contrastive Learning and Data Augmentation in Traffic Classification Using a Flowpic Input Representation [47.95762911696397]
We reproduce [16] on the same datasets and replicate its most salient aspect (the importance of data augmentation) on three additional public datasets.
While we confirm most of the original results, we also find a 20% accuracy drop in some of the investigated scenarios due to a data shift in the original dataset.
arXiv Detail & Related papers (2023-09-18T12:55:09Z)
- infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-information [68.76707843019886]
infoVerse is a universal framework for dataset characterization.
infoVerse captures multidimensional characteristics of datasets by incorporating various model-driven meta-information.
In three real-world applications (data pruning, active learning, and data annotation), the samples chosen on infoVerse space consistently outperform strong baselines.
arXiv Detail & Related papers (2023-05-30T18:12:48Z)
- Going beyond research datasets: Novel intent discovery in the industry setting [60.90117614762879]
This paper proposes methods to improve the intent discovery pipeline deployed in a large e-commerce platform.
We show the benefit of pre-training language models on in-domain data: both self-supervised and with weak supervision.
We also devise the best method to utilize the conversational structure (i.e., question and answer) of real-life datasets during fine-tuning for clustering tasks, which we call Conv.
arXiv Detail & Related papers (2023-05-09T14:21:29Z)
- Simple multi-dataset detection [83.9604523643406]
We present a simple method for training a unified detector on multiple large-scale datasets.
We show how to automatically integrate dataset-specific outputs into a common semantic taxonomy.
Our approach does not require manual taxonomy reconciliation.
arXiv Detail & Related papers (2021-02-25T18:55:58Z)
- Partially-Aligned Data-to-Text Generation with Distant Supervision [69.15410325679635]
We propose a new generation task called Partially-Aligned Data-to-Text Generation (PADTG)
It is more practical since it utilizes automatically annotated data for training and thus considerably expands the application domains.
Our framework outperforms all baseline models and verifies the feasibility of utilizing partially-aligned data.
arXiv Detail & Related papers (2020-10-03T03:18:52Z)
- Machine Identification of High Impact Research through Text and Image Analysis [0.4737991126491218]
We present a system to automatically separate papers with a high likelihood of gaining citations from those with a low likelihood.
Our system uses both a visual classifier, useful for surmising a document's overall appearance, and a text classifier, for making content-informed decisions.
arXiv Detail & Related papers (2020-05-20T19:12:24Z)
- Open Graph Benchmark: Datasets for Machine Learning on Graphs [86.96887552203479]
We present the Open Graph Benchmark (OGB) to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains.
For each dataset, we provide a unified evaluation protocol using meaningful application-specific data splits and evaluation metrics.
arXiv Detail & Related papers (2020-05-02T03:09:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.