Biomedical image analysis competitions: The state of current
participation practice
- URL: http://arxiv.org/abs/2212.08568v2
- Date: Tue, 12 Sep 2023 09:16:46 GMT
- Title: Biomedical image analysis competitions: The state of current
participation practice
- Authors: Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde
Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina,
Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos
Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage,
Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal
Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge
Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier
Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah
Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen,
Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong
Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel,
Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin
Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin
van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang,
Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David
Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian
Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward
Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis,
Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat,
Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu
Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng,
Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman
Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha
Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan,
Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng
Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara
Bosticardo, Jack Breen, Mikael Brudfors, Raphael Brüngel, Mariano Cabezas,
Alberto Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong
Cho, Min-Kook Choi, Chuantao Xie, Dana Cobzas, Julien
Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu
Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Sergio Escalera, Di Fan,
Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René
Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida,
Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo
Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford,
Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo
Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali
Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, SeungBum
Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer,
Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi,
SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin,
Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha
Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan,
Suleiman A. Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi
Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom
Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao
Li, Ling Li, Xingyu Li, Fuyuan Liao, KuanLun Liao, Arlindo Limede Oliveira,
Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao
Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan
Lu, Imanol Luengo, Christina B. Lund, Huan Minh Luu, Yi Lv, Yi Lv, Uzay
Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard
McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed
Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony
Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil
Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed,
Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira,
David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park,
Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak,
Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik,
Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar
Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus
Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa
Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen,
Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor
Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz
Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid,
Jiacheng Wang, YiFei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek
Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu,
Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani,
Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu,
Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai
Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao,
Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou,
Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
- Abstract summary: We designed a survey to shed light on the status quo of algorithm development in the specific field of biomedical image analysis.
The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics.
Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures.
- Score: 143.52578599912326
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The number of international benchmarking competitions is steadily increasing
in various fields of machine learning (ML) research and practice. So far,
however, little is known about the common practice as well as bottlenecks faced
by the community in tackling the research questions posed. To shed light on the
status quo of algorithm development in the specific field of biomedical image
analysis, we designed an international survey that was issued to all
participants of challenges conducted in conjunction with the IEEE ISBI 2021 and
MICCAI 2021 conferences (80 competitions in total). The survey covered
participants' expertise and working environments, their chosen strategies, as
well as algorithm characteristics. A median of 72% of challenge participants
took part in the survey. According to our results, knowledge exchange was the
primary incentive (70%) for participation, while the reception of prize money
played only a minor role (16%). While a median of 80 working hours was spent on
method development, a large portion of participants (32%) stated that they did
not have enough time for method development. 25% perceived the infrastructure
to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of
these, 84% were based on standard architectures. 43% of the respondents
reported that the data samples (e.g., images) were too large to be processed at
once. This was most commonly addressed by patch-based training (69%),
downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks.
K-fold cross-validation on the training set was performed by only 37% of the
participants, and only 50% of the participants performed ensembling, using
either multiple identical models (61%) or heterogeneous models (39%). 48% of the
respondents applied postprocessing steps.
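The abstract repeatedly points to a small set of engineering strategies: cutting large samples into patches or downsampling them, running k-fold cross-validation on the training set, and ensembling the resulting models. The sketch below is not taken from the paper; it is a minimal illustration of how these pieces might fit together, with a scikit-learn classifier standing in for a deep network and the helper names (extract_patches, downsample) introduced here purely for illustration.

```python
# Illustrative sketch (not from the paper): combining the strategies reported
# in the survey -- patch-based training, downsampling, k-fold cross-validation,
# and ensembling by averaging predictions. A scikit-learn classifier stands in
# for a deep network; extract_patches and downsample are hypothetical helpers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold


def downsample(volume, factor=2):
    # Naive strided downsampling to reduce the memory footprint of large samples.
    return volume[::factor, ::factor, ::factor]


def extract_patches(volume, size=8):
    # Cut a 3D volume into non-overlapping cubic patches (patch-based training).
    d, h, w = volume.shape
    return np.stack([
        volume[z:z + size, y:y + size, x:x + size]
        for z in range(0, d - size + 1, size)
        for y in range(0, h - size + 1, size)
        for x in range(0, w - size + 1, size)
    ])


# Synthetic stand-in data: 20 small "volumes" with binary labels.
rng = np.random.default_rng(0)
volumes = rng.normal(size=(20, 32, 32, 32))
labels = rng.integers(0, 2, size=20)

# One crude feature per patch (its mean intensity) after downsampling.
features = np.array([
    extract_patches(downsample(v)).mean(axis=(1, 2, 3)) for v in volumes
])

# K-fold cross-validation on the training set: train one model per fold.
fold_models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(features):
    model = LogisticRegression(max_iter=1000).fit(features[train_idx], labels[train_idx])
    print(f"fold validation accuracy: {model.score(features[val_idx], labels[val_idx]):.2f}")
    fold_models.append(model)

# Ensembling: average the class probabilities of the fold models on a new case.
new_case = extract_patches(downsample(rng.normal(size=(32, 32, 32)))).mean(axis=(1, 2, 3))
ensemble_probs = np.mean([m.predict_proba(new_case[None])[0] for m in fold_models], axis=0)
print("ensembled class probabilities:", ensemble_probs)
```

Training one model per fold and averaging their predictions at test time is one common way to arrive at the "ensembling based on multiple identical models" that the abstract reports for 61% of the ensembling participants.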
Related papers
- Collaborative Learning for Annotation-Efficient Volumetric MR Image
Segmentation [5.462792626065119]
The aim of this study is to build a deep learning method exploring sparse annotations, namely only a single 2D slice label for each 3D training MR image.
A collaborative learning method by integrating the strengths of semi-supervised and self-supervised learning schemes was developed.
The proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B-IoU significantly by more than 10.0% for prostate segmentation.
arXiv Detail & Related papers (2023-12-18T07:02:37Z)
- One-Shot Learning for Periocular Recognition: Exploring the Effect of
Domain Adaptation and Data Bias on Deep Representations [59.17685450892182]
We investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition.
We improved on state-of-the-art results that relied on networks trained with biometric datasets containing millions of images.
Traditional algorithms like SIFT can outperform CNNs in situations with limited data.
arXiv Detail & Related papers (2023-07-11T09:10:16Z)
- The STOIC2021 COVID-19 AI challenge: applying reusable training
methodologies to private data [60.94672667514737]
This study implements the Type Three (T3) challenge format, which allows for training solutions on private data.
With T3, challenge organizers train a solution provided by the participants on sequestered training data.
The winning solution obtained an area under the receiver operating characteristic curve for discerning between severe and non-severe COVID-19 of 0.815.
arXiv Detail & Related papers (2023-06-18T05:48:28Z)
- Why is the winner the best? [78.74409216961632]
We performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021.
Winning solutions typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%).
Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases.
arXiv Detail & Related papers (2023-03-30T21:41:42Z)
- Retrospective on the 2021 BASALT Competition on Learning from Human
Feedback [92.37243979045817]
The goal of the competition was to promote research towards agents that use learning from human feedback (LfHF) techniques to solve open-world tasks.
Rather than mandating the use of LfHF techniques, we described four tasks in natural language to be accomplished in the video game Minecraft.
Teams developed a diverse range of LfHF algorithms across a variety of possible human feedback types.
arXiv Detail & Related papers (2022-04-14T17:24:54Z)
- Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS)
Benchmark [48.30502612686276]
Lung cancer is one of the deadliest cancers, and its effective diagnosis and treatment depend on the accurate delineation of the tumor.
Human-centered segmentation, which is currently the most common approach, is subject to inter-observer variability.
The 2018 VIP Cup started with a global engagement from 42 countries to access the competition data.
In a nutshell, all the algorithms proposed during the competition are based on deep learning models combined with a false positive reduction technique.
arXiv Detail & Related papers (2022-01-03T03:06:38Z)
- MIcro-Surgical Anastomose Workflow recognition challenge report [12.252332806968756]
The "MIcro-Surgical Anastomose Workflow recognition on training sessions" (MISAW) challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels.
This data set was composed of videos, kinematics, and workflow annotations described at three different granularity levels: phase, step, and activity.
The best models achieved more than 95% AD-Accuracy for phase recognition, 80% for step recognition, 60% for activity recognition, and 75% for all granularity levels.
arXiv Detail & Related papers (2021-03-24T11:34:09Z)
- OpenKBP: The open-access knowledge-based planning grand challenge [0.6157382820537718]
We hosted OpenKBP, a 2020 AAPM Grand Challenge, and challenged participants to develop the best method for predicting the dose distribution from contoured CT images.
The models were evaluated according to two separate scores: (1) a dose score, which evaluates the full 3D dose distributions, and (2) a dose-volume histogram (DVH) score, which evaluates a set of DVH metrics; a simplified sketch of both scores follows this list.
The Challenge attracted 195 participants from 28 countries, and 73 of those participants formed 44 teams in the validation phase, which received a total of 1750 submissions.
arXiv Detail & Related papers (2020-11-28T06:45:06Z)
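The OpenKBP entry above describes a two-part evaluation: a dose score over the full 3D dose distribution and a DVH score over a set of DVH metrics. The sketch below is only a reading aid under stated assumptions: the dose score is approximated as the mean absolute voxel-wise dose error inside a body mask, and the DVH score as the mean absolute error over two hypothetical DVH points per structure (D95 and mean dose); the official challenge defines its own structure sets and DVH metrics.

```python
# Simplified sketch of the two OpenKBP-style scores described above, under the
# assumptions stated in the text; the official challenge uses its own structure
# sets and DVH metrics, so this is only an approximation for illustration.
import numpy as np


def dose_score(pred_dose, ref_dose, mask):
    # Mean absolute dose difference (Gy) over voxels inside the evaluation mask.
    return np.abs(pred_dose[mask] - ref_dose[mask]).mean()


def dvh_points(dose, structure_mask):
    # Tiny DVH summary: D95 (dose covering 95% of the structure) and mean dose.
    voxels = dose[structure_mask]
    return {"D95": np.percentile(voxels, 5), "mean": voxels.mean()}


def dvh_score(pred_dose, ref_dose, structures):
    # Mean absolute error over the DVH points of all structures.
    errors = []
    for structure_mask in structures.values():
        pred = dvh_points(pred_dose, structure_mask)
        ref = dvh_points(ref_dose, structure_mask)
        errors.extend(abs(pred[k] - ref[k]) for k in pred)
    return float(np.mean(errors))


# Synthetic example: a 3D dose grid in Gy with one target and one organ at risk.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 70.0, size=(32, 32, 32))
pred = ref + rng.normal(0.0, 2.0, size=ref.shape)  # imperfect prediction
body = np.ones_like(ref, dtype=bool)
structures = {"target": ref > 60.0, "organ_at_risk": ref < 20.0}

print(f"dose score (Gy): {dose_score(pred, ref, body):.2f}")
print(f"DVH score (Gy):  {dvh_score(pred, ref, structures):.2f}")
```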