MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical
Vision-Language Models
- URL: http://arxiv.org/abs/2402.09262v2
- Date: Fri, 16 Feb 2024 16:36:00 GMT
- Title: MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical
Vision-Language Models
- Authors: Corentin Royer, Bjoern Menze and Anjany Sekuboyina
- Abstract summary: MultiMedEval is an open-source toolkit for fair and reproducible evaluation of large medical vision-language models (VLMs).
It comprehensively assesses model performance on six multi-modal tasks, conducted over 23 datasets and spanning over 11 medical domains.
We open-source a Python toolkit with a simple interface and setup process, enabling the evaluation of any VLM in just a few lines of code.
- Score: 1.3535643703577176
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce MultiMedEval, an open-source toolkit for fair and reproducible
evaluation of large medical vision-language models (VLMs). MultiMedEval
comprehensively assesses the models' performance on a broad array of six
multi-modal tasks, conducted over 23 datasets and spanning over 11 medical
domains. The chosen tasks and performance metrics are based on their widespread
adoption in the community and their diversity, ensuring a thorough evaluation
of the model's overall generalizability. We open-source a Python toolkit
(github.com/corentin-ryr/MultiMedEval) with a simple interface and setup
process, enabling the evaluation of any VLM in just a few lines of code. Our
goal is to simplify the intricate landscape of VLM evaluation, thus promoting
fair and uniform benchmarking of future models.
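The abstract's claim that any VLM can be evaluated "in just a few lines of code" suggests a callable-based interface: the user wraps their model in a function that maps a batch of prompts to answers, and the toolkit handles dataset loading and metric computation. The sketch below is a minimal, self-contained illustration of that pattern only; the function names, the toy dataset, and the exact-match metric are hypothetical and are not taken from MultiMedEval's actual API (see the linked repository for the real interface).

```python
# Illustrative sketch of the "evaluate any VLM via a callable" pattern.
# All names here are hypothetical stand-ins, not MultiMedEval's real API.
from typing import Callable, Dict, List

# A toy closed-ended QA set standing in for a benchmark dataset.
TOY_DATASET = [
    {"question": "Is the lung field clear? (yes/no)", "answer": "yes"},
    {"question": "Is a fracture visible? (yes/no)", "answer": "no"},
]

def evaluate(batcher: Callable[[List[str]], List[str]],
             dataset: List[Dict[str, str]]) -> Dict[str, float]:
    """Run the model callable over the dataset and score exact-match accuracy."""
    questions = [sample["question"] for sample in dataset]
    predictions = batcher(questions)
    correct = sum(
        pred.strip().lower() == sample["answer"]
        for pred, sample in zip(predictions, dataset)
    )
    return {"accuracy": correct / len(dataset)}

# A stand-in "model": any VLM wrapper exposing this signature could be plugged in.
def dummy_model(prompts: List[str]) -> List[str]:
    return ["yes" for _ in prompts]

print(evaluate(dummy_model, TOY_DATASET))  # {'accuracy': 0.5}
```

The key design point this sketch mirrors is that the benchmark owns the data and metrics while the user supplies only a batch-in, answers-out function, which is what makes swapping models trivial.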
Related papers
- LMMs-Eval: Reality Check on the Evaluation of Large Multimodal Models [71.8065384742686]
LMMS-EVAL is a unified and standardized multimodal benchmark framework with over 50 tasks and more than 10 models.
LMMS-EVAL LITE is a pruned evaluation toolkit that emphasizes both coverage and efficiency.
Multimodal LIVEBENCH utilizes continuously updating news and online forums to assess models' generalization abilities in the wild.
arXiv Detail & Related papers (2024-07-17T17:51:53Z)
- VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models [78.76009461738299]
We present an open-source toolkit for evaluating large multi-modality models based on PyTorch.
VLMEvalKit implements over 70 different large multi-modality models, including both proprietary APIs and open-source models.
We host OpenVLM Leaderboard to track the progress of multi-modality learning research.
arXiv Detail & Related papers (2024-07-16T13:06:15Z)
- Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models [17.643421997037514]
We propose a novel framework that tackles both discriminative and generative multimodal medical tasks.
The learning of Med-MoE consists of three steps: multimodal medical alignment, instruction tuning and routing, and domain-specific MoE tuning.
Our model can achieve performance superior to or on par with state-of-the-art baselines.
arXiv Detail & Related papers (2024-04-16T02:35:17Z)
- MM-BigBench: Evaluating Multimodal Models on Multimodal Content Comprehension Tasks [56.60050181186531]
We introduce MM-BigBench, which incorporates a diverse range of metrics to offer an extensive evaluation of the performance of various models and instructions.
Our paper evaluates a total of 20 language models (14 MLLMs) on 14 multimodal datasets spanning 6 tasks, with 10 instructions for each task, and derives novel insights.
arXiv Detail & Related papers (2023-10-13T11:57:04Z)
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities [159.9847317300497]
We propose MM-Vet, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks.
Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes.
arXiv Detail & Related papers (2023-08-04T17:59:47Z)
- MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning [110.54752872873472]
MultiZoo is a public toolkit consisting of standardized implementations of > 20 core multimodal algorithms.
MultiBench is a benchmark spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas.
arXiv Detail & Related papers (2023-06-28T17:59:10Z)
- Evaluation of Multi-indicator And Multi-organ Medical Image Segmentation Models [4.302265156822829]
"U-shaped" neural networks with encoder and decoder structures have gained popularity in the field of medical image segmentation.
We propose a comprehensive method for evaluating medical image segmentation models in multi-indicator and multi-organ settings.
MIMO offers novel insights into multi-indicator and multi-organ medical image evaluation.
arXiv Detail & Related papers (2023-06-01T08:35:51Z)
- MvCo-DoT: Multi-View Contrastive Domain Transfer Network for Medical Report Generation [42.804058630251305]
We propose the first multi-view medical report generation model, called MvCo-DoT.
MvCo-DoT first proposes a multi-view contrastive learning (MvCo) strategy to help the deep-reinforcement-learning-based model exploit the consistency of multi-view inputs.
Extensive experiments on the IU X-Ray public dataset show that MvCo-DoT outperforms the SOTA medical report generation baselines in all metrics.
arXiv Detail & Related papers (2023-04-15T03:42:26Z)
- ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models [102.63817106363597]
We build ELEVATER, the first benchmark to compare and evaluate pre-trained language-augmented visual models.
It consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge.
We will release our toolkit and evaluation platforms for the research community.
arXiv Detail & Related papers (2022-04-19T10:23:42Z)
- MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification [14.820951153262685]
We introduce a new dataset, MELINDA, for Multimodal biomEdicaL experImeNt methoD clAssification.
The dataset is collected in a fully automated distant supervision manner, where the labels are obtained from an existing curated database.
We benchmark various state-of-the-art NLP and computer vision models, including unimodal models which only take either caption texts or images as inputs.
arXiv Detail & Related papers (2020-12-16T19:11:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.