CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark
- URL: http://arxiv.org/abs/2401.11944v2
- Date: Mon, 18 Mar 2024 09:02:03 GMT
- Title: CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding Benchmark
- Authors: Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, Haoran Zhang, Xingwei Qu, Junjie Wang, Ruibin Yuan, Yizhi Li, Zekun Wang, Yudong Liu, Yu-Hsuan Tsai, Fengji Zhang, Chenghua Lin, Wenhao Huang, Wenhu Chen, Jie Fu
- Abstract summary: We introduce CMMMU, a new Chinese Massive Multi-discipline Multimodal Understanding benchmark designed to evaluate LMMs on tasks demanding college-level subject knowledge and deliberate reasoning in a Chinese context.
CMMMU includes 12k manually collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering.
CMMMU focuses on complex perception and reasoning with domain-specific knowledge in the Chinese context.
- Score: 71.65760716397347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the capabilities of large multimodal models (LMMs) continue to advance, evaluating their performance becomes an increasingly pressing need. The gap is even larger when it comes to evaluating the advanced knowledge and reasoning abilities of LMMs in non-English contexts such as Chinese. We introduce CMMMU, a new Chinese Massive Multi-discipline Multimodal Understanding benchmark designed to evaluate LMMs on tasks demanding college-level subject knowledge and deliberate reasoning in a Chinese context. CMMMU is inspired by, and strictly follows, the annotation and analysis pattern of MMMU. Like its companion MMMU, CMMMU includes 12k manually collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span 30 subjects and comprise 39 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. CMMMU focuses on complex perception and reasoning with domain-specific knowledge in the Chinese context. We evaluate 11 open-source LMMs and the proprietary GPT-4V(ision). Even GPT-4V achieves an accuracy of only 42%, indicating substantial room for improvement. CMMMU will spur the community to build next-generation LMMs on the path toward expert artificial intelligence, and will promote the democratization of LMMs by providing diverse language contexts.
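A typical harness for a multiple-choice benchmark like this prompts the model with the image and the lettered options, extracts the predicted letter from the free-form reply, and averages correctness per discipline. Below is a minimal sketch of that loop; the record fields, the Chinese answer-format instruction, and the `model.generate(image, prompt)` interface are illustrative assumptions, not CMMMU's official evaluation code.

```python
# Minimal sketch of a multiple-choice evaluation loop for a CMMMU-style
# benchmark. The record fields and `model.generate(image, prompt)` are
# assumed interfaces for illustration, not the official harness.
import re
from collections import defaultdict

def extract_choice(response):
    """Pull the first standalone option letter (A-D) out of a free-form reply."""
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else None

def evaluate(model, records):
    """records: dicts with 'subject', 'image', 'question', 'options'
    (list of 4 strings), and 'answer' (gold letter 'A'-'D')."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        options = "\n".join(f"{k}. {v}" for k, v in zip("ABCD", r["options"]))
        prompt = f'{r["question"]}\n{options}\n请直接回答选项字母。'  # "Reply with the option letter only."
        pred = extract_choice(model.generate(r["image"], prompt))
        total[r["subject"]] += 1
        correct[r["subject"]] += int(pred == r["answer"])
    return {s: correct[s] / total[s] for s in total}  # per-subject accuracy
```

Overall accuracy would then be the micro-average over all records, which is how a headline number such as the 42% above would typically be computed.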
Related papers
- We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning? [11.858791083851447]
We introduce WE-MATH, the first benchmark designed to explore the problem-solving principles beyond end-to-end performance.
We meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and five layers of knowledge granularity.
We conduct a thorough evaluation of existing LMMs in visual mathematical reasoning and reveal a negative correlation between solving steps and problem-specific performance.
arXiv Detail & Related papers (2024-07-01T13:39:08Z) - GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation [65.268245109828]
Large Vision-Language Models (LVLMs) have demonstrated great abilities in image perception and language understanding.
We propose GAOKAO-MM, a multimodal benchmark based on the Chinese College Entrance Examination (GAOKAO).
We evaluate 10 LVLMs and find that all of them score below 50% accuracy, with GPT-4-Vision (48.1%), Qwen-VL-Plus (41.2%) and Gemini-Pro-Vision (35.1%) ranking in the top three positions.
arXiv Detail & Related papers (2024-02-24T06:57:15Z) - PathMMU: A Massive Multimodal Expert-Level Benchmark for Understanding and Reasoning in Pathology [14.944207181507135]
We introduce PathMMU, the largest and highest-quality expert-validated pathology benchmark for Large Multimodal Models (LMMs).
It comprises 33,428 multimodal multi-choice questions and 24,067 images from various sources, each accompanied by an explanation for the correct answer.
The construction of PathMMU harnesses GPT-4V's advanced capabilities, utilizing over 30,000 image-caption pairs to enrich captions and generate corresponding Q&As.
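The summary above describes a two-step construction pattern: GPT-4V first enriches a source caption with image-grounded detail, then generates a question, options, answer, and explanation from it (with expert validation afterwards). A rough sketch of that shape follows, where `ask_gpt4v(image, prompt)` is a hypothetical wrapper around a vision-language API and the prompts are invented for illustration, not PathMMU's actual pipeline.

```python
# Sketch of the two-step construction pattern the abstract describes
# (caption enrichment, then Q&A generation). `ask_gpt4v` is a hypothetical
# wrapper around a vision-language API; prompts and fields are illustrative.
def build_question(image, caption, ask_gpt4v):
    # Step 1: enrich the raw caption with details grounded in the image.
    enriched = ask_gpt4v(
        image,
        f"Expand this pathology caption with details visible in the image:\n{caption}",
    )
    # Step 2: turn the enriched caption into a multiple-choice question,
    # with an explanation for the correct answer (per the abstract).
    qa = ask_gpt4v(
        image,
        "Write a multiple-choice question (4 options), the correct answer, "
        f"and a short explanation, based on:\n{enriched}",
    )
    return {"image": image, "caption": enriched, "qa": qa}
```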
arXiv Detail & Related papers (2024-01-29T17:59:19Z) - CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning [16.032320995230734]
We introduce CMMU, a novel benchmark for multi-modal and multi-type question understanding and reasoning in Chinese.
CMMU consists of 3,603 questions in 7 subjects, covering knowledge from primary to high school.
We propose an evaluation strategy called Positional Error Variance for assessing multiple-choice questions.
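The summary does not define the metric, but one plausible reading of Positional Error Variance is sketched below: rotate the gold answer through every option slot, compute the error rate per slot, and report the variance across slots, with high variance indicating positional bias. The `model.answer(question, options)` interface and record fields are assumptions; the CMMU paper's exact formulation may differ.

```python
# Hedged sketch of one plausible reading of "Positional Error Variance".
# `model.answer` returns the index of the chosen option (assumed interface).
from statistics import pvariance

def positional_error_variance(model, questions, n_options=4):
    errors = [0] * n_options
    counts = [0] * n_options
    for q in questions:  # q: {'question', 'options', 'answer_idx'}
        gold = q["options"][q["answer_idx"]]
        distractors = [o for i, o in enumerate(q["options"]) if i != q["answer_idx"]]
        for slot in range(n_options):  # place the gold answer in each slot
            options = distractors[:slot] + [gold] + distractors[slot:]
            pred = model.answer(q["question"], options)
            counts[slot] += 1
            errors[slot] += int(pred != slot)
    rates = [e / c for e, c in zip(errors, counts)]  # per-slot error rate
    return pvariance(rates)  # 0.0 would mean no positional bias
```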
arXiv Detail & Related papers (2024-01-25T08:22:10Z) - MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI [64.21953221846596]
MMMU is a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks.
Questions span 30 subjects and 183 subfields, comprising 30 highly heterogeneous image types.
The evaluation of 14 open-source LMMs as well as the proprietary GPT-4V(ision) and Gemini highlights the substantial challenges posed by MMMU.
arXiv Detail & Related papers (2023-11-27T17:33:21Z) - CMMLU: Measuring massive multitask language understanding in Chinese [133.70911295934746]
This paper introduces a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and humanities.
CMMLU fills the gap in evaluating the knowledge and reasoning capabilities of large language models within the Chinese context.
arXiv Detail & Related papers (2023-06-15T15:49:51Z) - Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation [61.56563631219381]
We present Xiezhi, the most comprehensive evaluation suite designed to assess holistic domain knowledge.
Xiezhi comprises 249,587 multiple-choice questions spanning 516 diverse disciplines across 13 subjects, accompanied by Xiezhi-Specialty and Xiezhi-Interdiscipline, each with 15k questions.
arXiv Detail & Related papers (2023-06-09T09:52:05Z) - M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models [76.88692952308084]
M3Exam is a benchmark for evaluating large language models (LLMs) in a multilingual, multimodal, and multilevel context.
M3Exam contains 12,317 questions in 9 diverse languages across three educational levels.
We assess the performance of top-performing LLMs on M3Exam and find that current models, including GPT-4, still struggle with multilingual text.
arXiv Detail & Related papers (2023-06-08T13:21:29Z)