Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language
Models
- URL: http://arxiv.org/abs/2312.17661v1
- Date: Fri, 29 Dec 2023 15:57:49 GMT
- Title: Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language
Models
- Authors: Yuqing Wang, Yun Zhao
- Abstract summary: Google introduced Gemini, a cutting-edge MLLM designed specifically for multimodal integration.
Despite its advancements, preliminary benchmarks indicate that Gemini lags behind GPT models in commonsense reasoning tasks.
This study undertakes a thorough evaluation of Gemini's performance in complex reasoning tasks.
- Score: 14.30980373935713
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The burgeoning interest in Multimodal Large Language Models (MLLMs), such as
OpenAI's GPT-4V(ision), has significantly impacted both academic and industrial
realms. These models enhance Large Language Models (LLMs) with advanced visual
understanding capabilities, facilitating their application in a variety of
multimodal tasks. Recently, Google introduced Gemini, a cutting-edge MLLM
designed specifically for multimodal integration. Despite its advancements,
preliminary benchmarks indicate that Gemini lags behind GPT models in
commonsense reasoning tasks. However, this assessment, based on a limited
dataset (i.e., HellaSWAG), does not fully capture Gemini's authentic
commonsense reasoning potential. To address this gap, our study undertakes a
thorough evaluation of Gemini's performance in complex reasoning tasks that
necessitate the integration of commonsense knowledge across modalities. We
carry out a comprehensive analysis of 12 commonsense reasoning datasets,
ranging from general to domain-specific tasks. This includes 11 datasets
focused solely on language, as well as one that incorporates multimodal
elements. Our experiments across four LLMs and two MLLMs demonstrate Gemini's
competitive commonsense reasoning capabilities. Additionally, we identify
common challenges faced by current LLMs and MLLMs in addressing commonsense
problems, underscoring the need for further advancements in enhancing the
commonsense reasoning abilities of these models.
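The evaluation protocol the abstract describes (zero-shot prompting over multiple-choice commonsense benchmarks, scored by answer accuracy across several LLMs and MLLMs) can be illustrated with a minimal sketch. This code is not from the paper: the `query_model` callable is a hypothetical stand-in for whichever LLM or MLLM API is under test, and the dataset layout (question, options, gold answer letter) is assumed.

```python
# Minimal sketch (not from the paper) of zero-shot multiple-choice
# commonsense evaluation: format each question with lettered options,
# query a model, and score exact-match accuracy on the answer letter.
from typing import Callable, Dict, List


def build_prompt(question: str, options: List[str]) -> str:
    """Format a commonsense question as a lettered multiple-choice prompt."""
    letters = "ABCDE"
    lines = [f"Question: {question}"] + [
        f"{letters[i]}. {opt}" for i, opt in enumerate(options)
    ]
    lines.append("Answer with the letter of the best option.")
    return "\n".join(lines)


def evaluate(dataset: List[Dict], query_model: Callable[[str], str]) -> float:
    """Return accuracy over items shaped like {question, options, answer}."""
    correct = 0
    for item in dataset:
        prompt = build_prompt(item["question"], item["options"])
        # Keep only the first character of the reply as the predicted letter.
        prediction = query_model(prompt).strip().upper()[:1]
        if prediction == item["answer"]:
            correct += 1
    return correct / len(dataset) if dataset else 0.0
```

The model call is kept abstract on purpose: a language-only benchmark passes text alone, while the multimodal dataset in the study would additionally supply an image to the underlying API.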
Related papers
- The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio [118.75449542080746]
This paper presents the first systematic investigation of hallucinations in large multimodal models (LMMs).
Our study reveals two key contributors to hallucinations: overreliance on unimodal priors and spurious inter-modality correlations.
Our findings highlight key vulnerabilities, including imbalances in modality integration and biases from training data, underscoring the need for balanced cross-modal learning.
arXiv Detail & Related papers (2024-10-16T17:59:02Z)
- A Comprehensive Review of Multimodal Large Language Models: Performance and Challenges Across Different Tasks [74.52259252807191]
Multimodal Large Language Models (MLLMs) address the complexities of real-world applications far beyond the capabilities of single-modality systems.
This paper systematically surveys the applications of MLLMs in multimodal tasks such as natural language, vision, and audio.
arXiv Detail & Related papers (2024-08-02T15:14:53Z)
- Quantifying and Mitigating Unimodal Biases in Multimodal Large Language Models: A Causal Perspective [9.633811630889237]
We propose a causal framework to interpret the biases in Visual Question Answering (VQA) problems.
We introduce MORE, a novel dataset with 12,000 challenging VQA instances requiring multi-hop reasoning.
Our experiments show that MLLMs perform poorly on MORE, indicating strong unimodal biases and limited semantic understanding.
arXiv Detail & Related papers (2024-03-27T08:38:49Z)
- Mipha: A Comprehensive Overhaul of Multimodal Assistant with Small Language Models [25.724995114710165]
We investigate the design aspects of Multimodal Small Language Models (MSLMs) and propose an efficient multimodal assistant named Mipha.
Our Mipha-3B outperforms the state-of-the-art large MLLMs, especially LLaVA-1.5-13B, on multiple benchmarks.
arXiv Detail & Related papers (2024-03-10T12:43:27Z)
- Multimodal Large Language Models to Support Real-World Fact-Checking [80.41047725487645]
Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information.
While MLLMs are already being used as a fact-checking tool, their abilities and limitations in this regard are understudied.
We propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking.
arXiv Detail & Related papers (2024-03-06T11:32:41Z)
- The Curious Case of Nonverbal Abstract Reasoning with Multi-Modal Large Language Models [19.213774611556]
Multi-modal large language models (MLLMs) integrate verbal and visual information.
Despite the revolutionizing prospect of MLLMs, our understanding of their reasoning abilities is limited.
In this study, we assess the nonverbal abstract reasoning abilities of open-source and closed-source MLLMs.
arXiv Detail & Related papers (2024-01-22T16:57:05Z)
- A Challenger to GPT-4V? Early Explorations of Gemini in Visual Expertise [78.54563675327198]
Gemini is Google's newest and most capable MLLM built from the ground up for multi-modality.
Can Gemini challenge GPT-4V's leading position in multi-modal learning?
We compare Gemini Pro with the state-of-the-art GPT-4V to evaluate its upper limits, along with the latest open-sourced MLLM, Sphinx.
arXiv Detail & Related papers (2023-12-19T18:59:22Z)
- A Survey on Multimodal Large Language Models [71.63375558033364]
Multimodal Large Language Models (MLLMs), represented by GPT-4V, have recently emerged as a rising research hotspot.
This paper aims to trace and summarize the recent progress of MLLMs.
arXiv Detail & Related papers (2023-06-23T15:21:52Z)
- D$^2$TV: Dual Knowledge Distillation and Target-oriented Vision Modeling for Many-to-Many Multimodal Summarization [113.72253589338472]
The many-to-many multimodal summarization (M$^3$S) task aims to generate summaries in any language given document inputs in any language and the corresponding image sequence.
We propose a dual knowledge distillation and target-oriented vision modeling framework for the M$3$S task.
arXiv Detail & Related papers (2023-05-22T06:47:35Z)