Exploring OCR Capabilities of GPT-4V(ision): A Quantitative and In-depth Evaluation
- URL: http://arxiv.org/abs/2310.16809v2
- Date: Sun, 29 Oct 2023 10:59:21 GMT
- Title: Exploring OCR Capabilities of GPT-4V(ision): A Quantitative and In-depth Evaluation
- Authors: Yongxin Shi, Dezhi Peng, Wenhui Liao, Zening Lin, Xinhong Chen,
Chongyu Liu, Yuyi Zhang, Lianwen Jin
- Abstract summary: The evaluation reveals that GPT-4V performs well in recognizing and understanding Latin content, but struggles with multilingual scenarios and complex tasks.
In general, despite its versatility in handling diverse OCR tasks, GPT-4V does not outperform existing state-of-the-art OCR models.
- Score: 33.66939971907121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a comprehensive evaluation of the Optical Character
Recognition (OCR) capabilities of the recently released GPT-4V(ision), a Large
Multimodal Model (LMM). We assess the model's performance across a range of OCR
tasks, including scene text recognition, handwritten text recognition,
handwritten mathematical expression recognition, table structure recognition,
and information extraction from visually-rich documents. The evaluation reveals
that GPT-4V performs well in recognizing and understanding Latin content, but
struggles with multilingual scenarios and complex tasks. Specifically, it shows
limitations when dealing with non-Latin languages and complex tasks such as
handwritten mathematical expression recognition, table structure recognition,
and end-to-end semantic entity recognition and pair extraction from document
images. Based on these observations, we affirm the necessity and
continued research value of specialized OCR models. In general, despite its
versatility in handling diverse OCR tasks, GPT-4V does not outperform existing
state-of-the-art OCR models. How to fully utilize pre-trained general-purpose
LMMs such as GPT-4V for downstream OCR tasks remains an open problem. The study
offers a critical reference for future research in OCR with LMMs. The evaluation
pipeline and results are available at
https://github.com/SCUT-DLVCLab/GPT-4V_OCR.
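The repository above holds the authors' actual pipeline; as a rough illustration of the query-and-score loop such an evaluation requires, the minimal Python sketch below (not the authors' code) sends one image to a GPT-4V-style endpoint via the OpenAI SDK and scores the transcription with a character error rate (CER). The model name, prompt wording, and choice of CER are this sketch's assumptions, not details taken from the paper.

```python
# Minimal sketch of an LMM-based OCR evaluation step. Assumptions (not from
# the paper): the model name "gpt-4-vision-preview", the prompt wording, and
# CER as the metric. Requires the `openai` SDK and OPENAI_API_KEY to be set.
import base64

from openai import OpenAI

client = OpenAI()


def ocr_with_gpt4v(image_path: str) -> str:
    """Ask a vision-capable GPT-4 model to transcribe all text in an image."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed; substitute any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe all text in this image. Output only the text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        max_tokens=512,
    )
    return (response.choices[0].message.content or "").strip()


def char_error_rate(pred: str, truth: str) -> float:
    """Levenshtein edit distance normalized by ground-truth length."""
    dp = list(range(len(truth) + 1))  # dp[j] = distance(pred[:i], truth[:j])
    for i in range(1, len(pred) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(truth) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,       # delete from prediction
                        dp[j - 1] + 1,   # insert into prediction
                        prev + (pred[i - 1] != truth[j - 1]))  # substitute
            prev = cur
    return dp[-1] / max(len(truth), 1)


if __name__ == "__main__":
    prediction = ocr_with_gpt4v("sample_scene_text.png")  # hypothetical file
    print("CER:", char_error_rate(prediction, "ground-truth transcription"))
```

A full benchmark would batch this over whole datasets and swap in task-specific metrics (e.g. TEDS for table structure recognition); CER here stands in as the simplest character-level proxy.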
Related papers
- Benchmarking Vision-Language Models on Optical Character Recognition in Dynamic Video Environments [3.5936169218390703]
This paper introduces an open-source benchmark for evaluating Vision-Language Models (VLMs) on Optical Character Recognition (OCR) tasks in dynamic video environments.
We present a curated dataset containing 1,477 manually annotated frames spanning diverse domains, including code editors, news broadcasts, YouTube videos, and advertisements.
arXiv Detail & Related papers (2025-02-10T13:20:19Z) - CC-OCR: A Comprehensive and Challenging OCR Benchmark for Evaluating Large Multimodal Models in Literacy [50.78228433498211]
CC-OCR comprises four OCR-centric tracks: multi-scene text reading, multilingual text reading, document parsing, and key information extraction.
It includes 39 subsets with 7,058 fully annotated images, of which 41% are sourced from real applications and released for the first time.
We evaluate nine prominent LMMs and reveal both the strengths and weaknesses of these models, particularly in text grounding, multi-orientation text, and repetition hallucination.
arXiv Detail & Related papers (2024-12-03T07:03:25Z) - See then Tell: Enhancing Key Information Extraction with Vision Grounding [54.061203106565706]
We introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding.
To enhance the model's seeing capabilities, we collect extensive structured table recognition datasets.
arXiv Detail & Related papers (2024-09-29T06:21:05Z) - DLoRA-TrOCR: Mixed Text Mode Optical Character Recognition Based On Transformer [12.966765239586994]
Multiple fonts, mixed scenes, and complex layouts seriously degrade the recognition accuracy of traditional OCR models.
We propose DLoRA-TrOCR, a parameter-efficient mixed-text recognition method based on a pre-trained OCR Transformer.
arXiv Detail & Related papers (2024-04-19T09:28:16Z) - mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding [55.4806974284156]
Document understanding refers to automatically extracting, analyzing, and comprehending information from digital documents such as web pages.
Existing Multimodal Large Language Models (MLLMs) have demonstrated promising zero-shot capabilities in shallow, OCR-free text recognition.
arXiv Detail & Related papers (2023-07-04T11:28:07Z) - OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models [122.27878464009181]
We conducted a comprehensive evaluation of Large Multimodal Models, such as GPT-4V and Gemini, on various text-related visual tasks.
OCRBench contains 29 datasets, making it the most comprehensive OCR evaluation benchmark available.
arXiv Detail & Related papers (2023-05-13T11:28:37Z) - TransDocs: Optical Character Recognition with word to word translation [2.2336243882030025]
This work focuses on improving optical character recognition (OCR) with ML techniques.
It is based on the ANKI dataset for English-to-Spanish translation.
arXiv Detail & Related papers (2023-04-15T21:40:14Z) - User-Centric Evaluation of OCR Systems for Kwak'wala [92.73847703011353]
We show that utilizing OCR reduces the time spent in the manual transcription of culturally valuable documents by over 50%.
Our results demonstrate the potential benefits that OCR tools can have on downstream language documentation and revitalization efforts.
arXiv Detail & Related papers (2023-02-26T21:41:15Z)