Uncertainty Quantification in Table Structure Recognition
- URL: http://arxiv.org/abs/2407.01731v1
- Date: Mon, 1 Jul 2024 19:03:55 GMT
- Title: Uncertainty Quantification in Table Structure Recognition
- Authors: Kehinde Ajayi, Leizhen Zhang, Yi He, Jian Wu
- Abstract summary: This paper proposes a method for uncertainty quantification (UQ) of table structure recognition (TSR).
Our key idea is to enrich and diversify the table representations to spotlight the cells with high recognition uncertainties.
Cell complexity quantification gauges the uncertainty of each cell by its topological relation with neighboring cells.
- Score: 6.328777177761948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantifying uncertainties for machine learning models is a critical step to reduce human verification effort by detecting predictions with low confidence. This paper proposes a method for uncertainty quantification (UQ) of table structure recognition (TSR). The proposed UQ method is built upon a mixture-of-experts approach termed Test-Time Augmentation (TTA). Our key idea is to enrich and diversify the table representations to spotlight the cells with high recognition uncertainties. To evaluate the effectiveness, we propose two heuristics to differentiate highly uncertain cells from normal cells, namely, masking and cell complexity quantification. Masking involves varying the pixel intensity to assess the detection uncertainty. Cell complexity quantification gauges the uncertainty of each cell by its topological relation with neighboring cells. The evaluation results based on standard benchmark datasets demonstrate that the proposed method is effective in quantifying uncertainty in TSR models. To the best of our knowledge, this study is the first of its kind to enable UQ in TSR tasks. Our code and data are available at: https://github.com/lamps-lab/UQTTA.git.
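As a rough sketch of the TTA idea, per-cell uncertainty can be scored by how much detections disagree across augmented views of the same table image. Everything below (the `predict_cells` interface, the stable cell identifiers, the IoU-based disagreement score) is an illustrative assumption, not the paper's exact formulation:
```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def tta_cell_uncertainty(predict_cells, image, augmentations):
    """Score per-cell uncertainty as disagreement across augmented views.

    predict_cells: callable mapping an image to {cell_id: (x1, y1, x2, y2)};
                   assumes cell ids are stable across views (in practice
                   cells would be matched by overlap).
    augmentations: non-empty list of callables, each returning a perturbed
                   image (e.g., pixel-intensity masking).
    """
    baseline = predict_cells(image)
    scores = {}
    for cell_id, box in baseline.items():
        ious = []
        for augment in augmentations:
            aug_box = predict_cells(augment(image)).get(cell_id)
            ious.append(iou(box, aug_box) if aug_box is not None else 0.0)
        # Stable detections keep high IoU across views -> low uncertainty.
        scores[cell_id] = 1.0 - float(np.mean(ious))
    return scores
```
Cells whose boxes drift or vanish under perturbation receive high scores and can be routed to human verification first.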
Related papers
- Addressing Uncertainty in LLMs to Enhance Reliability in Generative AI [47.64301863399763]
We present a dynamic semantic clustering approach inspired by the Chinese Restaurant Process.
We quantify the uncertainty of Large Language Models (LLMs) on a given query by calculating the entropy of the generated semantic clusters.
We propose leveraging the (negative) likelihood of these clusters as the (non)conformity score within the Conformal Prediction framework.
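A minimal sketch of the cluster-entropy and nonconformity computations, assuming the semantic clustering of sampled responses (e.g., CRP-based) is already given:
```python
import math
from collections import Counter

def cluster_entropy(cluster_labels):
    """Entropy of the empirical distribution over semantic clusters;
    cluster_labels holds one cluster id per sampled LLM response."""
    counts = Counter(cluster_labels)
    n = len(cluster_labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def nonconformity(cluster_labels, label):
    """Negative log-likelihood of a cluster, usable as a conformal score."""
    p = Counter(cluster_labels)[label] / len(cluster_labels)
    return -math.log(p) if p > 0 else float("inf")

# e.g. ten sampled answers falling into three meaning-equivalent groups:
samples = ["a", "a", "a", "a", "b", "b", "b", "c", "c", "c"]
print(cluster_entropy(samples))   # higher entropy -> more uncertain
print(nonconformity(samples, "a"))
```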
arXiv Detail & Related papers (2024-11-04T18:49:46Z)
- Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness [106.52630978891054]
We present a taxonomy of uncertainty specific to vision-language AI systems.
We also introduce a new metric confidence-weighted accuracy, that is well correlated with both accuracy and calibration error.
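One plausible form of such a metric, rewarding confident correct answers and penalizing confident mistakes, is sketched below; the paper's exact definition may differ:
```python
import numpy as np

def confidence_weighted_accuracy(correct, confidence):
    """Hedged sketch: correct answers earn their stated confidence,
    wrong answers earn the confidence they withheld, so the score
    tracks both accuracy and calibration."""
    correct = np.asarray(correct, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    return float(np.mean(correct * confidence + (1 - correct) * (1 - confidence)))

print(confidence_weighted_accuracy([1, 1, 0, 0], [0.9, 0.6, 0.8, 0.2]))
```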
arXiv Detail & Related papers (2024-07-02T04:23:54Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
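A simplified proxy for this idea: sample several explanation-answer pairs and read confidence off their agreement. The toy sampler is a hypothetical stand-in for an LLM, and the paper weights answers by explanation stability rather than plain agreement:
```python
import random
from collections import Counter

def explanation_confidence(sample_explanation_and_answer, k=10):
    """Confidence of an answer from agreement among k independently
    sampled (explanation, answer) pairs."""
    answers = [sample_explanation_and_answer()[1] for _ in range(k)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    return top_answer, top_count / k

# Toy stand-in for sampling chains of thought from an LLM:
toy_sampler = lambda: ("because ...", random.choice(["A", "A", "A", "B"]))
print(explanation_confidence(toy_sampler))
```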
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Uncertainty Quantification for Bird's Eye View Semantic Segmentation: Methods and Benchmarks [10.193504550494486]
This paper introduces a benchmark for predictive uncertainty quantification in BEV segmentation.
It focuses on the effectiveness of predicted uncertainty in identifying misclassified and out-of-distribution pixels, as well as calibration.
We propose the Uncertainty-Focal-Cross-Entropy loss, designed for highly imbalanced data, which consistently improves the segmentation quality and calibration.
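The focal-loss mechanism underlying such a loss can be sketched as follows; the uncertainty-specific weighting of the paper's UFCE is not reproduced here:
```python
import numpy as np

def focal_cross_entropy(probs, targets, gamma=2.0):
    """Focal cross-entropy: down-weights easy, well-classified pixels
    so rare classes dominate the loss (the basis that UFCE builds on).

    probs:   (N, C) predicted class probabilities per pixel
    targets: (N,) integer class labels
    """
    eps = 1e-12
    p_t = probs[np.arange(len(targets)), targets]
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t + eps)
    return float(loss.mean())
```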
arXiv Detail & Related papers (2024-05-31T16:32:46Z)
- Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond [52.246494389096654]
This paper introduces Word-Sequence Entropy (WSE), a method that calibrates uncertainty at both the word and sequence levels.
We compare WSE with six baseline methods on five free-form medical QA datasets, utilizing seven popular large language models (LLMs).
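A bare-bones sketch of entropy at the two granularities, computed from token log-probabilities; WSE's calibration with semantic relevance weights is omitted, and `word_spans` is an assumed tokenizer-provided mapping:
```python
import numpy as np

def word_sequence_entropy(token_logprobs, word_spans):
    """Uncertainty at word and sequence level from a generated answer.

    token_logprobs: log p of each generated token under the model
    word_spans:     list of (start, end) token index ranges, one per word
    """
    token_logprobs = np.asarray(token_logprobs)
    # Word level: negative mean log-likelihood of each word's tokens.
    word_entropy = [float(-token_logprobs[s:e].mean()) for s, e in word_spans]
    # Sequence level: length-normalized negative log-likelihood.
    sequence_entropy = float(-token_logprobs.mean())
    return word_entropy, sequence_entropy
```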
arXiv Detail & Related papers (2024-02-22T03:46:08Z)
- Echoes of Socratic Doubt: Embracing Uncertainty in Calibrated Evidential Reinforcement Learning [1.7898305876314982]
The proposed algorithm combines deep evidential learning with quantile calibration based on principles of conformal inference.
It is tested on a suite of miniaturized Atari games (i.e., MinAtar).
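A minimal example of quantile calibration with split conformal inference, in the spirit of conformalized quantile regression; the paper's RL-specific construction is more involved:
```python
import numpy as np

def conformal_quantile_offset(predicted_quantiles, observed, alpha=0.1):
    """Split-conformal correction for a predicted upper quantile.

    predicted_quantiles: model's (1 - alpha)-quantile predictions on a
                         held-out calibration set
    observed:            realized values on the same set
    Adding the returned offset to future quantile predictions yields
    >= 1 - alpha coverage under exchangeability.
    """
    scores = np.asarray(observed) - np.asarray(predicted_quantiles)
    n = len(scores)
    rank = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(scores)[min(rank, n) - 1])
```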
arXiv Detail & Related papers (2024-02-11T05:17:56Z)
- Uncertainty-aware Language Modeling for Selective Question Answering [107.47864420630923]
We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs.
Our approach is model- and data-agnostic, is computationally efficient, and does not rely on external models or systems.
arXiv Detail & Related papers (2023-11-26T22:47:54Z)
- Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
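A generic split-conformal band, sketched under the assumption that held-out residuals of the VO model are available; the paper's construction is tailored to edge-compute constraints:
```python
import numpy as np

def conformal_band(calibration_residuals, alpha=0.1):
    """Half-width of a split-conformal band from absolute calibration
    residuals |y_i - y_hat_i|; at test time, prediction +/- half_width
    covers the true value with probability >= 1 - alpha."""
    scores = np.sort(np.abs(calibration_residuals))
    n = len(scores)
    rank = int(np.ceil((n + 1) * (1 - alpha)))
    return float(scores[min(rank, n) - 1])

# e.g. calibrate on held-out translation errors of a VO model:
half_width = conformal_band(np.random.default_rng(0).normal(0, 0.05, 500))
print(half_width)  # slack needed for ~90% coverage
```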
arXiv Detail & Related papers (2023-03-03T20:37:55Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
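A simplified sketch of the Nadaraya-Watson estimate with a Gaussian kernel; bandwidth selection and the full NUQ uncertainty decomposition are omitted:
```python
import numpy as np

def nadaraya_watson_label_dist(x, X_train, Y_train, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average
    of one-hot training labels. The entropy of the returned vector can
    serve as an uncertainty score at x.

    x:        query point, shape (d,)
    X_train:  training inputs, shape (n, d)
    Y_train:  integer labels, shape (n,)
    """
    sq_dists = np.sum((X_train - x) ** 2, axis=1)
    weights = np.exp(-sq_dists / (2 * bandwidth ** 2))
    weights /= weights.sum()
    p = np.zeros(int(Y_train.max()) + 1)
    for w, y in zip(weights, Y_train):
        p[y] += w
    return p
```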
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Word-level confidence estimation for RNN transducers [7.12355127219356]
We present a lightweight neural confidence model tailored for Automatic Speech Recognition (ASR) systems with Recurrent Neural Network Transducers (RNN-T).
Compared to other existing approaches, our model utilizes: (a) the time information associated with recognized words, which reduces the computational complexity, and (b) a simple and elegant trick for mapping between sub-word and word sequences.
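A toy version of the sub-word-to-word mapping, assuming SentencePiece-style pieces where a leading '▁' marks a word start (and the first piece starts a word) and taking the minimum piece confidence per word; the paper's model and mapping trick are more elaborate:
```python
def word_confidences(subwords, subword_confidences):
    """Aggregate per-piece confidences into per-word confidences."""
    words, scores, current = [], [], []
    for piece, conf in zip(subwords, subword_confidences):
        if piece.startswith("▁"):
            if current:                      # close the previous word
                scores.append(min(current))
                current = []
            words.append(piece[1:])          # start a new word
        else:
            words[-1] += piece               # continue the current word
        current.append(conf)
    if current:
        scores.append(min(current))
    return list(zip(words, scores))

print(word_confidences(["▁un", "cert", "ainty", "▁rocks"],
                       [0.9, 0.8, 0.95, 0.99]))
# [('uncertainty', 0.8), ('rocks', 0.99)]
```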
arXiv Detail & Related papers (2021-09-28T18:38:00Z)
- Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
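A single-example sketch of a gradient-based score for a softmax classifier head, using the predicted class as a pseudo-label; the paper applies richer variants to object detectors:
```python
import numpy as np

def gradient_uncertainty(features, logits):
    """Gradient-norm uncertainty: how strongly would the cross-entropy
    loss, evaluated at the model's own prediction, pull on the last-layer
    weights? Larger norms indicate less committed predictions.

    features: last-layer input, shape (d,)
    logits:   class scores, shape (C,)
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    pseudo = np.zeros_like(probs)
    pseudo[np.argmax(probs)] = 1.0
    # d(cross-entropy)/d(logits) = probs - one_hot; the weight gradient
    # is its outer product with the features.
    grad_weights = np.outer(probs - pseudo, features)
    return float(np.linalg.norm(grad_weights))
```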
arXiv Detail & Related papers (2021-07-09T16:04:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.