Fact-Checking of AI-Generated Reports
- URL: http://arxiv.org/abs/2307.14634v1
- Date: Thu, 27 Jul 2023 05:49:24 GMT
- Title: Fact-Checking of AI-Generated Reports
- Authors: Razi Mahmood, Ge Wang, Mannudeep Kalra, and Pingkun Yan
- Abstract summary: We propose a new method of fact-checking of AI-generated reports using their associated images.
Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings.
- Score: 10.458946019567891
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With advances in generative artificial intelligence (AI), it is now possible
to produce realistic-looking automated reports for preliminary reads of
radiology images. This can expedite clinical workflows, improve accuracy and
reduce overall costs. However, it is also well-known that such models often
hallucinate, leading to false findings in the generated reports. In this paper,
we propose a new method of fact-checking of AI-generated reports using their
associated images. Specifically, the developed examiner differentiates real and
fake sentences in reports by learning the association between an image and
sentences describing real or potentially fake findings. To train such an
examiner, we first created a new dataset of fake reports by perturbing the
findings in the original ground truth radiology reports associated with images.
Text encodings of real and fake sentences drawn from these reports are then
paired with image encodings to learn the mapping to real/fake labels. The
utility of such an examiner is demonstrated for verifying automatically
generated reports by detecting and removing fake sentences. Future generative
AI approaches can use the resulting tool to validate their reports leading to a
more responsible use of AI in expediting clinical workflows.
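The abstract describes pairing text encodings of real or fake sentences with image encodings and learning a mapping to real/fake labels. The paper's actual encoders and dataset are not reproduced here; the following is a minimal NumPy sketch, with toy random vectors standing in for learned image/sentence encodings and a logistic-regression "examiner" standing in for the trained classifier. All names (`pair_features`, `real_txt`, `fake_txt`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 400

# Toy image encodings; in the paper these would come from a trained image encoder.
img = rng.normal(size=(n, d))

# Real sentence encodings describe their image (correlated with it);
# fake encodings stand in for perturbed findings (here: unrelated noise).
real_txt = img + 0.3 * rng.normal(size=(n, d))
fake_txt = rng.normal(size=(n, d))

def pair_features(image_emb, text_emb):
    """Pair an image encoding with a sentence encoding via an elementwise
    product, so a linear examiner can score image-sentence agreement."""
    return image_emb * text_emb

X = np.vstack([pair_features(img, real_txt), pair_features(img, fake_txt)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real finding, 0 = fake

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic-regression "examiner" trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = (p - y) / len(y)
    w -= 0.1 * X.T @ grad
    b -= 0.1 * grad.sum()

pred = sigmoid(X @ w + b) > 0.5
accuracy = (pred == (y == 1)).mean()
```

Once trained, such an examiner can filter a generated report by scoring each sentence against the image and dropping those classified as fake, which is the verification use case the abstract describes.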
Related papers
- Contrastive Learning with Counterfactual Explanations for Radiology Report Generation [83.30609465252441]
We propose a CounterFactual Explanations-based framework (CoFE) for radiology report generation.
Counterfactual explanations serve as a potent tool for understanding how decisions made by algorithms can be changed by asking "what if" scenarios.
Experiments on two benchmarks demonstrate that leveraging the counterfactual explanations enables CoFE to generate semantically coherent and factually complete reports.
arXiv Detail & Related papers (2024-07-19T17:24:25Z)
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
arXiv Detail & Related papers (2024-05-23T01:29:47Z)
- MedCycle: Unpaired Medical Report Generation via Cycle-Consistency [11.190146577567548]
We introduce an innovative approach that eliminates the need for consistent labeling schemas.
This approach is based on cycle-consistent mapping functions that transform image embeddings into report embeddings.
It outperforms state-of-the-art results in unpaired chest X-ray report generation, demonstrating improvements in both language and clinical metrics.
arXiv Detail & Related papers (2024-03-20T09:40:11Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Multimodal Image-Text Matching Improves Retrieval-based Chest X-Ray Report Generation [3.6664023341224827]
Contrastive X-Ray REport Match (X-REM) is a novel retrieval-based radiology report generation module.
X-REM uses an image-text matching score to measure the similarity of a chest X-ray image and radiology report for report retrieval.
arXiv Detail & Related papers (2023-03-29T04:00:47Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Weakly Supervised Contrastive Learning for Chest X-Ray Report Generation [3.3978173451092437]
Radiology report generation aims at generating descriptive text from radiology images automatically.
A typical setting consists of training encoder-decoder models on image-report pairs with a cross-entropy loss.
We propose a novel weakly supervised contrastive loss for medical report generation.
arXiv Detail & Related papers (2021-09-25T00:06:23Z)
- News Image Steganography: A Novel Architecture Facilitates the Fake News Identification [52.83247667841588]
A large portion of fake news quotes untampered images from other sources with ulterior motives.
This paper proposes an architecture named News Image Steganography to reveal the inconsistency through image steganography based on GAN.
arXiv Detail & Related papers (2021-01-03T11:12:23Z)
- Chest X-ray Report Generation through Fine-Grained Label Learning [46.352966049776875]
We present a domain-aware automatic chest X-ray radiology report generation algorithm that learns fine-grained descriptions of findings from images.
We also develop an automatic labeling algorithm for assigning such descriptors to images and build a novel deep learning network that recognizes both coarse and fine-grained descriptions of findings.
arXiv Detail & Related papers (2020-07-27T19:50:56Z)
- CLARA: Clinical Report Auto-completion [56.206459591367405]
CLinical Report Auto-completion (CLARA) is an interactive method that generates reports in a sentence-by-sentence fashion based on doctors' anchor words and partially completed sentences.
In our experimental evaluation, CLARA achieved 0.393 CIDEr and 0.248 BLEU-4 on X-ray reports and 0.482 CIDEr and 0.491 BLEU-4 for EEG reports for sentence-level generation.
arXiv Detail & Related papers (2020-02-26T18:45:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.