ACTesting: Automated Cross-modal Testing Method of Text-to-Image Software
- URL: http://arxiv.org/abs/2312.12933v3
- Date: Sat, 11 Jan 2025 12:41:00 GMT
- Title: ACTesting: Automated Cross-modal Testing Method of Text-to-Image Software
- Authors: Siqi Gu, Chunrong Fang, Quanjun Zhang, Zhenyu Chen
- Abstract summary: ACTesting is an automated cross-modal testing method for Text-to-Image software.
In our experiments, ACTesting effectively generates error-revealing tests, resulting in a decrease in text-image consistency by up to 20%.
Results validate that ACTesting can reliably identify errors within T2I software.
- Score: 9.351572210564134
- Abstract: Recently, creative generative artificial intelligence software has emerged as a pivotal assistant, enabling users to generate content and seek inspiration rapidly. Text-to-Image (T2I) software, one of the most widely used kinds, synthesizes images from text input through a cross-modal process. However, despite substantial advancements in the T2I engine, T2I software still encounters errors when generating complex or non-realistic scenes, such as omitting focal entities, low image realism, and mismatched text-image information. The cross-modal nature of T2I software complicates error detection for traditional testing methods, and the absence of test oracles further exacerbates the complexity of the testing process. To fill this gap, we propose ACTesting, an Automated Cross-modal Testing Method of Text-to-Image Software, the first testing method explicitly designed for T2I software. ACTesting applies the metamorphic testing principle to address the oracle problem, identifying cross-modal semantic consistency, expressed through entity-relationship (ER) triples, as its fundamental metamorphic relation (MR). Guided by the MR and an adaptability density constraint, we design three kinds of mutation operators to construct new input texts. After generating images from these texts, ACTesting detects ER triples in both modalities and checks whether the MR holds, thereby exposing errors in the T2I software. In our experiments across five popular T2I software systems, ACTesting effectively generates error-revealing tests, reducing text-image consistency by up to 20% compared with the baseline. An ablation study further demonstrates the efficacy of the proposed mutation operators. The experimental results validate that ACTesting can reliably identify errors within T2I software.
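The paper's implementation is not reproduced here; below is a minimal sketch of the metamorphic-testing loop the abstract describes. Every helper (`extract_triples`, `mutate`, `t2i_generate`, `detect_triples_in_image`) is a hypothetical stand-in for components ACTesting realizes with scene-graph parsing, MR-guided mutation operators, and object/relation detection.

```python
# Minimal sketch of the ACTesting loop, under the assumptions above.

def extract_triples(text: str) -> set[tuple[str, str, str]]:
    """Parse (entity, relation, entity) triples from text.
    Stand-in for a scene-graph parser."""
    raise NotImplementedError

def mutate(triples: set, operator: str) -> str:
    """Apply one MR-guided mutation operator (e.g., add, remove, or
    replace an entity/relation) and render the result as a prompt."""
    raise NotImplementedError

def t2i_generate(prompt: str):
    """Call the T2I software under test; returns an image."""
    raise NotImplementedError

def detect_triples_in_image(image) -> set[tuple[str, str, str]]:
    """Detect ER triples in the image, e.g., via an object detector
    plus a relation classifier."""
    raise NotImplementedError

def test_t2i_software(seed_text: str, operators: list[str]) -> list[str]:
    """MR: every triple in the input text must appear in the image.
    Returns the mutated prompts whose images violate the MR."""
    failures = []
    for op in operators:
        prompt = mutate(extract_triples(seed_text), op)
        expected = extract_triples(prompt)
        observed = detect_triples_in_image(t2i_generate(prompt))
        if not expected <= observed:  # some triple missing from the image
            failures.append(prompt)
    return failures
```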
Related papers
- MoTaDual: Modality-Task Dual Alignment for Enhanced Zero-shot Composed Image Retrieval [20.612534837883892]
Composed Image Retrieval (CIR) is a challenging vision-language task, utilizing bi-modal (image+text) queries to retrieve target images.
In this paper, we propose a two-stage framework to tackle both the modality and task discrepancies.
MoTaDual achieves the state-of-the-art performance across four widely used ZS-CIR benchmarks, while maintaining low training time and computational cost.
arXiv Detail & Related papers (2024-10-31T08:49:05Z)
- FineMatch: Aspect-based Fine-grained Image and Text Mismatch Detection and Correction [66.98008357232428]
We propose FineMatch, a new aspect-based fine-grained text and image matching benchmark.
FineMatch focuses on text and image mismatch detection and correction.
We show that models trained on FineMatch demonstrate enhanced proficiency in detecting fine-grained text and image mismatches.
arXiv Detail & Related papers (2024-04-23T03:42:14Z)
- Who Evaluates the Evaluations? Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2IScoreScore (TS2) [62.44395685571094]
We introduce T2IScoreScore, a curated set of semantic error graphs containing a prompt and a set of increasingly erroneous images.
These allow us to rigorously judge whether a given prompt faithfulness metric can correctly order images with respect to their objective error count.
We find that the state-of-the-art VLM-based metrics fail to significantly outperform simple (and supposedly worse) feature-based metrics like CLIPScore.
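As an illustration only (not the benchmark's actual protocol), the ordering test described above can be approximated as a rank correlation between a metric's scores and each image's known error count, where `metric` is any callable `(prompt, image) -> float`:

```python
# Hypothetical sketch: a faithful metric should approach rho = -1,
# i.e., more objective errors imply a lower score.
from scipy.stats import spearmanr

def ordering_quality(metric, prompt, images, error_counts):
    scores = [metric(prompt, img) for img in images]
    rho, _ = spearmanr(scores, error_counts)
    return rho
```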
arXiv Detail & Related papers (2024-04-05T17:57:16Z)
- Improving Text-to-Image Consistency via Automatic Prompt Optimization [26.2587505265501]
We introduce a T2I optimization-by-prompting framework, OPT2I, to improve prompt-image consistency in T2I models.
Our framework starts from a user prompt and iteratively generates revised prompts with the goal of maximizing a consistency score.
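A minimal sketch of such an optimization-by-prompting loop, assuming hypothetical callables `llm_revise` (proposes revised prompts) and `consistency_score` (scores an image against the original user prompt); OPT2I's actual components are not reproduced here:

```python
def optimize_prompt(user_prompt, llm_revise, consistency_score,
                    t2i_generate, rounds=5, candidates=4):
    """Greedy loop: keep whichever revised prompt yields an image
    most consistent with the ORIGINAL user intent."""
    best_prompt = user_prompt
    best_score = consistency_score(user_prompt, t2i_generate(user_prompt))
    for _ in range(rounds):
        for cand in llm_revise(best_prompt, n=candidates):
            score = consistency_score(user_prompt, t2i_generate(cand))
            if score > best_score:
                best_prompt, best_score = cand, score
    return best_prompt
```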
arXiv Detail & Related papers (2024-03-26T15:42:01Z)
- SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data [73.23388142296535]
SELMA improves the faithfulness of T2I models by fine-tuning models on automatically generated, multi-skill image-text datasets.
We show that SELMA significantly improves the semantic alignment and text faithfulness of state-of-the-art T2I diffusion models on multiple benchmarks.
We also show that fine-tuning with image-text pairs auto-collected via SELMA shows comparable performance to fine-tuning with ground truth data.
arXiv Detail & Related papers (2024-03-11T17:35:33Z)
- ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion Models against Stochastic Perturbation [18.103478658038846]
Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in generating high-quality images based on simple text descriptions.
As is common with many Deep Learning (DL) models, DMs are subject to a lack of robustness.
We introduce a probabilistic notion of T2I DMs' robustness and establish an efficient framework, ProTIP, to evaluate it with statistical guarantees.
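For intuition only: a naive fixed-sample Monte Carlo estimator with a Hoeffding-style guarantee, in the spirit of (but far less efficient than) ProTIP's framework; `perturb` and `is_consistent` are hypothetical callables:

```python
import math

def robustness_estimate(prompt, perturb, t2i_generate, is_consistent,
                        eps=0.05, delta=0.01):
    """Estimate P(image from a perturbed prompt stays consistent)
    to within +/- eps with probability >= 1 - delta."""
    # Hoeffding bound: n >= ln(2/delta) / (2 * eps^2) samples suffice.
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(is_consistent(prompt, t2i_generate(perturb(prompt)))
               for _ in range(n))
    return hits / n
```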
arXiv Detail & Related papers (2024-02-23T16:48:56Z)
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now faces an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task- and domain-agnostic framework for detecting factual errors in texts generated by large language models.
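A generic claim-to-verdict pipeline sketch in the spirit of such a framework; every callable here is a hypothetical stand-in, not FacTool's actual API:

```python
def check_factuality(generated_text, extract_claims, make_queries,
                     search_tool, verify):
    """For each extracted claim, gather tool-retrieved evidence
    and record a verdict."""
    report = []
    for claim in extract_claims(generated_text):
        evidence = [search_tool(q) for q in make_queries(claim)]
        report.append((claim, verify(claim, evidence)))
    return report
```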
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- Code-Switching Text Generation and Injection in Mandarin-English ASR [57.57570417273262]
We investigate text generation and injection to improve the performance of a widely used industrial streaming model, the Transformer-Transducer (T-T).
We first propose a strategy to generate code-switching text data and then investigate injecting the generated text into the T-T model, either explicitly via Text-To-Speech (TTS) conversion or implicitly by tying the speech and text latent spaces.
Experimental results on a T-T model trained with a dataset containing 1,800 hours of real Mandarin-English code-switched speech show that our approaches to injecting generated code-switching text significantly boost performance.
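As a toy illustration of code-switched text generation (explicitly not the paper's strategy), one can randomly swap lexicon words across languages:

```python
import random

# Toy bilingual lexicon; purely illustrative.
LEXICON = {"会议": "meeting", "项目": "project", "邮件": "email"}

def code_switch(tokens, p=0.3, rng=random):
    """Swap each in-lexicon Mandarin token for English with prob. p."""
    return [LEXICON[t] if t in LEXICON and rng.random() < p else t
            for t in tokens]

print(" ".join(code_switch(["明天", "的", "会议", "改", "到", "下午"])))
```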
arXiv Detail & Related papers (2023-03-20T09:13:27Z)
- Text to Image Generation with Semantic-Spatial Aware GAN [41.73685713621705]
A text-to-image generation (T2I) model aims to generate photo-realistic images that are semantically consistent with the text descriptions.
We propose a novel framework, the Semantic-Spatial Aware GAN, trained end-to-end so that the text encoder can exploit better text information.
arXiv Detail & Related papers (2021-04-01T15:48:01Z)
- TIME: Text and Image Mutual-Translation Adversarial Networks [55.1298552773457]
We propose Text and Image Mutual-Translation Adversarial Networks (TIME).
TIME learns a T2I generator G and an image captioning discriminator D under the Generative Adversarial Network framework.
In experiments, TIME achieves state-of-the-art (SOTA) performance on the CUB and MS-COCO datasets.
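A structural sketch of one training step in such a setup, assuming PyTorch-style modules and hypothetical loss callables (`adv_loss`, `cap_loss`); the objectives are generic GAN placeholders, not TIME's actual losses:

```python
def train_step(G, D, text, real_image, opt_g, opt_d, adv_loss, cap_loss):
    """G maps text -> image; D both discriminates real/fake
    and captions images back to text."""
    fake_image = G(text)

    # Discriminator: separate real from fake and caption the real image.
    d_loss = (adv_loss(D.discriminate(real_image), real=True)
              + adv_loss(D.discriminate(fake_image.detach()), real=False)
              + cap_loss(D.caption(real_image), text))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D and make its own image caption back to the text.
    g_loss = (adv_loss(D.discriminate(fake_image), real=True)
              + cap_loss(D.caption(fake_image), text))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```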
arXiv Detail & Related papers (2020-05-27T06:40:12Z)