Counterfactual Edits for Generative Evaluation
- URL: http://arxiv.org/abs/2303.01555v1
- Date: Thu, 2 Mar 2023 20:10:18 GMT
- Title: Counterfactual Edits for Generative Evaluation
- Authors: Maria Lymperaiou, Giorgos Filandrianos, Konstantinos Thomas, Giorgos
Stamou
- Abstract summary: We propose a framework for the evaluation and explanation of synthesized results based on concepts instead of pixels.
Our framework exploits knowledge-based counterfactual edits that underline which objects or attributes should be inserted, removed, or replaced in generated images.
Global explanations produced by accumulating local edits can also reveal which concepts a model cannot generate at all.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evaluation of generative models has been an underexplored field despite
the surge of generative architectures. Most recent models are evaluated with
rather outdated metrics that suffer from robustness issues and are unable to
assess further aspects of visual quality, such as the compositionality and logic
of synthesis. At the same time, the explainability of generative models remains
a limited, though important, research direction, with several current attempts
requiring access to the inner workings of generative models. Contrary to
prior literature, we view generative models as a black box, and we propose a
framework for the evaluation and explanation of synthesized results based on
concepts instead of pixels. Our framework exploits knowledge-based
counterfactual edits that underline which objects or attributes should be
inserted, removed, or replaced in generated images to approach their
ground-truth conditioning. Moreover, global explanations produced by
accumulating local edits can also reveal which concepts a model cannot generate at all. The
application of our framework on various models designed for the challenging
tasks of Story Visualization and Scene Synthesis verifies the power of our
approach in the model-agnostic setting.
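The abstract's core idea — comparing a generated image to its ground-truth conditioning at the level of concepts and deriving insert/remove/replace edits — can be illustrated with a toy sketch. This is not the paper's implementation: the names and the naive set-difference pairing below are hypothetical, whereas the paper's framework derives semantically informed, knowledge-based edits.

```python
# Toy sketch (hypothetical, not the paper's algorithm): given concept sets
# detected in a generated image and in its ground-truth conditioning, compute
# which concepts would need to be inserted or removed, greedily pairing some
# of them as "replace" edits. Concepts here are plain strings; the paper uses
# knowledge-based edits rather than this naive set difference.

def counterfactual_edits(generated, target):
    """Return (insert, remove, replace) edits turning `generated` into `target`."""
    generated, target = set(generated), set(target)
    missing = sorted(target - generated)   # concepts absent from the generation
    extra = sorted(generated - target)     # concepts that should not be there
    # Greedily pair one removal with one insertion as a single "replace" edit.
    replace = list(zip(extra, missing))
    n = len(replace)
    return missing[n:], extra[n:], replace

insert, remove, replace = counterfactual_edits(
    generated={"dog", "frisbee", "car"},
    target={"dog", "frisbee", "beach"},
)
print(insert, remove, replace)  # [] [] [('car', 'beach')]
```

Accumulating such local edit triples over a whole test set would then expose, in the spirit of the paper's global explanations, which concepts a model systematically fails to generate.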
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Visual Analytics for Generative Transformer Models [28.251218916955125]
We present a novel visual analytical framework to support the analysis of transformer-based generative networks.
Our framework is one of the first dedicated to supporting the analysis of transformer-based encoder-decoder models.
arXiv Detail & Related papers (2023-11-21T08:15:01Z)
- Controllable Topic-Focused Abstractive Summarization [57.8015120583044]
Controlled abstractive summarization focuses on producing condensed versions of a source article to cover specific aspects.
This paper presents a new Transformer-based architecture capable of producing topic-focused summaries.
arXiv Detail & Related papers (2023-11-12T03:51:38Z)
- GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment [26.785655363790312]
We introduce GenEval, an object-focused framework to evaluate compositional image properties.
We show that current object detection models can be leveraged to evaluate text-to-image models.
We then evaluate several open-source text-to-image models and analyze their relative generative capabilities.
arXiv Detail & Related papers (2023-10-17T18:20:03Z)
- Model Synthesis for Zero-Shot Model Attribution [26.835046772924258]
Generative models are shaping various fields such as art, design, and human-computer interaction.
We propose a model synthesis technique, which generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models.
Our experiments demonstrate that this fingerprint extractor, trained solely on synthetic models, achieves impressive zero-shot generalization on a wide range of real-world generative models.
arXiv Detail & Related papers (2023-07-29T13:00:42Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems to see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Operationalizing Specifications, In Addition to Test Sets for Evaluating Constrained Generative Models [17.914521288548844]
We argue that the scale of generative models could be exploited to raise the abstraction level at which evaluation itself is conducted.
Our recommendations are based on leveraging specifications as a powerful instrument to evaluate generation quality.
arXiv Detail & Related papers (2022-11-19T06:39:43Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning means to learn composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- CARE: Coherent Actionable Recourse based on Sound Counterfactual Explanations [0.0]
This paper introduces CARE, a modular explanation framework that addresses the model- and user-level desiderata.
As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model.
arXiv Detail & Related papers (2021-08-18T15:26:59Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.