An xAI Approach for Data-to-Text Processing with ASP
- URL: http://arxiv.org/abs/2308.15898v1
- Date: Wed, 30 Aug 2023 09:09:09 GMT
- Title: An xAI Approach for Data-to-Text Processing with ASP
- Authors: Alessandro Dal Palù (Università di Parma, Italy), Agostino Dovier
  (Università di Udine, Italy), Andrea Formisano (Università di Udine, Italy)
- Abstract summary: This paper presents a framework that is compliant with xAI requirements.
The text description is hierarchically organized, in a top-down structure where text is enriched with further details.
The structure of the generated natural language descriptions is also managed by logic rules.
- Score: 39.58317527488534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The generation of natural language text from data series has gained
renewed interest among AI research goals. Not surprisingly, the few proposals in
the state of the art are based on training some system to produce text that
describes, and is coherent with, the data provided as input. The main
challenges of such approaches are the proper identification of "what" to say
(the key descriptive elements to be addressed in the data) and "how" to say it:
the correspondence and accuracy between data and text, the avoidance of
contradictions and redundancy in the text, and the control of the amount of
synthesis. This paper presents a framework that is compliant with xAI
requirements. In particular, we model ASP/Python programs that enable explicit
control of accuracy errors and of the amount of synthesis, with provably
optimal solutions. The text description is hierarchically organized in a
top-down structure, where the text is enriched with further details according
to logic rules. The structure of the generated natural language descriptions is
also managed by logic rules.
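The core idea of the abstract, trading description accuracy against the amount of synthesis while keeping the text structure under explicit control, can be illustrated with a small sketch. The paper itself pairs ASP with Python; the following is a hypothetical Python-only analogue (the function names, the greedy breakpoint-pruning heuristic, and the sentence templates are our own illustration, not the authors' code):

```python
# Illustrative sketch (not the authors' implementation): approximate a data
# series with at most max_segments monotone trend segments, then emit a
# description with one clause per segment. Fewer segments = more synthesis,
# at the cost of approximation accuracy.

def segment(series, max_segments):
    """Split the series at trend reversals, then enforce the synthesis
    budget by greedily dropping the shallowest breakpoints."""
    breaks = [0]
    for i in range(1, len(series) - 1):
        prev_up = series[i] >= series[i - 1]
        next_up = series[i + 1] >= series[i]
        if prev_up != next_up:          # local extremum: trend reversal
            breaks.append(i)
    breaks.append(len(series) - 1)
    while len(breaks) - 1 > max_segments:
        # Depth of each interior breakpoint: distance of its value from the
        # midpoint of its neighbours; drop the least pronounced one.
        depths = [abs(series[b] - (series[breaks[j - 1]] + series[breaks[j + 1]]) / 2)
                  for j, b in enumerate(breaks[1:-1], start=1)]
        breaks.pop(1 + depths.index(min(depths)))
    return [(breaks[j], breaks[j + 1]) for j in range(len(breaks) - 1)]

def describe(series, max_segments=3):
    """One clause per segment, joined into a single top-level sentence."""
    parts = []
    for a, b in segment(series, max_segments):
        trend = ("rises" if series[b] > series[a]
                 else "falls" if series[b] < series[a] else "is flat")
        parts.append(f"{trend} from {series[a]} to {series[b]}")
    return "The series " + ", then ".join(parts) + "."

data = [1, 2, 4, 3, 2, 5, 7]
print(describe(data, max_segments=3))
# With max_segments=1 the same data collapses to a single coarse clause.
```

Here `max_segments` plays the role of the synthesis knob: lowering it yields a shorter, coarser description, while the residual distance between the segments and the raw data measures the accuracy error. In the paper this trade-off is resolved optimally by ASP rather than by a greedy heuristic, and further logic rules decide how each segment is refined with additional detail.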
Related papers
- HILL: Hierarchy-aware Information Lossless Contrastive Learning for Hierarchical Text Classification [12.930158528823524]
This paper investigates the feasibility of a contrastive learning scheme in which the semantic and syntactic information inherent in the input sample is adequately preserved.
A structure encoder takes the document embedding as input, extracts the essential syntactic information inherent in the label hierarchy, and injects the syntactic information into the text representation.
Experiments on three common datasets are conducted to verify the superiority of HILL.
arXiv Detail & Related papers (2024-03-26T01:29:17Z) - Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z) - TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z) - MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z) - The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation with Dataflow Transduction and Constrained Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z) - LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training [76.90793623822866]
We propose a unified framework for logical knowledge-conditioned text generation in the few-shot setting.
Our approach leverages self-training and samples pseudo logical forms based on content and structure consistency.
arXiv Detail & Related papers (2021-12-02T16:49:41Z) - AGGGEN: Ordering and Aggregating while Generating [12.845842212733695]
We present AGGGEN, a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems.
AGGGEN performs sentence planning at the same time as generating text by learning latent alignments between input representation and target text.
arXiv Detail & Related papers (2021-06-10T08:14:59Z) - Matching Text with Deep Mutual Information Estimation [0.0]
We present a neural approach for general-purpose text matching with deep mutual information estimation incorporated.
Our approach, Text matching with Deep Info Max (TIM), is integrated with a procedure of unsupervised learning of representations.
We evaluate our text matching approach on several tasks including natural language inference, paraphrase identification, and answer selection.
arXiv Detail & Related papers (2020-03-09T15:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.