Deciphering Compatibility Relationships with Textual Descriptions via
Extraction and Explanation
- URL: http://arxiv.org/abs/2312.11554v1
- Date: Sun, 17 Dec 2023 05:45:49 GMT
- Title: Deciphering Compatibility Relationships with Textual Descriptions via
Extraction and Explanation
- Authors: Yu Wang, Zexue He, Zhankui He, Hao Xu, Julian McAuley
- Abstract summary: Pair Fashion Explanation dataset is a unique resource that has been curated to illuminate compatibility relationships.
We propose an innovative two-stage pipeline model that leverages this dataset.
Our experiments showcase the model's potential in crafting descriptions that are knowledgeable, aligned with ground-truth matching correlations, and that produce understandable and informative descriptions.
- Score: 28.91572089512024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding and accurately explaining compatibility relationships between
fashion items is a challenging problem in the burgeoning domain of AI-driven
outfit recommendations. Present models, while making strides in this area,
still occasionally fall short, offering explanations that can be elementary and
repetitive. This work aims to address these shortcomings by introducing the
Pair Fashion Explanation (PFE) dataset, a unique resource that has been curated
to illuminate these compatibility relationships. Furthermore, we propose an
innovative two-stage pipeline model that leverages this dataset; fine-tuning
on PFE allows the model to generate explanations that convey the
compatibility relationships between items. Our experiments showcase the
model's potential in crafting explanations that are knowledgeable, aligned
with ground-truth matching correlations, and both understandable and
informative, as assessed by automatic metrics and human
evaluation. Our code and data are released at
https://github.com/wangyu-ustc/PairFashionExplanation
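The two-stage idea (first identify attributes shared by a pair of items, then generate an explanation conditioned on them) can be illustrated with a toy sketch. The attribute vocabulary, keyword matching, and template below are purely hypothetical stand-ins, not the authors' fine-tuned model:

```python
# Toy sketch of a two-stage explanation pipeline (illustrative only):
# stage 1 extracts candidate attributes from each item description,
# stage 2 renders an explanation conditioned on the shared attributes.

# Hypothetical attribute vocabulary for illustration only.
ATTRIBUTES = {"denim", "leather", "floral", "slim-fit", "pastel"}

def extract_attributes(description: str) -> set:
    """Stage 1: keyword-match attributes in an item description."""
    words = set(description.lower().replace(",", " ").split())
    return ATTRIBUTES & words

def explain_pair(desc_a: str, desc_b: str) -> str:
    """Stage 2: generate a (templated) compatibility explanation."""
    shared = extract_attributes(desc_a) & extract_attributes(desc_b)
    if not shared:
        return "These items pair through contrast rather than shared attributes."
    attrs = ", ".join(sorted(shared))
    return f"These items match because both feature {attrs}."

print(explain_pair("Pastel floral blouse", "pastel midi skirt"))
# → These items match because both feature pastel.
```

In the actual paper the second stage is a fine-tuned language model rather than a template, but the division of labor is the same: extraction first, explanation second.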
Related papers
- Relation Rectification in Diffusion Model [64.84686527988809]
We introduce a novel task termed Relation Rectification, aiming to refine the model to accurately represent a given relationship it initially fails to generate.
We propose an innovative solution utilizing a Heterogeneous Graph Convolutional Network (HGCN).
The lightweight HGCN adjusts the text embeddings generated by the text encoder, ensuring the accurate reflection of the textual relation in the embedding space.
arXiv Detail & Related papers (2024-03-29T15:54:36Z)
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
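The "indirect" entity-embedding approach described above typically marks the head and tail entity spans in the input before encoding, so the LM's representations at those positions can be pooled into entity embeddings. A minimal sketch of that marker-insertion preprocessing (the `[E1]`/`[E2]` tokens are a common convention, not taken from the paper):

```python
# Sketch of entity-marker preprocessing commonly used when fine-tuning
# LMs for relation extraction; marker token names are illustrative.

def mark_entities(tokens, head, tail):
    """Wrap the head and tail entity spans (end-exclusive index pairs)
    in marker tokens so the encoder can locate them."""
    h_start, h_end = head
    t_start, t_end = tail
    out = []
    for i, tok in enumerate(tokens):
        if i == h_start:
            out.append("[E1]")
        if i == t_start:
            out.append("[E2]")
        out.append(tok)
        if i == h_end - 1:
            out.append("[/E1]")
        if i == t_end - 1:
            out.append("[/E2]")
    return out

tokens = ["Marie", "Curie", "was", "born", "in", "Warsaw"]
print(" ".join(mark_entities(tokens, head=(0, 2), tail=(5, 6))))
# → [E1] Marie Curie [/E1] was born in [E2] Warsaw [/E2]
```

The paper's hypothesis is that classifying from a representation of the relation itself, rather than from these two pooled entity embeddings, is more direct.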
arXiv Detail & Related papers (2023-12-18T09:58:19Z)
- Realistic Counterfactual Explanations by Learned Relations [0.0]
We propose a novel approach to realistic counterfactual explanations that preserve relationships between data attributes.
The model directly learns the relationships by a variational auto-encoder without domain knowledge and then learns to disturb the latent space accordingly.
arXiv Detail & Related papers (2022-02-15T12:33:51Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Open Relation Modeling: Learning to Define Relations between Entities [24.04238065663009]
We propose to teach machines to generate definition-like relation descriptions by letting them learn from definitions of entities.
Specifically, we fine-tune Pre-trained Language Models (PLMs) to produce definitions conditioned on extracted entity pairs.
We show that PLMs can select interpretable and informative reasoning paths by confidence estimation, and the selected path can guide PLMs to generate better relation descriptions.
arXiv Detail & Related papers (2021-08-20T16:03:23Z)
- Attribute-aware Explainable Complementary Clothing Recommendation [37.30129304097086]
This work aims to tackle the explainability challenge in fashion recommendation tasks by proposing a novel Attribute-aware Fashion Recommender (AFRec).
AFRec assesses outfit compatibility by explicitly leveraging attribute-level representations extracted from each item's visual features.
The attributes serve as the bridge between two fashion items, where we quantify the affinity of a pair of items through the learned compatibility between their attributes.
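Scoring a pair of items through the compatibility of their attribute representations can be sketched as aggregating pairwise similarity scores. The vectors and the averaging rule below are illustrative placeholders, not AFRec's learned compatibility function:

```python
# Minimal sketch of attribute-level affinity scoring: each item exposes
# a list of attribute vectors, and pair affinity aggregates pairwise
# dot products. Vectors and aggregation are illustrative only.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def pair_affinity(attrs_a, attrs_b):
    """Average compatibility over all attribute pairs of two items."""
    scores = [dot(u, v) for u in attrs_a for v in attrs_b]
    return sum(scores) / len(scores)

top = [[1.0, 0.0], [0.5, 0.5]]   # e.g. colour and texture attributes
skirt = [[1.0, 0.0]]
print(pair_affinity(top, skirt))  # → 0.75
```

Because each pairwise score is tied to named attributes, the same scores that drive the recommendation can be surfaced as an explanation, which is the point of the attribute "bridge".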
arXiv Detail & Related papers (2021-07-04T14:56:07Z)
- Link Prediction on N-ary Relational Data Based on Relatedness Evaluation [61.61555159755858]
We propose a method called NaLP to conduct link prediction on n-ary relational data.
We represent each n-ary relational fact as a set of its role and role-value pairs.
Experimental results validate the effectiveness and merits of the proposed methods.
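Representing an n-ary fact as an unordered set of role and role-value pairs, as in NaLP's formulation, can be sketched directly as a data structure. The concrete fact below is an illustrative example, not drawn from the paper's datasets:

```python
# Sketch of an n-ary relational fact as a set of role:value pairs;
# the example fact is illustrative only.

from typing import FrozenSet, Tuple

Fact = FrozenSet[Tuple[str, str]]

def make_fact(**roles: str) -> Fact:
    """Build an order-independent n-ary fact from role=value arguments."""
    return frozenset(roles.items())

# "Marie Curie won the Nobel Prize in Physics in 1903" as a 4-ary fact.
fact = make_fact(person="Marie Curie", award="Nobel Prize",
                 field="Physics", year="1903")

# Role order does not matter: the same pairs yield the same fact.
same = make_fact(year="1903", person="Marie Curie",
                 award="Nobel Prize", field="Physics")
print(fact == same)  # → True
```

Treating the fact as a set rather than a fixed-arity tuple is what lets a single model handle relations of varying arity.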
arXiv Detail & Related papers (2021-04-21T09:06:54Z)
- Learning to Decouple Relations: Few-Shot Relation Classification with Entity-Guided Attention and Confusion-Aware Training [49.9995628166064]
We propose CTEG, a model equipped with two mechanisms to learn to decouple easily-confused relations.
On the one hand, an EGA mechanism is introduced to guide the attention to filter out information causing confusion.
On the other hand, a Confusion-Aware Training (CAT) method is proposed to explicitly learn to distinguish relations.
arXiv Detail & Related papers (2020-10-21T11:07:53Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
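The token-level uncertainty studied here is the Shannon entropy of the model's next-token probability distribution. A self-contained sketch (the two distributions are made up for illustration):

```python
# Shannon entropy of a next-token distribution, the token-level
# uncertainty measure used to analyze summarization models.
# The probability values below are made up for illustration.

import math

def entropy(probs):
    """H(p) = -sum p_i * log(p_i), in nats; zero-probability terms contribute 0."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

peaked = [0.97, 0.01, 0.01, 0.01]   # confident prediction → low entropy
flat = [0.25, 0.25, 0.25, 0.25]     # uncertain prediction → high entropy
print(entropy(peaked) < entropy(flat))  # → True
```

A peaked distribution means the model is effectively copying or committing to one continuation, while a flat one signals free-form generation, which is the distinction the paper exploits.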
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
- HittER: Hierarchical Transformers for Knowledge Graph Embeddings [85.93509934018499]
We propose HittER to learn representations of entities and relations in a complex knowledge graph.
Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets.
We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
arXiv Detail & Related papers (2020-08-28T18:58:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.