PROMINET: Prototype-based Multi-View Network for Interpretable Email
Response Prediction
- URL: http://arxiv.org/abs/2310.16753v1
- Date: Wed, 25 Oct 2023 16:39:00 GMT
- Title: PROMINET: Prototype-based Multi-View Network for Interpretable Email
Response Prediction
- Authors: Yuqing Wang and Prashanth Vijayaraghavan and Ehsan Degan
- Abstract summary: This study proposes a Prototype-based Multi-view Network (PROMINET) that incorporates semantic and structural information from email data.
The model maps learned semantic and structural exemplars to observed samples in the training data at different levels of granularity, such as document, sentence, or phrase.
The learned prototypes also show potential for generating suggestions to enhance email text editing and improve the likelihood of effective email responses.
- Score: 12.727146945870809
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Email is a widely used tool for business communication, and email marketing
has emerged as a cost-effective strategy for enterprises. While previous
studies have examined factors affecting email marketing performance, limited
research has focused on understanding email response behavior by considering
email content and metadata. This study proposes a Prototype-based Multi-view
Network (PROMINET) that incorporates semantic and structural information from
email data. By utilizing prototype learning, the PROMINET model generates
latent exemplars, enabling interpretable email response prediction. The model
maps learned semantic and structural exemplars to observed samples in the
training data at different levels of granularity, such as document, sentence,
or phrase. The approach is evaluated on two real-world email datasets: the
Enron corpus and an in-house Email Marketing corpus. Experimental results
demonstrate that the PROMINET model outperforms baseline models, achieving a
~3% improvement in F1 score on both datasets. Additionally, the model provides
interpretability through prototypes at different granularity levels while
maintaining comparable performance to non-interpretable models. The learned
prototypes also show potential for generating suggestions to enhance email text
editing and improve the likelihood of effective email responses. This research
contributes to enhancing sender-receiver communication and customer engagement
in email interactions.
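The prototype mechanism the abstract describes — scoring an input by its similarity to learned latent exemplars, then making each prototype interpretable by projecting it onto its nearest observed training sample — can be sketched as follows. This is a minimal illustration of generic prototype-based classification, not the authors' PROMINET architecture; the function names, the exp-of-negative-squared-distance similarity, and the linear class weights are all assumptions for the sketch.

```python
import numpy as np

def prototype_scores(embedding, prototypes):
    """Similarity of one input embedding to each learned prototype,
    here exp(-||x - p||^2), a common choice in prototype networks."""
    d2 = np.sum((prototypes - embedding) ** 2, axis=1)
    return np.exp(-d2)

def predict_response(embedding, prototypes, class_weights):
    """Combine prototype similarities linearly per class and pick the
    highest-scoring class; the similarity vector itself is the
    interpretable evidence for the prediction."""
    sims = prototype_scores(embedding, prototypes)
    logits = class_weights @ sims
    return int(np.argmax(logits)), sims

def nearest_training_example(prototype, train_embeddings):
    """Project a latent prototype onto its closest observed training
    sample (e.g. a document, sentence, or phrase embedding), which is
    what lets a prototype be shown to a human as a concrete exemplar."""
    d2 = np.sum((train_embeddings - prototype) ** 2, axis=1)
    return int(np.argmin(d2))
```

In this toy form, a prediction can be explained by listing the prototypes with the highest similarity and showing the training snippets they project to.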
Related papers
- Few-shot learning for automated content analysis: Efficient coding of
arguments and claims in the debate on arms deliveries to Ukraine [0.9576975587953563]
Pre-trained language models (PLMs) based on transformer neural networks offer great opportunities to improve automatic content analysis in communication science.
Three factors have so far impeded widespread adoption of these methods in the applied disciplines: the dominance of English-language models in NLP research, the computing resources required, and the effort needed to produce training data for fine-tuning PLMs.
We test our approach on a realistic use case from communication science to automatically detect claims and arguments together with their stance in the German news debate on arms deliveries to Ukraine.
arXiv Detail & Related papers (2023-12-28T11:39:08Z) - RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight cross-modal module.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z) - Pre-trained Language Models for Keyphrase Generation: A Thorough
Empirical Study [76.52997424694767]
We present an in-depth empirical study of keyphrase extraction and keyphrase generation using pre-trained language models.
We show that PLMs have competitive high-resource performance and state-of-the-art low-resource performance.
Further results show that in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models.
arXiv Detail & Related papers (2022-12-20T13:20:21Z) - Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product
Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M datasets, and define two real practical instance-level retrieval tasks.
We then train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z) - Email Spam Detection Using Hierarchical Attention Hybrid Deep Learning
Method [0.0]
This article proposes a novel technique for email spam detection based on a combination of convolutional neural networks, recurrent units, and attention gated mechanisms.
The proposed technique is compared against state-of-the-art models and shown to outperform them.
arXiv Detail & Related papers (2022-04-15T09:02:36Z) - A pipeline and comparative study of 12 machine learning models for text
classification [0.0]
Text-based communication is highly favoured as a communication method, especially in business environments.
Many machine learning methods for text classification have been proposed and incorporated into the services of most email providers.
However, optimising text classification algorithms and finding the right trade-off in their aggressiveness remains a major research problem.
arXiv Detail & Related papers (2022-04-04T23:51:22Z) - Modelling Direct Messaging Networks with Multiple Recipients for Cyber
Deception [13.447335354083666]
We propose a framework to automate the generation of email and instant messaging-style group communications at scale.
We address two key aspects of simulating this type of system: modelling when and with whom participants communicate, and generating topical, multi-party text to populate simulated conversation threads.
We demonstrate the use of fine-tuned, pre-trained language models to generate convincing multi-party conversation threads.
arXiv Detail & Related papers (2021-11-21T10:18:48Z) - LDNet: Unified Listener Dependent Modeling in MOS Prediction for
Synthetic Speech [67.88748572167309]
We present LDNet, a unified framework for mean opinion score (MOS) prediction.
We propose two inference methods that provide more stable results and efficient computation.
arXiv Detail & Related papers (2021-10-18T08:52:31Z) - Finding top performers through email patterns analysis [0.0]
This study combines social network and semantic analysis to identify top performers based on email communication.
Top performers tend to assume central network positions and have high responsiveness to emails.
In email contents, top performers use more positive and complex language, with low emotionality, but rich in influential words that are probably reused by co-workers.
arXiv Detail & Related papers (2021-05-27T09:45:02Z) - Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z) - Learning with Weak Supervision for Email Intent Detection [56.71599262462638]
We propose to leverage user actions as a source of weak supervision to detect intents in emails.
We develop an end-to-end robust deep neural network model for email intent identification.
arXiv Detail & Related papers (2020-05-26T23:41:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.