Semi-Supervised Learning Approach to Discover Enterprise User Insights from Feedback and Support
- URL: http://arxiv.org/abs/2007.09303v3
- Date: Wed, 22 Jul 2020 00:20:23 GMT
- Title: Semi-Supervised Learning Approach to Discover Enterprise User Insights from Feedback and Support
- Authors: Xin Deng, Ross Smith, Genevieve Quintin
- Abstract summary: We propose and develop an innovative Semi-Supervised Learning approach that utilizes Deep Learning and Topic Modeling.
This approach combines a supervised BERT-based multi-class classification algorithm with a novel unsupervised Probabilistic and Semantic Hybrid Topic Inference (PSHTI) model.
Our system maps the top words to self-help issues by using product domain knowledge obtained through web crawling.
- Score: 9.66491980663996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the evolution of the cloud and a customer-centric culture, enterprises inherently
accumulate huge repositories of textual reviews, feedback, and support data. This has driven them
to investigate engagement patterns, user network analysis, topic detection, and related questions.
However, substantial manual work is still required before this data yields actionable outcomes.
In this paper, we propose and develop an innovative Semi-Supervised Learning approach that utilizes
Deep Learning and Topic Modeling to better understand the user voice. The approach combines a
supervised BERT-based multi-class classification algorithm with a novel unsupervised Probabilistic
and Semantic Hybrid Topic Inference (PSHTI) model, aiming to automate the identification of the
main topics or areas as well as the sub-topics in textual feedback and support data. There are
three major breakthroughs: 1. With the advancement of deep learning there have been tremendous
innovations in NLP, yet traditional topic modeling, as one of its applications, lags behind this
tide. Methodologically, we adopt transfer learning to fine-tune a BERT-based multi-class
classification system that categorizes the main topics, and then use the novel PSHTI model to
infer the sub-topics under each predicted main topic (illustrative sketches of these steps follow
the abstract). 2. Traditional unsupervised topic models and clustering methods struggle to
generate meaningful topic labels automatically, whereas our system maps the top words to self-help
issues by using product domain knowledge obtained through web crawling. 3. This work provides a
prominent showcase of state-of-the-art methodology in real production, helping to discover user
insights and drive business investment priorities.
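The abstract describes but does not include code for the supervised step, so here is a minimal sketch of fine-tuning a BERT-based multi-class classifier on labeled feedback to predict a main topic. It assumes the Hugging Face transformers library and PyTorch; the MAIN_TOPICS label set, hyperparameters, and training data are hypothetical placeholders rather than the paper's actual taxonomy or corpus.

```python
# Hedged sketch: supervised main-topic classification via BERT fine-tuning.
# Labels, hyperparameters, and data are illustrative assumptions only.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MAIN_TOPICS = ["sign-in", "performance", "billing", "feature-request"]  # hypothetical

class FeedbackDataset(Dataset):
    """Tokenizes (text, label) pairs so the default collate can batch them."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def fine_tune(texts, labels, epochs=3, lr=2e-5, batch_size=16):
    """Transfer learning: start from pre-trained BERT, train a classification head."""
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(MAIN_TOPICS))
    loader = DataLoader(FeedbackDataset(texts, labels, tokenizer),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            loss = model(**batch).loss   # cross-entropy over MAIN_TOPICS
            loss.backward()
            optimizer.step()
    return tokenizer, model
```

At inference time the predicted main topic is the argmax over the logits, and the feedback items routed to each main topic form the corpus for the unsupervised sub-topic step sketched next.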
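The PSHTI model itself is not detailed in this abstract, so the following only approximates its probabilistic half with scikit-learn's LDA: within the documents that the classifier routed to one main topic, it discovers sub-topics and returns their top words. The function name, parameters, and the choice of LDA are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: sub-topic discovery inside one predicted main topic.
# LDA stands in for the (unpublished) probabilistic part of PSHTI.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def sub_topic_top_words(docs, n_sub_topics=5, n_top_words=10):
    """docs: feedback texts already assigned to a single main topic."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    counts = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_sub_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    top_words = []
    for weights in lda.components_:                  # one row per sub-topic
        top_idx = weights.argsort()[::-1][:n_top_words]
        top_words.append([vocab[i] for i in top_idx])
    return top_words
```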
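For the labeling breakthrough, one plausible reading of "mapping the top words to the self-help issues ... through web-crawling" is to match each sub-topic's top words against an inventory of self-help article titles crawled from the product's support site. The sketch below does that with TF-IDF cosine similarity; the titles and the similarity measure are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch: name a sub-topic after the closest crawled self-help issue.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SELF_HELP_TITLES = [                      # hypothetical crawled titles
    "Fix problems signing in to your account",
    "App is slow or freezes during meetings",
    "Update your payment method",
]

def label_sub_topic(top_words, titles=SELF_HELP_TITLES):
    """Return the self-help title most similar to the sub-topic's top words."""
    query = " ".join(top_words)
    matrix = TfidfVectorizer(stop_words="english").fit_transform(titles + [query])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return titles[sims.argmax()]
```

For example, label_sub_topic(["password", "login", "account", "reset"]) would return the sign-in title above, giving that sub-topic a human-readable label.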
Related papers
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding, which targets compressing and reconstructing data, is considered alongside intelligence.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- A look under the hood of the Interactive Deep Learning Enterprise (No-IDLE) [2.7719338074999538]
No-IDLE aims to increase the reach of interactive deep learning solutions for non-experts in machine learning.
One of the key innovations described in this technical report is a methodology for interactive machine learning combined with multimodal interaction.
arXiv Detail & Related papers (2024-06-27T10:01:56Z)
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey [2.769238399659845]
Federated learning (FL) offers a compelling framework for training large language models (LLMs)
We focus on machine unlearning, a crucial aspect for complying with privacy regulations like the Right to be Forgotten.
We explore various strategies that enable effective unlearning, such as perturbation techniques, model decomposition, and incremental learning.
arXiv Detail & Related papers (2024-06-14T08:40:58Z)
- Emerging Synergies Between Large Language Models and Machine Learning in Ecommerce Recommendations [19.405233437533713]
Large language models (LLMs) have superior capabilities in basic tasks of language understanding and generation.
We introduce a representative approach to learning user and item representations using an LLM as a feature encoder.
We then review the latest advances in LLM techniques for collaborative filtering enhanced recommendation systems.
arXiv Detail & Related papers (2024-03-05T08:31:00Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI)
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction [104.29108668347727]
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is conveyed in a pipeline that comprises novel iterative zero-shot and external knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts.
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review [5.329514340780243]
Large language models (LLMs) have the potential to automate the laborious process of generating and analysing textual content.
There are concerns regarding the practicality and ethicality of these innovations.
We conducted a systematic scoping review of 118 peer-reviewed papers published since 2017 to pinpoint the current state of research.
arXiv Detail & Related papers (2023-03-17T18:14:46Z)
- Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning is not as intensive as in other research directions.
By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z) - Knowledge-Aware Bayesian Deep Topic Model [50.58975785318575]
We propose a Bayesian generative model for incorporating prior domain knowledge into hierarchical topic modeling.
Our proposed model efficiently integrates the prior knowledge and improves both hierarchical topic discovery and document representation.
arXiv Detail & Related papers (2022-09-20T09:16:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.