An Efficient Recommendation System in E-commerce using Passer learning
optimization based on Bi-LSTM
- URL: http://arxiv.org/abs/2308.00137v2
- Date: Wed, 2 Aug 2023 07:34:05 GMT
- Title: An Efficient Recommendation System in E-commerce using Passer learning
optimization based on Bi-LSTM
- Authors: Hemn Barzan Abdalla, Awder Ahmed, Bahtiyar Mehmed, Mehdi Gheisari,
Maryam Cheraghy
- Abstract summary: This research develops an e-commerce recommendation system using passer learning optimization based on Bi-LSTM.
Compared to earlier methods, the proposed PL-optimized Bi-LSTM achieved f1-score, MSE, precision, and recall values of 88.58%, 1.24%, 92.69%, and 92.69% for dataset 1; 88.46%, 0.48%, 92.43%, and 93.47% for dataset 2; and 92.51%, 1.58%, 91.90%, and 90.76% for dataset 3.
- Score: 0.8399688944263843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommendation system services have become crucial for users to access
personalized goods or services as the global e-commerce market expands. They
can increase business sales growth and lower the cost of user information
exploration. Recent years have seen a significant increase in researchers actively
using user reviews to address standard recommender system research issues. Reviews may,
however, contain information that does not help consumers decide what to buy, such as
advertising or fictitious or fake reviews. Using such reviews to offer recommendation
services may reduce the effectiveness of those recommendations. In this research, an
e-commerce recommendation system is developed using passer learning optimization based
on Bi-LSTM (PL-optimized Bi-LSTM) to address that issue. Data is first obtained from the
product recommendation dataset and pre-processed to remove any missing or inconsistent
values. Feature extraction is then performed using TF-IDF features and graph
embedding-based features. Before the features are submitted to the Bi-LSTM classifier
for analysis, they are integrated using a feature concatenation approach so that they
share the same dimensions. The collaborative Bi-LSTM method employs these features to
determine whether a product should be recommended. The PL optimization approach, which
efficiently adjusts the classifier's parameters, is the basis of this research's
contribution; the resulting output is evaluated in terms of f1-score, MSE, precision,
and recall. Compared to earlier methods, the proposed PL-optimized Bi-LSTM achieved
f1-score, MSE, precision, and recall values of 88.58%, 1.24%, 92.69%, and 92.69% for
dataset 1; 88.46%, 0.48%, 92.43%, and 93.47% for dataset 2; and 92.51%, 1.58%, 91.90%,
and 90.76% for dataset 3.
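A minimal, illustrative sketch of the described pipeline is given below, assuming scikit-learn for the TF-IDF features and metrics and Keras for the Bi-LSTM. The passer learning (PL) optimizer and the graph-embedding extractor are not publicly specified, so a standard optimizer and random placeholder features stand in; all names, shapes, and toy data here are assumptions for illustration, not the authors' implementation.
```python
# Hypothetical sketch: TF-IDF + graph-embedding features, concatenated and
# fed to a Bi-LSTM classifier, then scored with f1-score, MSE, precision, recall.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score, mean_squared_error, precision_score, recall_score
from tensorflow.keras import layers, models

reviews = ["great product, fast shipping", "fake review, pure advertising"]  # toy data
labels = np.array([1, 0])  # 1 = recommend, 0 = do not recommend

# TF-IDF features from the review text.
x_tfidf = TfidfVectorizer(max_features=64).fit_transform(reviews).toarray()

# Placeholder for graph-embedding features (assumed 32-dimensional).
x_graph = np.random.rand(len(reviews), 32)

# Feature concatenation, reshaped to a length-1 sequence for the LSTM input.
x = np.concatenate([x_tfidf, x_graph], axis=1)[:, None, :]

model = models.Sequential([
    layers.Input(shape=(1, x.shape[-1])),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])
# "adam" is a stand-in; the paper tunes the classifier's parameters with PL optimization.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, labels, epochs=5, verbose=0)

# The four metrics reported in the paper.
probs = model.predict(x, verbose=0).ravel()
preds = (probs > 0.5).astype(int)
print(f1_score(labels, preds, zero_division=0), mean_squared_error(labels, probs),
      precision_score(labels, preds, zero_division=0),
      recall_score(labels, preds, zero_division=0))
```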
Related papers
- Review, Refine, Repeat: Understanding Iterative Decoding of AI Agents with Dynamic Evaluation and Selection [71.92083784393418]
Inference-time methods such as Best-of-N (BON) sampling offer a simple yet effective alternative to improve performance.
We propose Iterative Agent Decoding (IAD) which combines iterative refinement with dynamic candidate evaluation and selection guided by a verifier.
arXiv Detail & Related papers (2025-04-02T17:40:47Z)
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality.
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The recommender system (RSRS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Evaluation results demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- Leveraging Large Language Models to Enhance Personalized Recommendations in E-commerce [6.660249346977347]
This study explores the application of large language models (LLMs) in personalized recommendation systems for e-commerce.
LLMs effectively capture the implicit needs of users through deep semantic understanding of user comments and product description data.
The study shows that LLMs have significant advantages in the field of personalized recommendation and can improve user experience and promote platform sales growth.
arXiv Detail & Related papers (2024-10-02T13:59:56Z)
- Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback [110.16220825629749]
Learning from preference feedback has emerged as an essential step for improving the generation quality and performance of modern language models.
In this work, we identify four core aspects of preference-based learning: preference data, learning algorithm, reward model, and policy training prompts.
Our findings indicate that all aspects are important for performance, with better preference data leading to the largest improvements.
arXiv Detail & Related papers (2024-06-13T16:17:21Z)
- Aligning Large Language Models with Self-generated Preference Data [72.99676237703099]
We propose a new framework that boosts the alignment of large language models (LLMs) with human preferences.
Our key idea is leveraging the human prior knowledge within the small (seed) data.
We introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within generated preference data.
arXiv Detail & Related papers (2024-06-06T18:01:02Z)
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
arXiv Detail & Related papers (2024-05-01T11:10:24Z)
- Intelligent Classification and Personalized Recommendation of E-commerce Products Based on Machine Learning [2.152073242131379]
The paper explores the significance and application of personalized recommendation systems across e-commerce, content information, and media domains.
It outlines challenges confronting personalized recommendation systems in e-commerce, including data privacy, algorithmic bias, scalability, and the cold start problem.
The paper outlines a personalized recommendation system leveraging the BERT model and nearest neighbor algorithm, specifically tailored to address the exigencies of the eBay e-commerce platform.
arXiv Detail & Related papers (2024-03-28T12:02:45Z)
- Multi-level Product Category Prediction through Text Classification [0.0]
This article investigates applying advanced machine learning models, specifically LSTM and BERT, for text classification to predict multiple categories in the retail sector.
The study demonstrates how applying data augmentation techniques and the focal loss function can significantly enhance accuracy in classifying products into multiple categories using a robust Brazilian retail dataset.
arXiv Detail & Related papers (2024-03-03T23:10:36Z)
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation [56.13803674092712]
We propose an industrial-friendly, expert-aligned and diversity-preserved instruction data selection method: Clustering and Ranking (CaR).
CaR employs a two-step process: first, it ranks instruction pairs using a high-accuracy (84.25%) scoring model aligned with expert preferences; second, it preserves dataset diversity through clustering.
In our experiment, CaR efficiently selected a mere 1.96% of Alpaca's IT data, yet the resulting AlpaCaR model surpassed Alpaca's performance by an average of 32.1% in GPT-4 evaluations.
arXiv Detail & Related papers (2024-02-28T09:27:29Z)
- Learning Fair Ranking Policies via Differentiable Optimization of
Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- SEOpinion: Summarization and Exploration Opinion of E-Commerce Websites [0.0]
This paper proposes a methodology coined SEOpinion (Summarization and Exploration of Opinions).
It provides a summary of the product aspects and spots opinions regarding them, combining template information with customer reviews in two main phases.
To test the feasibility of using Deep Learning-based BERT techniques with our approach, we have created a corpus by gathering information from the top five EC websites for laptops.
arXiv Detail & Related papers (2023-12-12T15:45:58Z)
- Democratizing LLMs: An Exploration of Cost-Performance Trade-offs in
Self-Refined Open-Source Models [53.859446823312126]
SoTA open source models of varying sizes from 7B - 65B, on average, improve 8.2% from their baseline performance.
Strikingly, even models with extremely small memory footprints, such as Vicuna-7B, show an 11.74% improvement overall and up to a 25.39% improvement in high-creativity, open-ended tasks.
arXiv Detail & Related papers (2023-10-11T15:56:00Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data, and has been popular to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- ItemSage: Learning Product Embeddings for Shopping Recommendations at
Pinterest [60.841761065439414]
At Pinterest, we build a single set of product embeddings called ItemSage to provide relevant recommendations in all shopping use cases.
This approach has led to significant improvements in engagement and conversion metrics, while reducing both infrastructure and maintenance cost.
arXiv Detail & Related papers (2022-05-24T02:28:58Z)
- CPFair: Personalized Consumer and Producer Fairness Re-ranking for
Recommender Systems [5.145741425164946]
We present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer-side.
We demonstrate through large-scale experiments on 8 datasets that our proposed method is capable of improving both consumer and producer fairness without reducing overall recommendation quality.
arXiv Detail & Related papers (2022-04-17T20:38:02Z)
- Exploring Customer Price Preference and Product Profit Role in
Recommender Systems [0.4724825031148411]
We show the impact of manipulating profit awareness of a recommender system.
We propose an adjustment of a predicted ranking for score-based recommender systems.
In the experiments, we show the ability to improve both the precision and the generated recommendations' profit.
arXiv Detail & Related papers (2022-03-13T12:08:06Z)
- Deep Learning-based Online Alternative Product Recommendations at Scale [0.2278231643598956]
We use both textual product information (e.g. product titles and descriptions) and customer behavior data to recommend alternative products.
Our results show that the coverage of alternative products, as well as recall and precision, is significantly improved in offline evaluations.
arXiv Detail & Related papers (2021-04-15T16:27:45Z)
- Personalized Embedding-based e-Commerce Recommendations at eBay [3.1236273633321416]
We present an approach for generating personalized item recommendations in an e-commerce marketplace by learning to embed items and users in the same vector space.
Data ablation is incorporated into the offline model training process to improve the robustness of the production system.
arXiv Detail & Related papers (2021-02-11T17:58:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.